Linux Commands

Created 1996-08-03, revised 2021-06-10, corrections/complaints contact Hugh Sparks

I use these notes to remember various Linux commands and procedures. When I learn something new, a section gets appended to the document. I appreciate hearing about errors.

Other cheat sheets

Making a Linux cheat sheet is a popular activity. The following link will give you an up-to-date list:

Search for Linux cheat sheets

Some usage hints

If you know where you want to go, you can enter the section name as part of the browser URL. The browser will adjust the view to show the proper part of the document. Example:

The anchor name, in this case "toc_audio", is created by prepending "toc_" onto the section name in lower case. Use hyphens where spaces appear in the name. Only top level sections have anchors.
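The renaming rule can be sketched in shell; the section name used here ("Bash Arithmetic") is just a hypothetical example:

```shell
# Build a "toc_" anchor from a section name:
# lower-case the letters and turn spaces into hyphens.
section="Bash Arithmetic"
anchor="toc_$(echo "$section" | tr 'A-Z ' 'a-z-')"
echo "$anchor"
```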


Apache

Typical changes to the default httpd.conf file


Test the validity of the httpd.conf file

apachectl configtest

Talk to a web server with telnet

telnet <serverName> 80
GET /whatever.html HTTP/1.1
Host: <serverName>
<enter another LF>


A container is a group of directives that apply to requests that have a common path. The container is said to match the path. The path may be an actual file system path on the server or an abstract path that appears in a URL after the server name.

The directives in a container commonly apply access and authentication rules or invoke other programs to process the request.

The following sections describe three basic containers: VirtualHost, Directory, and Location.


Virtual host containers

Virtual hosts allow one physical server to appear as many others, each with its own DNS name.

Add this global directive:

NameVirtualHost *

Each virtual host has a container for additional directives:

<VirtualHost *>
    ServerName myHost
    DocumentRoot /var/www/html/myDirectory
</VirtualHost>

You must have a CNAME entry for myHost in your zone file or a definition for myHost in your /etc/hosts file.


Directory containers

A Directory container should always be used when dealing with documents in the file system:

<Directory /home/user/www>
    Order deny,allow
    Deny from 192.168.1.18
</Directory>


Location containers

Location sections match paths in URL space. They may trigger webapps or refer to other resources that have nothing to do with the file system. Location directives are applied last and override the effect of overlapping Directory containers. It is unwise to use Location containers to match file system paths.

<Location /jeuler>
    ProxyPass         ajp://localhost:8009/jeuler/
    ProxyPassReverse  ajp://localhost:8009/jeuler/
</Location>

Configure Apache for XML

Some browsers won't display xsl formatted xml documents unless the associated xsl file is served with the appropriate mime type (text/xml or application/xml). This can be configured by adding these directives:

AddType application/xml .xml
AddType application/xml .xsl

Apache security

The server will accept or reject requests based on rules. The rules can be categorized by what part of the request they examine:


Access rules discriminate based on the IP number of the client.


Authentication rules discriminate based on a username and
password supplied by the client.


Once a client has passed the access and authentication
barriers, authorization rules determine what the client
is allowed to do.


Limit rules apply different authorization rules depending
on the request type.

A common cause of confusion is that Apache sets up defaults for access and authentication as well as rules that control arbitration between them. If these rules are not explicitly stated in your sections, it's impossible to understand how the directives work or why things don't work the way you expect.

To diagnose access problems:

tail -f /var/log/httpd/error_log

Basic authentication

Basic authentication sends clear text passwords over the web, so it's not safe to use by itself. It can be used securely over SSL (https://) connections. There is a trend among "user friendly" operating systems to bully users into avoiding basic authentication over insecure channels.

Authentication directives are used inside Directory or Location sections:


<Directory /var/www/html/privateStuff>
    AuthType Basic
    AuthUserFile /etc/httpd/users.htpasswd
    AuthName "Access to private stuff is restricted"
    require valid-user
</Directory>

The AuthName message gets displayed by the client's browser when it pops up a dialog window requesting the username and password.

The directive "Require valid-user" allows access to anyone who appears in the htpasswd file. Alternatively, you can allow access for selected users:

require user bevis mary 

Creating a password file:

htpasswd -c /etc/httpd/users.htpasswd aUserName

The program will prompt for the password. The password file SHOULD NOT be located under any Directory section controlled by the server.

Adding a user to the password file:

htpasswd /etc/httpd/users.htpasswd aUserName

The program will prompt for the password.

Deleting a user from the password file:

htpasswd -D /etc/httpd/users.htpasswd aUserName

Digest authentication

Digest authentication uses encryption, so it's a better choice for regular http:// access.

Digest directives can be added to Location or Directory sections. For a Location:

<Location /thisLocation>
    AuthType Digest
    AuthName Administrators
    AuthUserFile /etc/httpd/users.htdigest
    AuthDigestDomain /thisLocation /someOtherLocation ...
    require valid-user
</Location>

The AuthName has a specialized meaning here: it names a "realm". Realms are an abstraction used to group one or more locations on the server you want to protect as a unit.

Each line in the AuthUserFile contains a username, realm name, and hashed password. This allows the administrator to have one AuthUserFile for all the authentication rules.

Realms control access to the path named in the Location directive and all "sub paths" relative to that location: In the example above, a path such as "/thisLocation/here" is also part of the realm.

Most browsers allow users to "Remember my password" when they are prompted to enter credentials. The AuthDigestDomain directive lists other paths protected by the same credentials so the user won't have to enter them again. The browser quietly submits the previously used credentials when any of the other paths are used.

To construct the parameter list for AuthDigestDomain, simply list all the path expressions used in Location and Directory section names that have the same realm.

Creating a digest password file and adding the first user:

htdigest -c /etc/httpd/users.htdigest aRealm aUserName

The program will prompt for the password.

Adding a user to the digest file (or changing an existing user's password):

htdigest /etc/httpd/users.htdigest aRealm aUserName

The program will prompt for the password.

Deleting a user:

The htdigest command doesn't have a delete option. Just edit the digest file and delete the line with the username you want to remove.

Using groups

Groups allow you to define sets of users. They can be used with any authentication type.

To use a group called "slackers", add these directives:

Require group slackers
AuthGroupFile /etc/httpd/users.htgroups

In this example, a user must authenticate as usual and in addition belong to the group "slackers".

The file users.htgroups may define any number of groups and the users who belong to them. Each line in the file begins with a group name followed by the user names that belong:

administrators: fred kevin jane
slackers: judy steve 

As with other authentication data files, it's best to keep the groups file out of the server's Directory scope.
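Putting the pieces together, a Directory section that combines password authentication with group membership might look like this (paths and names are illustrative, reusing the examples above):

```
<Directory /var/www/html/privateStuff>
    AuthType Basic
    AuthName "Access restricted to slackers"
    AuthUserFile /etc/httpd/users.htpasswd
    AuthGroupFile /etc/httpd/users.htgroups
    Require group slackers
</Directory>
```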

Access control

Individual ip addresses:

Allow from 192.168.1.17
Deny from 192.168.1.18

Subnet expressions - These two directives are equivalent:

Allow from 10.0.0.0/255.0.0.0
Allow from 10.0.0.0/8

The order in which allow and deny tests are applied is important. There are three choices:

Order allow,deny

Everyone starts out denied. Allow rules are processed first, then deny rules. If an allow rule lets them in, a deny rule will throw them out.

Order deny,allow

Everyone starts out allowed. Deny rules are processed first, then allow rules. If a deny rule throws them out, an allow rule will let them in.

Order mutual-failure

Everyone starts out denied. Client must match an allow rule and NOT match a deny rule.

Example - Allow everyone:

Order allow,deny
Allow from all

Example - Exclude everyone:

Order deny,allow
Deny from all

Example - Allow only one subnet:

Order deny,allow
Deny from all
Allow from 192.168.1.0/24

Example - Allow only specified clients:

Order allow,deny
Allow from 192.168.1.17 192.168.1.18

Example - Allow only one subnet and exclude one of those clients:

Order allow,deny
Allow from 192.168.1.0/24
Deny from 192.168.1.18

Combine access control and authorization

Access and authentication directives are always in effect. They may be specified explicitly in sections or inherited from the top level server configuration.

The "Satisfy" directive determines how they are combined. Clients must pass access AND authentication control:

Satisfy all

Clients must pass access OR authentication control:

Satisfy any

Common idioms

Allow local LAN users without authentication but require authentication from all others:

Satisfy any
Order deny,allow
Deny from all
Allow from 192.168.1.0/24
AuthType Basic
AuthName "Special area"
AuthUserFile /etc/httpd/users.htpasswd
Require valid-user

Require everyone to authenticate:

Satisfy all
Order allow,deny
Allow from all
AuthType Basic
AuthName "Special area"
AuthUserFile /etc/httpd/users.htpasswd 
Require valid-user 

Allow users from only one subnet and they must authenticate:

Satisfy all
Order deny,allow
Deny from all
Allow from 192.168.1.0/24
AuthType Basic
AuthName "Special area"
AuthUserFile /etc/httpd/users.htpasswd
Require valid-user

Use different rules for different request types

The Limit directives allow the server to apply different rules depending on the type of request. (Sometimes called the "request method")

The top level server configuration file normally has these default rules that allow everyone to see content on the site but not modify the site:

<Limit GET POST OPTIONS>
    Order allow,deny
    Allow from all
</Limit>
<LimitExcept GET POST OPTIONS>
    Order deny,allow
    Deny from all
</LimitExcept>

Everything inside the <Limit> section applies to the listed methods. Everything inside the <LimitExcept> section applies to all methods NOT listed.

The following example lets everyone read the site, but restricts writing to authorized users:

# Allow access from any ip address:

Order allow,deny
Allow from all

# Setup authentication by password:

AuthType Basic
AuthName "Special area"
AuthUserFile /etc/httpd/users.htpasswd
Require valid-user

# Decide how to combine authentication and access rules
# based on the request type:

<Limit GET>
    # Let anyone read
    Satisfy any
</Limit>
<LimitExcept GET>
    # All other requests need authentication
    Satisfy all
</LimitExcept>

Prevent recursion in rewrite rules

In this example, files that end with ".xml" or ".mml" are rewritten to find them in the "mxyzptlk" directory:

RewriteEngine on
RewriteCond %{REQUEST_URI} !(^/mxyzptlk/.*)
RewriteRule (.*)\.(xml|mml)$ mxyzptlk$1.$2 [P]

Using these rules, the client browser will show the original URL in the address display, not the rewritten version.

Clean up semaphores so Apache will restart

To fix this error:

"...No space left on device: mod_python: Failed to create global mutex..."

Execute this command:

ipcrm sem `ipcs -s | grep apache | cut -d' ' -f2`

Enable Windows WebDAV access

Getting apache to serve WebDAV content to Windows clients is notoriously difficult. See if you like this method:

First, you need a lock file directive in the Apache configuration file.

On linux this could be:

DAVLockDB /var/lib/dav/DavLock

If you're running apache on Windows, use the expression:

DAVLockDB var/DavLock

The "var" directory in the Windows case is a directory you create in the Apache installation root directory.

The following examples use an Apache Directory section, but the same configuration can be used inside Location or VirtualHost sections.

Alias /webdav  /var/www/webdav

<Directory /var/www/webdav>
        DAV on
        Satisfy all
        Order allow,deny
        Allow from all
        ForceType application/octet-stream
        AuthType Digest
        AuthName "davusers"
        AuthDigestDomain /webdav /fleep /goop /dreep
        AuthUserFile /etc/httpd/davusers.digest
        Require valid-user
        Options Indexes
</Directory>

Create the digest authentication file:

htdigest -c /etc/httpd/davusers.digest "davusers" yourUserName

The parameter "davusers" is the name of a realm - The realm concept allows one file to contain credentials for multiple services and/or locations. In this case, we are using the realm "davusers."

You will get a prompt for a password and the file will be created. Adding more users to the realm is similar, but leave out the -c switch, which creates a new file erasing the old one. As the syntax suggests, htdigest can store many users with multiple realms all in the same file.

Enable WebDAV access using SSL

This is a better method but it depends on having SSL configured on your server. Other than the aggravation of buying and configuring a certificate, this method seems to make Windows behave better - There are no more mysterious recursive directories and the protocol is easier to specify:

Alias /webdav  /var/www/webdav

<Directory /var/www/webdav>
        DAV on
        Satisfy all
        Order allow,deny
        Allow from all
        ForceType application/octet-stream
        AuthType Digest
        AuthName "davusers"
        AuthDigestDomain /webdav /fleep /goop /dreep
        AuthUserFile /etc/httpd/davusers.digest
        Require valid-user
        Options Indexes
</Directory>

Configure the digest file as described in the previous section. When you specify the URL on the Windows side, you don't need the port number mumbo-jumbo:

https://yourServer/webdav

Clearly, the SSL certificate must be created to exactly match the server name. You can also use this method inside a virtual host if the name matches the certificate.

Understanding AuthDigestDomain

The parameters to AuthDigestDomain consist of path expressions used to access all locations associated with the same digest realm. This information is sent to the client browser so the user won't have to re-enter the same credentials for locations specified in the directive after they have been prompted for one of them.

In the examples above, the "davusers" realm is being used to protect our /webdav location. It also protects locations /fleep /goop /dreep. These locations would be described by other Apache sections. Each of the other sections should contain the same expressions:

AuthName "davusers"
AuthDigestDomain /webdav /fleep /goop /dreep

You might think that Apache would be smart enough to figure this out since the two sections each have the same realm. But for some reason, it must be specified.

Let anyone read but only authenticated users write

In the previous examples, replace the directive "Require valid-user" with this section:

<LimitExcept GET OPTIONS PROPFIND>
    Require valid-user
</LimitExcept>

Windows XP client configuration

On the Windows XP side, open "My Network Places" and double-click "Add Network Place". Hit "Next", then select "Choose another network location" and hit "Next" again. Enter a URL for the virtual host in this form:

http://yourServer:80/webdav
The wizard will grind away and then prompt for the username, password, and a name for the shortcut. The new shortcut will be added to the "My Network Places" folder. Note the appended port number: It is important. It somehow short-circuits Microsoft's attempt to discourage the use of the WebDAV protocol.

Windows 7 client configuration

On Windows 7, right-click on "My Computer" and select "Map Network Drive..." Enter a folder path in this form:

https://yourServer/webdav
If you don't have "My Computer" on your desktop, you can do the deed from any Explorer (not Internet Explorer) window:

File menu->Add a network location
Select "Choose a custom network location..."
Press "Next"
In the "Internet or network address:" box, use a URL of the form:

https://yourServer/webdav
Note that you really must have SSL set up to make webdav work smoothly with Windows 7 and beyond. There are work-arounds that involve modifying the registry...

Windows clients work but are insanely slow

Internet Explorer->Tools->Internet Options
Select the Connections tab.
Press the "LAN Settings" button.
UNCHECK: "automatically detect settings"

You can't really imagine how slow webdav will be unless you do this.

Fix WebDAV on Windows XP and Windows Vista

Windows XP and Vista clients need a patch to fix multiple webdav bugs. I used to maintain a link here, but Microsoft keeps shifting things around. Just do a search for "KB907306" and you'll find it without difficulty. Note: Windows 7 and later versions don't need this patch.

UPDATE: As of Windows 10, webdav works without fuss. Microsoft relented.

Fix Apache when clients can read but not write

This is a marvelously obscure bug. I suspect it has happened to others. The symptom on the Windows side: Users can map the webdav directory and copy files from the server. But when they try to copy a file to the server, an error pops up:

Can't read from source file or disk

Watching the logs on the apache server, we see these lines:

Could not open the lock database
Permission denied: Could not open property database

It turns out to be caused by corruption of the lock database. I have no idea how this happens, but the fix is simple: stop the server, delete the lock files and restart:

service httpd stop
rm -f /var/lib/dav/lockdb.dir
rm -f /var/lib/dav/lockdb.pag
service httpd start

The lock files will be recreated and windows clients will have read/write access to the webdav directory.


Backups

Backup linux volumes

Normal unix-to-unix with locally mounted paths:

rsync -ax --numeric-ids --delete sourceDir/ destDir

The trailing / on the sourceDir is very important: It means copy the contents of sourceDir into destDir, rather than copying the sourceDir itself.

If you have ssh setup from the source to the destination machine, backups are fast and easy to do over the network.

If you can do this successfully: (See the SSH section in this document.)

ssh remoteHost

Then you can do this:

rsync -ax -e ssh --delete sourceDir/ remoteHost:/path/to/destDir

Note that /path/to/destDir is the path as seen on the remote machine. You can also pull data from a remote machine by reversing the source and destination parameters.

You can also rsync to or from network mounts via nfs or samba. If you rsync between linux and windows machines using samba, all sorts of permission and symbolic link problems are likely.

The rsync command is great for copying large things because you can see progress:

rsync -ah --progress sourceFiles destDir/

Backup NTFS or SMB volumes

Backup to a vfat or smb filesystem using only time attribute: (Pulling from the linux side.)

rsync -a --delete --exclude-from="excludeNT.txt" sourceDir/ destDir

The excludeNT.txt file contains the names of files that should not be copied. They are locked system files that will cause error messages if not excluded during the backup. You can include a path prefix when necessary, but the name of the file alone is sufficient if it's not likely to appear anywhere in your personal files.
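An excludeNT.txt usually starts with the well-known locked Windows system files. These entries are common examples, not a complete list:

```
pagefile.sys
hiberfil.sys
System Volume Information
```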


Other applications running on Windows may lock files. By observing the messages from rsync you can add to the list and achieve a quiet backup.


Tar commands

tar czf arch.tgz path   # Make an archive (Add v for verbose)
tar xzf arch.tgz    # Restore an archive (Add v for verbose)
tar tf arch.tar     # List an archive (add z if gzipped)

Other tar options

-C directory            # Change to this directory first
-T fileList             # Use this list of file names
--same-owner            # Keep original owner when extracting
--same-permissions      # Keep original permissions when extracting
--absolute-names        # Don't strip leading /
--directory dirPath     # Change to this directory first
--files-from=fileList   # Get file names from another file
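A minimal round trip using -C so the archive stores relative paths; the scratch directory is created just for the demo:

```shell
# Make a small tree, archive it relative to $work with -C,
# then list the archive contents (z handles the gzip layer).
work=$(mktemp -d)
mkdir -p "$work/proj"
echo hello > "$work/proj/readme.txt"
tar czf "$work/arch.tgz" -C "$work" proj
listing=$(tar tzf "$work/arch.tgz")
echo "$listing"
rm -rf "$work"
```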

Gzip a file or directory

gzip file
gunzip file.gz

Zip a file or directory

zip -r archive.zip files...

Archive and show progress

tar cf - $thing -P | pv -s $(du -sb $thing | awk '{print $1}') | gzip > $thing.tgz

Cpio options

Mode of operation is one of "p", "i", or "o":

p   Pass files through without using an archive file
i   Extract from an archive
o   Create an archive

Other common options:

t   List the contents of the archive
m   Preserve modification times 
d   Create directories as needed
u   Overwrite files without warnings

Extract files from a cpio archive, create directories as needed

cpio -mid < archive.cpio

Check for absolute file names in cpio archives

List the archive to see if it has absolute names. Use --no-absolute-filenames if necessary. This doesn't happen very often, but if it does and you are root a Bad Thing (tm) can happen.

List a cpio archive

cpio -t < archive.cpio

Use cpio to copy everything in current dir to targetDir

Includes invisible dot files. Preserves all dates.

find . | cpio -pudm targetDir 

On modern Linux systems "cp -a" will do the same thing.

Create a cpio archive from a list of files in current directory

find . | cpio -o > archive.cpio


Audio

Play samples from a file

play test.wav

Use 'play' on systems with artsd (such as kde)

On these systems, /dev/dsp is always tied up by artsd. Use the artsdsp command to run any program that would normally access /dev/dsp directly:

artsdsp play test.wav

Record samples to a wav file

Record a "normal" stereo wav file:

rec -c 2 -f U -r 44100 -s w -v 8.0 test.wav

Record options:

-c 2        Two channels (stereo)
-r 44100    Sample rate
-f  Sample encoding:
    s   Signed linear (2's complement)
    u   Unsigned linear
    U   U-law (logarithmic) U.S. standard
    A   A-law (logarithmic) EU. standard
    a   ADPCM (Adaptive Differential Pulse-Code Modulation)
    i   IMA_ADPCM
    g   GSM
-s  Sample size:
    b   8 bit bytes
    w   16 bit words
    l   32 bit long words
    f   32 bit floats
    d   64 bit floats
    D   80 bit IEEE floats
-t  File format:
    au  Sun
    cdr CD track
    gsm GSM 06.10 Lossy Speech Compression
    wav Windows RIFF (Header contains all params)
-v  Set the volume
    1.0 No change
    2.0 Linear increase by 2.0
    0.5 Linear decrease by 2.0
    8.0 About right to balance with other .wavs

The file format can be specified by giving the file a matching extension.

ADPCM, IMA_ADPCM & GSM are intended for speech compression. U-law would be appropriate for music.

Play sounds concurrently

esdplay test.wav

Some people make this an alias for 'play'.

Reroute microphone through esd

esdrec | esdcat

Play an mp3 file

mpg123 yourfile.mp3

Convert an mp3 file to a wav

First run:

mpg123 -s yourfile.mp3 > yourfile.raw

The above command will display the sample rate and the number of channels. (Mono or Stereo)

The output is 16 bit, signed pcm, little endian. No header.

sox -c 2 -w -s -r xxx yourfile.raw yourfile.wav

The xxx value must be the sample rate displayed by mpg123. You can pipeline mpg123 into sox. Use a - for the sox input.

An easier way to do both steps:

lame --decode yourfile.mp3 yourfile.wav

Use sox to play (almost) any sound file

sox inputOptions inputFile outputOptions outputFile

Do a "man soxexam" to see many examples.

Format options:


Channels

-c n    Where n = 1,2 or 4

Sample rate

-r rate Where rate is in Hertz

Sample size

-b   8 bits
-w  16 bits
-l  32 bits


Sample encoding

-s  Signed linear
-u  Unsigned linear
-U  U-law (U.S. logarithmic)
-A  A-law (Euro logarithmic)
-a  ADPCM (Adaptive pulse-code modulation)
-g  GSM
-f  Floating point

Input file format is controlled by the file extension:

.wav    (You don't need to specify other options)
.au (Options may or may not be needed)

Convert a wav to an mp3

lame [-b bitrate] infile.wav outfile.mp3

Resample an mp3

lame [-b newbitrate] --mp3input oldfile.mp3 newfile.mp3

Rip the audio from a video with ffmpeg

ffmpeg -i myVideo.flv -ab 128k myAudio.mp3

Rip the audio from a video with mplayer

mplayer -novideo -ao pcm:file=result.wav source.avi

Batch convert flac files to mp3 using ffmpeg

for f in *.flac; do ffmpeg -i "$f" -ab 196k -map_metadata 0 "${f%.flac}.mp3"; done



Bash

Variables are created by assignment:

strVar='This is a string'

They are referenced using the dollar prefix:

echo $strVar

Concatenation of strings is implied when the pieces are adjacent:

newVar='This is '$oldVar

Note: Spaces are not allowed around the = symbol.

Undefining variables:

unset myVar

Using command results as a parameter

Enclose the command in back-quotes. Example: getting the size of a directory:

dirSize=`du -s myDirectory | awk '{print $1}'`
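Back-quotes work, but the equivalent $(...) form nests without escaping and is easier to read. The same idea, using a scratch directory created just for the demo:

```shell
# $(...) command substitution instead of back-quotes.
myDirectory=$(mktemp -d)
echo "some data" > "$myDirectory/file.txt"
dirSize=$(du -s "$myDirectory" | awk '{print $1}')
rm -rf "$myDirectory"
echo "$dirSize"
```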

Picking out the nth element of a string

The string should be pipelined to this command:

awk '{print $n}' 
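For example, picking the 2nd element out of a three-word string:

```shell
# awk splits on whitespace; $2 is the second field.
second=$(echo "alpha beta gamma" | awk '{print $2}')
echo "$second"
```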


SIZE=`du -s -k myPath/myDir | awk '{print $1}'`
if [ $SIZE -gt 4096 ]; then
    echo "The directory myDir contains more than 4096kb"
fi

Picking out the nTh element from multi-line text

This example returns the free memory of the machine, which appears in the middle of the second line of /proc/meminfo. The single quotes keep $4 from being expanded by the outer shell:

sh -c 'echo $4' `cat /proc/meminfo`

Picking out the nTh line of a file

awk 'NR==123 {print;exit}' myfile.txt

Inline file creation

cat > myPath/myFile <<- 'EOF'
The file content goes here,
one line at a time.
EOF

Predicates used on path names

-d  Is a directory
-e  Exists
-f  Is a regular file
-h  Is a symbolic link
-r  Is readable
-s  Size is > 0
-w  Is writable
-x  Is executable


if [ -e <path> ] ; then
    # Do this if file exists
fi

if [ ! -d <path> ] ; then
    # Do this if it's not a directory
fi

String predicates

-z <astring>    # Length of string is zero
-n <astring>    # Length of string is non-zero

Infix file predicates

-nt  Newer than. Or file1 exists and file2 does not.
-ot  Older than. Or file2 exists and file1 does not. 

if [ <file1> -nt <file2> ] ; then
    # Do this if file1 is newer than file2 (or file2 does not exist)
fi

String infix operators

=, !=, <, >

Note: Inside single brackets, < and > must be escaped: [ "$a" \< "$b" ]

Numerical infix operators

-eq, -ne, -lt, -le, -gt, -ge

Logical connectives

NOT prefix operator: !
AND operator: &&
OR operator: ||
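A small example combining the predicates with && and !:

```shell
# /tmp is a directory, not a regular file, so both tests pass.
if [ -d /tmp ] && [ ! -f /tmp ] ; then
    verdict="directory"
else
    verdict="other"
fi
echo "$verdict"
```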

Control structures


if [ -e $pathname ] ;  then
    # It exists
elif [ -e $thatname ] ; then
    # That exists
elif [ -e $theOther ] ; then
    # The other exists
else
    # They don't
fi

Case statement:

In this example, prepending "x" makes the construct work even if GOOB is undefined. Quoting variable expansion is always a good idea when whitespace may be present in the value.

case x"$GOOB" in
    xabc)
        echo abc ;;
    xdef)
        echo def ;;
    *)
        echo unknown ;;
esac


myDirs="dev etc bin proc mnt tmp var lib"

for dir in $myDirs ; do
    mkdir $targetRoot/$dir
    chmod u=rwx,og=rx $targetRoot/$dir
done

for i in 121 19 34 56 78; do
    echo $i
done

for i in `seq 1 10`; do
    echo $i
done

for ((i=1; i<=10; i+=1)) ; do
    echo $i
done


while [ "$line" != "" ]; do
    a1=`echo $line | sed -e 's/.*&//'`
    line=`echo $line | sed -e "s/&"$a1"//"`
    echo $a1
done

until [ $count -lt 1 ]; do
    echo $count
    let count=count-1
done

Script or function parameters

$0              Name of the script or function
$1 ... $n       Each parameter
$@              All parameters starting with $1
$#              Number of parameters
$?              Exit status of last bash command

To shift all parameters left by 1: $1=$2, $2=$3 etc:

shift 1

You can shift by any positive n.
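A quick sketch of the effect inside a function (the demoShift name is made up for the demo):

```shell
# After "shift 1", the old $2 becomes the new $1.
demoShift()
{   before=$1
    shift 1
    after=$1
}
demoShift aaa bbb ccc
echo "$before -> $after"
```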

User-defined functions

Local functions work like shell scripts: they have their own $1..$n parameters. (In bash, $0 keeps naming the script; $FUNCNAME holds the function's name.)

function demo
{   echo Function: $FUNCNAME
    echo Param 1: $1
    shift 1
    for i in $@ ; do
        echo Arg: $i
    done
}

demo special 123 456 789

Historical note: Instead of "function demo" to start a function definition, the form "demo()" is allowed. This notation confounds the unwashed because they wish they could put a formal parameter list inside the ()s.

Exit status

Every bash script or function returns a numerical status. A non-zero status denotes failure.

To exit a script immediately and return failure:

exit 1

To exit with success:

exit 0

If the script runs off the end, "exit 0" is implied.
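The status of the most recent command is available in $?:

```shell
# Run a subshell that fails with status 3 and capture $?.
( exit 3 )
status=$?
echo "$status"
```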

Checking for expected number of parameters

if [ $# -ne 3 ] ; then
    echo Usage: $0 arg1 arg2 arg3
    exit 1
fi

Process command line arguments in a script

while [ $# -gt 0 ] ; do
    echo $1
    shift
done

Process each line in a file

while read line; do
    echo $line
done < myfile.txt

Using a prompt menu in a script

select opt in Hello Goodby; do
    if [ "$opt" = "Hello" ]; then
        echo Hello there!
    elif [ "$opt" = "Goodby" ]; then
        echo done
        break
    else
        echo Try again...
    fi
done

Using read/write and file descriptors

exec 3< MyFile.txt
while true ; do
    read <&3 myline
    if [ $? != 0 ]; then
        break
    fi
    echo $myline
done

Redirecting command output

Redirecting selected streams to a file:

1>  Only stdout
2>  Only stderr
&>  Combine stdout and stderr

The &> is often, but not always, the default behavior of a command. It is a Bash-only construct. A more portable way to do the same thing is:

2>&1    Combine stdout and stderr

Suppress error messages when running a command:

myCommand 2>/dev/null

Redirection examples

Assume the current directory contains a file named "exists" and that a file named "nosuch" does not exist.
A command that partly succeeds:

ls -l exists nosuch

Console will show the exists listing and an error about nosuch.

Send stdout to a file:

ls -l exists nosuch > Demo

Console shows "exists" listing and error about nosuch. Demo contains only the "exists" listing.

Send stderr to a file:

ls -l exists nosuch  2> Demo

Console shows only the "exists" listing. Demo contains only the error message about nosuch.

Add stderr to stdout and redirect to a file:

ls -l exists nosuch 2>&1 1> Demo

Console shows the error about "nosuch." Demo contains only the "exists" listing.

Note the order of evaluation: redirections are processed left to right. The 2>&1 first points stderr at the current stdout (the console); then 1> Demo sends stdout to the file, so stderr stays on the console.

Add stderr to stdout and pipeline result to another program:

ls exists nosuch 2>&1 | grep exists

Add stderr to stdout and ignore stdout:

ls -l exists nosuch 2>&1 1> /dev/null

Console will still show stderr. This form is often used to discard information about normal command execution from a script.

Combine all outputs and send to null:

ls -l exists.txt nosuch.txt &> /dev/null

The last example is often used to suppress all console output.


Arithmetic

The "expr" command evaluates a string as an arithmetic expression:

expr 2 + 3
expr 12 / 4
expr 14 % 3  

Note that expr requires spaces between operands and operators.

Parentheses and "*" must be escaped:

expr 5 \* \(3 + 2\)

A somewhat neater way:

echo $[2*3+1]

Using the $[...] construct, nothing needs to be escaped.
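The $[...] form still works in bash but is considered obsolete; the portable spelling is $((...)), which also needs no escapes:

```shell
# POSIX arithmetic expansion; parentheses and * are literal here.
p=$((5 * (3 + 2)))
echo "$p"
```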

The "let" command evaluates the right side:

let a=4+6

This is equivalent to:

a=`expr 4 + 6`

Spaces are not allowed in "let" expressions.

Parentheses must be escaped, but not "*":

let p=5*\(3+2\)

Again, it's easier to forget about the escapes and use:

let p=$[5*(3+2)]

Use "bc" for floating point computation

echo 45.3/2 | bc -l

The trailing -l (lower case L) loads the floating point library

x=`echo 99.0/3.14 | bc -l`
y=`echo 14.2 + 11 | bc -l`
echo $x + $y | bc -l

Base conversions

echo "obase=16; 1234" | bc

Select decimal places for result

echo "scale=3; 1/3" | bc

You don't need to use the -l if you set the scale > 0

echo "scale=3; 7/2" | bc
echo "scale=0; 7/2" | bc 

Show how long a bc calculation takes (pi to 1000 places)

time echo "scale=1000; 4*a(1)" | bc -l -q

Formatting with printf

Leading zeros:

printf "%04d\n" 3

Hex:

printf "%x\n" 23

All the usual suspects...

Alias commands

An alias is a type of macro:

alias name='expression'

It is often used to "improve" shell commands. For example, to make the rm command ask for confirmation:

alias rm="rm -i"

By convention, aliases are defined in a hidden file in the user's home directory named ".bashrc".

You can display an alias definition using:

alias myName

To remove an alias:

unalias name

Exporting variables

Variables defined inside a shell script are only visible inside that script unless they are exported.

Exporting a variable makes it visible to external scripts called from inside the parent script.

To export one or more variables:

export var1 ... varn

Assignment and export can be combined:

export var1=expression1 ... varn=expressionN
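A quick way to see the difference: only the exported variable reaches a child shell (the variable names are made up):

```shell
# plainVar is not exported; shownVar is.
plainVar="hidden"
export shownVar="visible"
childSees=$(sh -c 'echo "${plainVar}:${shownVar}"')
echo "$childSees"
```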

Pulling up exported variables

When a parent script calls a sub-script, variables exported by the sub-script are not visible in the parent. The parent can make them visible by using the "source" command:

source subScript

There is a commonly used shorthand for this:

. subScript

Note the space after the dot.

For example, if you edit your login script ".bash_profile" and add some exported variables, you can make them visible (without logging out and back in) by executing:

source .bash_profile

Exiting a script on error

Adding these lines at the beginning of a script will cause it to exit if any subsequent command fails:

set -o errexit
set -o pipefail
set -o nounset
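To see what errexit changes, compare a failing command run with and without it (a sketch using a child shell; pipefail and nounset extend the same idea to pipelines and unset variables):

```shell
without=$(sh -c 'false; echo reached')               # failure ignored
with=$(sh -c 'set -e; false; echo reached' || true)  # aborts at false
echo "without='$without' with='$with'"
# → without='reached' with=''
```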

Detecting mounted file systems

Trying to mount a filesystem that's already mounted will throw an error in your bash script. It's surprising there's no predicate to detect this, but you can use this construct instead:

if [ -z "$(findmnt $myPath)" ] ; then
    mount $myPath
fi

Files and directories

ls                        # Show current directory contents
ls -l $path               # Show details about the file or directory
ls -d $path               # List directory names instead of contents
cd $dir                   # Change current directory to $dir
cd ..                     # Change current directory to parent directory
pwd                       # Show current directory path
mkdir $dir                # Create a directory
cp $src(s) $dst           # Copy $src file(s) to $dst
cp $src(s) $dir           # Copy $src file(s) into the directory $dir
mv $src $dst              # Move $src to $dst. Also used to rename files.
mv $src(s) $dir           # Move a group of files into a directory
rm $file(s)               # Remove (delete) files
rmdir $dir(s)             # Delete empty directory(s)
rm -rf $dirs(s)           # Delete files and/or directory(s) with their contents
> $file                   # Erase the contents of a file
touch $newFile            # Create a new empty file
touch $oldFile            # Change the modification time to "now"
touch -t YYMMDDhhmm $path # Change the modification time


Send the output of commandA to commandB:

commandA | commandB


Send the output (stdout) of commandA to a file:

commandA > somefilePath

Concatenate the output of commandA onto the end of a file:

commandA >> somefilepath

Splitting up and combining command output

commandA X optionalFilePath

Where X is:

1>  Only stdout
2>  Only stderr
&>  Combine stdout and stderr

With no redirection at all, stdout and stderr are usually both connected to the terminal, so a command's output looks combined even without "&>".
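A quick demonstration with a command that writes to both streams (the /tmp file names are arbitrary):

```shell
sh -c 'echo to-stdout; echo to-stderr >&2' 1>/tmp/demo.out 2>/tmp/demo.err
cat /tmp/demo.out   # → to-stdout
cat /tmp/demo.err   # → to-stderr
rm -f /tmp/demo.out /tmp/demo.err
```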

Mass move

When using mass move, parameters must be quoted to avoid collision with the usual interpretation of wild cards. Each "*" on the source side corresponds to "#n" on the destination side where n=1 is the first "*" value, etc.


mmv "*.old" ""      # Change file extensions
mmv -a "*.txt" "all.txt"  # Append files into one

Copy a hierarchical directory and preserve all attributes

cp -a $sourceDir $destDir

Backup a hierarchical directory

rsync -a --delete $sourceDir $destDir
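Watch the trailing-slash rule: with a trailing slash, rsync copies the contents of the source directory; without it, rsync creates the source directory itself inside the destination. A sketch with throwaway /tmp paths:

```shell
mkdir -p /tmp/demo-src /tmp/demo-dst
echo hello > /tmp/demo-src/a.txt
rsync -a --delete /tmp/demo-src/ /tmp/demo-dst/   # contents of src into dst
cat /tmp/demo-dst/a.txt   # → hello
rm -rf /tmp/demo-src /tmp/demo-dst
```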

Copy files and show progress

rsync -aP $source $dest

Change the owner of a file

chown owner file         # owner only
chown owner.group file   # owner & group
chown .group file        # group only
chown owner. file        # owner & group=owner

Change the permissions of a file

chmod changes fileName

The changes are a comma separated list of expressions. Each expression is of the form:

users+permissions   # Add permissions
users-permissions   # Remove permissions
users=permissions   # Set exactly these permissions

The users can be one or more of the letters:

u   User    (Owner of the file)
g   Group   (Group of users)
o   Others  (Everyone else)
a   All (Same as "ugo", the default)

The permissions can be one or more of the letters:

r   Read
w   Write   
x   Execute

The user classes are specified in the order UserGroupOther, with three bits for each to enable or disable ReadWriteExecute.


chmod u+rwx,g+rw,o-rwx aFile

Numerical form: Use three binary integers for "ugo" and one bit each for "rwx":

chmod 760 aFile

Is the same as:

chmod    u+rwx,g+rw-x,o-rwx aFile
Binary:    111    110    000
Decimal:    7      6      0
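Both forms can be verified with stat, which prints the octal mode (using a throwaway /tmp file):

```shell
touch /tmp/demo.perm
chmod u=rwx,g=rw,o= /tmp/demo.perm   # symbolic form
stat -c %a /tmp/demo.perm            # → 760
chmod 640 /tmp/demo.perm             # numeric form
stat -c %a /tmp/demo.perm            # → 640
rm -f /tmp/demo.perm
```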

Show disk usage of current dir or selected dir

du -s <dir>

Show sizes in Gs excluding elements smaller than 1G

du -sBG -t 1G *

Write to stdout

echo anything

Write to a file

echo anything > <path>

Append to a file

echo anything >> <path>

Update the modified time for a file

touch <path>

Quickly create an empty file

> <path>

Show differences between files

diff -r leftDir rightDir

Show files that differ without details

diff -r -q leftDir rightDir

Trace execution of a shell script

sh -x <fileName>

Monitor additions to a log file

tail -f <fileName>

Make a symbolic link

ln -s fileName linkName

List files in color

ls --color=tty
(Alias this to ls)

List a single column of names only

ls -1

List directories only

find -type d -maxdepth 1
(Alias this to lsd)

List files in order of modification time

ls -lrt

List files in order of size

ls -lrS

List all open files and sockets

lsof
Show the number of files in a directory

ls -1 | wc -l
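Note that ls -1 skips dotfiles; add -A to count hidden entries as well (a sketch with a throwaway directory):

```shell
mkdir -p /tmp/demo-count
touch /tmp/demo-count/a /tmp/demo-count/b /tmp/demo-count/.hidden
ls -1  /tmp/demo-count | wc -l   # → 2
ls -1A /tmp/demo-count | wc -l   # → 3
rm -rf /tmp/demo-count
```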

Show the total size of all files of a given type anywhere in a hierarchy

    du -ch -- **/*.jpg | tail -n 1

Run a shell script so it changes the environment

source .bash_profile (or whatever script you changed)

Run a command relative to another root file system

chroot newroot command

Execute a shell script and echo the commands for debugging

sh -x yourScript

Low-level high speed copy

The dd command can copy between files or devices.

dd if=sourcePath of=destPath bs=1M

Using the optional bs=n option can speed up the copy. You can use the suffix K, M, or G on numbers.

By adding the count option, you can write an exact number of fixed-size records. For example, to destroy the partition table of a disk:

dd if=/dev/zero of=/dev/sdb3 bs=1K count=1

To display the status of a dd copy that's already started, use this command in another shell window:

kill -USR1 `pgrep '^dd$'`

This will cause dd to show a status message in its original window and continue with the copy operation.
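Recent GNU coreutils versions of dd can report progress directly with status=progress, which avoids the signal trick:

```shell
# Copy 4 MiB of zeros; dd prints ongoing statistics on stderr
dd if=/dev/zero of=/tmp/demo.img bs=1M count=4 status=progress
stat -c %s /tmp/demo.img   # → 4194304
rm -f /tmp/demo.img
```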

Boot problems

Enter and exit a chroot correctly

Use this script:

./ path-to-chroot-files

The script:

# - Mount system directories and enter a chroot

target=$1
mount -t proc  proc $target/proc
mount -t sysfs sys $target/sys
mount -o bind /dev $target/dev
mount -o bind /dev/pts $target/dev/pts

chroot $target /bin/env -i \
    HOME=/root TERM="$TERM" PS1='[\u@chroot \W]\$ ' \
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/bin \
    /bin/bash --login

echo "Exiting chroot environment..."

umount $target/dev/pts
umount $target/dev/
umount $target/sys/
umount $target/proc/

When you're finished inside, simply type "exit" and everything is restored to the normal state.

Update the initial ramdisk (initrd)

To update the /boot/initramfs image for the running kernel:

dracut -fv

To update some other kernel

dracut -fv --kver 5.2.7-100.fc29.x86_64

The kver name is in the form you see when you run:

uname -r

Unpack an initramfs image

Fedora makes this a trial by packing the .img file with glark at both ends.

dnf install binwalk


binwalk $myrd

Take note of the decimal offset for the "gzip compressed data" near the end of the binwalk listing. For this example, it turned out to be 15360. Now proceed:

dd if=$myrd of=myrd.gz bs=15360 skip=1
gunzip myrd.gz
mkdir result
cd result
cpio -mid -H newc --no-absolute-filenames < ../myrd

You now have the initramfs in the result directory.

Force dracut to include incremental systemd service file changes

The systemd collection has a mechanism for patching or replacing service files without editing the original file directly. This is nice because it allows you to mess with things in a way that won't be undone by future updates to the package that provided the service. Unfortunately, dracut won't include such fixes in the initramfs.

When you make an incremental change using an expression like:

systemctl edit mything.service

A directory and file are created here:


This file contains whatever you typed in the editing session.

To make dracut include the fix, create this file:


(The name doesn't actually matter as long as it ends with .conf)

That contains:


Your next dracut invocation will include the patch.

If you use the --full option when creating a patch, the new version of the service file is placed in the directory /etc/systemd/system. To make dracut include this file, use:



Using cdrecord with non-scsi drives

The primary tool described in the following sections is "cdrecord". The most current versions of this program accept normal Linux CD device names, e.g. "/dev/cdrom" and support both SCSI and ATAPI drives.

Earlier versions of cdrecord only worked with SCSI drives and required the bizarre "x,y,z" drive name notation.

Create a data CDR readable by Linux (-r) or Windows (-J)

nice --18 mkisofs -l -J -r -V MyVolumeName sourceDirectory/  \
    | cdrecord speed=x dev=/dev/cdrom -data -

To make a CDRW, add blank=fast to cdrecord options. Speed should be 8 for CDRs and 4 for CDRW on my HP 9200.

Create a data DVD readable by Linux (-r) or Windows (-J)

growisofs -dvd-compat -Z /dev/hdc -J -r /path/to/directory

Create a video DVD

growisofs -dvd-video -Z /dev/hdc /pathTo/Directory

The Directory should contain the AUDIO_TS and VIDEO_TS subdirectories expected on a video.

Create an ISO image file from a directory of files

mkisofs -l -r -J -V MyVolumeName -o myISOfile.iso.bin sourceDirectory/

Display info about writable media

dvd+rw-mediainfo /dev/hdc

Copy a raw DATA CD at the sector level. Source is on /dev/cdrom

cdrecord -v dev=/dev/cdrom speed=2 -isosize /dev/cdrom

Make an audio cd track from an mp3 file

mpg123 -s file1.mp3 \
    | cdrecord speed=x dev=/dev/cdrom -audio -pad -swab -nofix - 

Use this command for each track, then fixate using the command documented next:

Fixate the CD

cdrecord dev=/dev/cdrom -fix

Rip a music CD track

cdparanoia [-d device] trackRange result.wav

Rip all the tracks on an audio cd to a set of wav files

One wav per track:

cdparanoia 1- -B

Rip and convert one track to one mp3

cdparanoia trackNumber - | lame -b 160 - result.mp3

Record an audio cd from a directory full of wav files

One wav per track:

cdrecord speed=s dev=/dev/cdrom -audio *.wav

Track range examples

1-  # Entire CD
-- -3   # Beginning through track 3
2-4 # Tracks 2 through 4

Create a CDR from an ISO image

cdrecord speed=4 dev=/dev/cdrom -data imageFile.iso.bin
For cdrw, add: blank=fast

Create a CDR from a raw partition

cdrecord speed=4 dev=/dev/cdrom -isosize -dao -data /dev/hda2 
For cdrw, add: blank=fast

Create an ISO image file from a CD

readcd dev=/dev/cdrom f=myImageFile.iso.bin

Dealing with older versions of cdrecord

Older versions of cdrecord require scsi drivers or scsi emulation with atapi drives. The following sections show how to deal with this situation.

Make your ide cdrom look like a scsi device

The cdrecord program wants to see scsi devices: The cdrom module must be loaded first, but it will normally be loaded if it was operating in ide mode. Otherwise, do an "insmod cdrom" first.

rmmod ide-cd
insmod cdrom
insmod sr_mod
insmod ide-scsi

The scsi-mod module will be loaded if you have a real scsi interface in your machine. Otherwise, it must be loaded before sr_mod.

Restore the cd to normal (IDE) operation

rmmod sr_mod ide-scsi
insmod ide-cd

Make atapi cd drives look like scsi at boot time

For this example, assume you have two ide drives:

hdc and hdd.

Method 1: Add this line to your kernel boot options:

append="hdc=ide-scsi hdd=ide-scsi"

Method 2: Add these lines to /etc/modules.conf:

options ide-cd ignore=hdc 
options ide-cd ignore=hdd
pre-install sg modprobe ide-scsi
pre-install sr_mod modprobe ide-scsi
pre-install ide-scsi modprobe ide-cd

Devices for the cd drives in scsi mode

/dev/scd0   cdram
/dev/scd1   cdrom
/dev/scd1   dvd

Device names for cd drives in ide mode

/dev/hdc    cdram
/dev/hdd    cdrom
/dev/hdd    dvd

List all SCSI devices visible to cdrecord in x,y,z format

The cdrecord program will use "dev=x,y,z" notation where x,y,z are shown by the command:

cdrecord -scanbus


Selected system configuration files:

/boot/*                          # Linux kernel and initrd files
/etc/aliases                     # Redirect mail
/etc/auto.mount                  # Autofs mountpoints
/etc/dhcp/dhcpd.conf             # Specify DHCP names and numbers
/etc/exports                     # NFS shares
/etc/named.conf                  # DNS configuration
/etc/fstab                       # Boot time mountpoints
/etc/hosts                       # Define hostnames and ip numbers
/etc/httpd/conf.d/*.conf         # Apache configuration files
/etc/mail/            # Sendmail configuration macros
/etc/mail/local-host-names       # Mail domains handled by this server
/etc/modprobe.d/*                # Modify kernel module parameters
/etc/php.ini                     # PHP global settings
/etc/profile.d/*                 # Shared environment variables
/etc/resolv.conf                 # Specify IP name server
/etc/samba/smb.conf              # Samba shares
/etc/selinux/config              # Where to turn off selinux
/etc/ssh/ssh_host_rsa_key .      # Host private RSA key
/etc/ssh/ .  # Host public RSA key
/etc/sudoers                     # Users allowed to "sudo"
/etc/sysconfig/network           # Specify IP gateway
/etc/sysconfig/network-scripts   # "ifcfg" scripts for network adapters
/etc/sysctl.d/*                  # Boot time kernel parameter settings
/var/named                       # DNS zone files
/var/spool/mail                  # User inboxes
~/.bashrc                        # Per-user bash customization
~/.procmailrc                    # Per-user procmail filters
~/.ssh/authorized_keys           # Per-user authorized keys
~/.ssh/known_hosts               # Per-user RSA known hosts
~/.ssh/id_rsa                    # Per-user private RSA key
~/.ssh/                # Per-user public RSA key

Example /etc/fstab:

# Root and swap volumes

/dev/hda1           /              ext3    defaults 1 1
/dev/hda3           swap           swap    defaults 0 0

# Special device mounts

none                /proc          proc    defaults 0 0
none                /dev/pts       devpts  gid=5,mode=620 0 0
none                /dev/shm       tmpfs   defaults 0 0

# Removable media

/dev/fd0            /mnt/floppy    auto    noauto,owner 0 0
/dev/cdrom          /mnt/cdrom     iso9660 noauto,owner,ro 0 0

# Logical volumes on the boot device

/dev/vg2/spoolVol   /var/spool     ext2    defaults 0 0
/dev/vg2/homeVol    /home          ext2    defaults 0 0
/dev/vg2/wwwVol     /var/www       ext2    defaults 0 0

# Logical volumes on the backup device

/dev/vg1/backVol    /mnt/back      ext3    defaults 0 0
/dev/vg1/archVol    /mnt/dos       ext3    defaults 0 0

# Samba network

//hp/dos            /mnt/hpDos     cifs   noauto,username=administrator 0 0
//hp/c              /mnt/hpWin     cifs   noauto,username=administrator 0 0
//sparksVaio/C$     /mnt/vaio      cifs   noauto,username=administrator 0 0
//sparks9k/Main     /mnt/9kWin     cifs   noauto,username=administrator 0 0

# NFS network

# hp:/mnt/c         /mnt/dummy1    nfs     noauto,_netdev 0 0

# Loop mount example

# /mnt/Mac.hfs      /mnt/mac       hfs     noauto,loop 0 0

Example /etc/exports:

/mnt/back      *,no_root_squash)
/mnt/dos       *,no_root_squash)
/var/www/html  *,no_root_squash) 

Example grub2 configuration

For non-EFI systems, there is a symbolic link:

/etc/grub2.cfg -> /boot/grub2/grub.cfg

For EFI systems, the configuration file is:


Here is a typical menu entry:

menuentry 'Fedora (4.18.13-100.fc27.x86_64)' ... {
    set gfxpayload=keep
    insmod gzio
    insmod part_gpt
    set root='hd0,gpt2'
    if [ x$feature_platform_search_hint = xy ]; then
      search --no-floppy <lots of places>
    else
      search --no-floppy --fs-uuid --set=root f29f6832999194c6
    fi
    linuxefi /fedora@/boot/vmlinuz-4.18.13-100.fc27.x86_64 ro rhgb quiet
    initrdefi /fedora@/boot/initramfs-4.18.13-100.fc27.x86_64.img
}

Example /etc/sysconfig/static-routes

When a device is started, the static-routes file is read by the script ifup-routes. Each line whose first field matches the device is read as:

read device args

The routes are added by a script that performs "route add" (Note the minus character before $args)

route add -$args $device

For example: (This is used to route back to basilisk)

eth0 host gw


List available ciphers

openssl list-cipher-commands

Encrypt a document using openssl

openssl des3 -salt -in mydoc.txt -out mydoc.txt.des3

Decrypt a document using openssl

openssl des3 -d -salt -in mydoc.txt.des3 -out mydoc.txt


The davfs filesystem allows you to treat remote webdav shares like regular mountpoints.

Install the package:

dnf install davfs2

That will create a file:


Edit this file, adding a line for each of your remote shares: remoteUserName remotePassword

For example a Nextcloud share looks like this: remoteUserName remotePassword

Now you can mount the share from the command line:

mkdir myStuff

mount -t davfs myStuff

Or in fstab: /mnt/myStuff davfs rw,noauto 0 0

Or with autofs, in /etc/auto.mount:

myStuff -fstype=davfs :https\://

Note that you must escape the URL ":" in the auto.mount file.

Example /etc/dhcp/dhcpd.conf:

# Shared

ddns-update-style none ;
option ###domain-name "" ;
option domain-name-servers ;
log-facility local1;

subnet netmask
{       authoritative ;

subnet netmask
{       option routers ;
        option subnet-mask ;
        option broadcast-address ;
        option domain-name "" ;
        option domain-name-servers ;
        {       range ;

# Fixed

host sparksc2
{       hardware ethernet 00:16:36:a5:fe:fb ;

host sparksc2w
{       hardware ethernet 00:18:de:5b:50:c6 ;

host cgsVaio
{       hardware ethernet 00:01:4a:16:b6:58;


Display hardware

All hardware:

lshw
Display hardware summary:

lshw -short

Display hardware of a selected class:

lshw -class <name>

Class names:



lshw -short -class processor

Display hardware with a GUI:


List subsystems

List CPU information

lscpu
List PCI devices

lspci
List USB controllers

lsusb
List block devices

lsblk
Display information from the DMI tables:


dmidecode -t system


dmidecode -t memory


dmidecode -t processor


dmidecode -t bios

Use motherboard and chip sensors

To get the best effect, you first have to scan your hardware:

sensors-detect
To see everything:

sensors
To monitor continuously every second:

watch -n 1 sensors

Disk drives

List all drives

fdisk -l

Display or modify partitions

fdisk /dev/devname

Display UUID, disk label, and other attributes

blkid /dev/sda1

Set and display disk drive features

hdparm options /dev/sda

With no options, displays brief drive information. There are dozens of options to control low-level device features. Most of these are already optimized by default and can be dangerous to manipulate. The most common use of hdparm is for setting and displaying power saving features.

-I      Display extended drive information.
-S  n   Spin down after n ticks of 5 seconds each (0 <= n <= 240)
-B  Display the state of advanced power saving features.
-B  1..127
    Enable advanced power saving features including spindown.
    1: Max power saving. 127: Min power saving.
-B  128..255
    Enable advanced power saving without spindown.
    128: Max power saving. 255: Disable power saving.
-y      Go to low power state (usually includes spindown)
-Y      Go to lowest power state. (requires reset)
-C      Show the current power saving state:
    active/idle, standby, sleeping
-t  Perform & display drive speed test results.

Hot swap a disk

For these examples, the device is /dev/sdX.

For luck, tail the journal in a separate shell window:

journalctl -f -t kernel 

Unmount file systems that use the device:

It depends. You have to know...

Spin the drive down:
hdparm -Y /dev/sdX

Remove from kernel:

echo 1 > /sys/block/sdX/device/delete

Attach a new disk:

Plug it in. I like to attach the power cable first,
wait a bit, and then attach the SATA connector.

If it's an ATA controlled device, it will be recognized
and attached. If not, you can try a re-scan:

Scan for new devices:

tee /sys/class/scsi_host/host*/scan <<<'- - -' >/dev/null


DNF is the successor to Yum.

Install a package

dnf install <packageName>

Remove a package

dnf remove <packageName>

List installed packages

dnf list installed

List package groups

dnf group list

Install a group

dnf group install <groupName>

Find packages (installed or not)

dnf list <partial package name>*

Get info about a package:

dnf info <packageName>

List installed repositories

dnf repolist

Get updates

dnf update

Install a new repository:

dnf config-manager --add-repo

Turn off gpg checking

dnf config-manager --save --setopt=server.csparks.com_rpms.gpgcheck=0


Using DNS at home

I find that "things go better with DNS". This applies to lots of programs including MySQL and sendmail.

I host my own domain and 2 or 3 others without having external secondary servers. This causes the angels to weep. But even if you just keep a local LAN, running DNS is convenient. If you don't host externally visible domains, proceed as shown below but leave out the external view in /etc/named.conf.

The following sections show samples of each configuration file. The zone files "*.zone" and "*.rev" contain a serial number that must change each time you edit the file.

Tell named to reload the configuration after changes:

rndc reload

If you try to use systemctl restart named, you will get an error because the socket is in use.


multi on
order hosts,bind



/etc/hosts: localhost localhost.localdomain




acl "mynet" { 127/8;; } ;

options {
    listen-on port 53 { any; };
    // listen-on-v6 port 53 { ::1; }; // Hugh
    directory   "/var/named";
    dump-file   "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    secroots-file   "/var/named/data/named.secroots";
    recursing-file  "/var/named/data/named.recursing";
    allow-transfer { none; };
    allow-notify { none; };
    version "unknown" ;

    dnssec-enable yes;
    dnssec-validation yes;

    managed-keys-directory "/var/named/dynamic";
    geoip-directory "/usr/share/GeoIP";

    pid-file "/run/named/";
    session-keyfile "/run/named/session.key";

    include "/etc/crypto-policies/back-ends/bind.config";

logging {
    channel default_debug {
            file "data/";
            severity dynamic;

include "/etc/named.root.key";

view "internal"{

    match-clients { mynet;  };
    recursion yes;
    allow-query { mynet; };
    allow-query-cache { mynet; };
    allow-recursion {; mynet; };
    match-recursive-only yes ;

    zone "." IN {
        type hint;
        file "";

    include "/etc/named.rfc1912.zones";

    zone "" {
        type master;
        file "";

    zone "" {
        type master;
        file "csparks.internal.rev";

    zone "" {
        type master;
        file "";

}; // end of internal view

view "external"{

    match-clients { any; };
    recursion no;
    allow-query { any; };
    allow-recursion { none; };
    match-recursive-only no ;

    zone "." IN {
        type hint;
        file "/var/named/";

    zone "" {
        type master ;
        file "" ;
        allow-transfer { ; };
        also-notify { ; };

    zone "" {
        type master ;
        file "" ;

}; // end of external view


zone "localhost.localdomain" IN {
    type master;
    file "named.localhost";
    allow-update { none; };

zone "localhost" IN {
    type master;
    file "named.localhost";
    allow-update { none; };

zone "" IN {
    type master;
    file "named.loopback";
    allow-update { none; };

zone "" IN {
    type master;
    file "named.loopback";
    allow-update { none; };

zone "" IN {
    type master;
    file "named.empty";
    allow-update { none; };


@       IN      SOA (
                    2010040701      ; serial: todays date + todays serial
                    8H              ; refresh, seconds
                    2H              ; retry, seconds
                    4W              ; expire, seconds
                    1D )            ; minimum, seconds

            MX 10
            MX 20
server          A
mail            A
dns1            A
dns2            A

another     A

ftp             CNAME   server
www             CNAME   server
shell           CNAME   server 


@       IN      SOA (
                    2010040701 ; Serial, todays date + todays serial
                    8H         ; Refresh
                    2H         ; Retry
                    4W         ; Expire
                    1D)        ; Minimum TTL


2               PTR
3               PTR


Send mail from the command line

Multiline text body:

mail -s 'A subject string'
Type your message here
and end with a <control>d 

One line:

echo "This is a one line message" | mail -s 'A subject string'

Message from a file:

cat messageBody.txt | mail -s 'A subject string'

Using mutt to send attachments

echo "See enclosed document" | mutt -s 'A subject string' -a 'myFile.bin' --

You can add the message body text by any means shown in the previous section for the "mail" command.

Talk to sendmail directly for debugging

This will create a minimal message:

telnet 25
mail from:
rcpt to:
Your message text goes here on one
or more lines. The last line must be a period:

This will create a message with all standard fields:

telnet 25
mail from:
rcpt to:
To: Recipient Display Name <>
From: Sender Display Name <>
Subject: This is a test

Type your message here and end with a dot:

Note that two newlines (CR/LF pairs) are required after the Subject line. If one or both display names aren't known or provided, use the respective email addresses without the angle brackets.

Talk to a POP server directly for debugging

telnet <destinationMachine> 110
USER <yourEmailAddress>
PASS <yourPassword> 

Talk to an IMAP server directly for debugging

telnet <destinationMachine> 143
a login <yourUsername> <yourPassword>
a select inbox
a fetch <n> full
a fetch <n> body[header]
a fetch <n> body[text]
a logout

Display outgoing mail queue

sendmail -bp



Configure sendmail as a server

This is only useful if your machine will act as a mail server for your domain. It is not necessary if you send and receive email via an ISP. This is not an adequate recipe if you intend to host multiple domains.

Changes for /etc/

Enable listening on the external smtp port

dnl DAEMON_OPTIONS(Port=smtp,Addr=, Name=MTA)dnl

Masquerade changes header fields on outgoing mail so they always appear to come from rather than from whatever machine on your internal LAN was the source. That's almost always what you want to do:


If-and-only-if you don't have DNS (bind) configured, you need to explicitly tell sendmail your server's host name. This should match whatever an external reverse-lookup of your IP address returns. If the names don't agree, some remote servers may reject mail from your domain.

define(`confDOMAIN_NAME', `')dnl

After changing /etc/mail/ you need to run the macro processor:

m4 /etc/mail/ > /etc/mail/

Note: On Fedora systems, you can rebuild and any modified database files by simply running "make" in the sendmail directory.

Enable relaying for your domain in /etc/mail/access. This allows other machines on your LAN to send mail through the server to other destinations:

Connect:localhost.localdomain   RELAY
Connect:localhost       RELAY
Connect:       RELAY
       550 You are a poltroon
       RELAY

Rebuild the database (access.db)

makemap hash /etc/mail/access < /etc/mail/access

Populate local-host-names with all domain names and host names sendmail will accept as mail recipients: The name of your server "" should match the name you specified for the MX record when you registered your domain name.

If you have linux client machines running on your internal LAN that will send mail via your server, they need to have the "dotted" name of the mail server in their /etc/hosts file. (This is not necessary if you are running a properly configured DNS server.) Note the trailing dot:

Enable the sendmail daemon at boot time:

chkconfig --add sendmail

Restart the server after making changes:

service sendmail restart

Show pending outgoing mail

sendmail -bp

Or simply:

mailq
Reroute mail sent to your server

The /etc/virtusertable is used to reroute mail. The left column has addresses for domains or email addresses accepted by your server. (You listed them in local-host-names.) The right column has the destination where the mail will be sent:

You can also send the same message to multiple destinations: 

This is a catch-all entry for

You can also send mail to the same user on a different domain:

In the example above, the %1 matches the username on mail directed to

Redistributing local mail via aliases

The /etc/aliases file redirects mail accepted for local delivery. It is used after the virtusertable does its thing. It has a similar format, but note the required colons:

root:       yourname
postmaster:     yourname
happylist:  yourname,bill,jane,walter,

Note that the last line implements a simple mailing list. The last member is on a remote machine.

Note: The "postmaster" is a required user on a domain that conforms to the email RFCs. If you discard mail not directed to known local users in virtusertable, you should match and redirect postmaster in that file first, because otherwise it will never reach the aliases redirect.

Configure the IMAP server

Entry for /etc/xinetd.d

service imap
{   socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/imapd
    disable         = no
}

Create an md5 password file owned by root:

touch /etc/cram-md5.pwd

Add one line for each imap user of this form:


Both pop & imap will use this file to avoid transmitting clear-text passwords.

After editing, the file permissions should be changed:

chmod a-rwx,u+r /etc/cram-md5.pwd

Simple mailing lists

Edit /etc/aliases and add:

mything-users:  yourname,localuser1,localuser2,

If the member list gets too long, you can put them in a file:

mything-users:  :include:/home/lists/mything-users

In the file /home/lists/mything-users:



755     /home/lists
644     /home/lists/mything-users

It's best to convince list users to add a prefix to their subject lines:

[mything-users] Whatever

Mailing lists using GNU mailman

This example assumes you have installed a redhat/fedora mailman rpm.

Edit: /usr/lib/mailman/Mailman/

Modify these definitions:


Create the "mailman" mailing list:

cd /usr/lib/mailman
./bin/newlist mailman

You will be asked to provide your email address and a password. A list of alias definitions are presented and you must copy these into:


Then run:


Provide a site password by running:

cd /usr/lib/mailman
./bin/mmsitepass
Configure the system service

chkconfig mailman on
service mailman start

Edit the httpd configuration file in:


Un-comment and edit the line at the end to redirect mailman queries on your server, then restart httpd:

service httpd restart

Now you can visit

Check your own email and you should see the creation announcement for the new list "mailman."

To create new lists:

cd /usr/lib/mailman
./bin/newlist mynewlist

To delete a list

cd /usr/lib/mailman
./bin/rmlist listname

To remove all the associated archives as well:

./bin/rmlist -a listname

File systems

Format a floppy disk

fdformat /dev/fd0H1440
mkfs -t msdos /dev/fd0H1440 1440

When putting ext2 on a floppy, omit the su reserve:

mkfs -t ext2 -m 0 /dev/fd0H1440 1440

Some-but-not-all floppies can be enlarged:

fdformat /dev/fd0u1722

Mount filesystems

mount -t iso9660 -ro /dev/hdc /mnt/cdrom
mount -t vfat /dev/hda5       /mnt/dos
mount -t ext2 /dev/sda3       /mnt/jazz
mount -t ntfs /dev/hda1       /mnt/nt
mount -t cifs //sparks750/c   /mnt/sparks750
mount -t cifs //qosmio/c$     /mnt/qosmio -o username=ann,password=nelly
(See fstab below for more cifs options)
mount -t hfs  /dev/sda /mnt/jazz -o afpd -o uid=500
    (Currently, the afpd option hangs up the Mac...)
mount -t nfs /mnt/macroot

To support nfs mounts, remote system must have /etc/exports: /root *

Mounting labeled devices: e2fs and vfat partitions may be assigned labels.

To use a label:

mount -t ext3 -L mylabel /mnt/stuff

Newer versions of Linux figure out the filesystem type automatically so the -t option can often be omitted.

Labeling e2fs partitions

e2label /dev/sdb3 mylabel

Labeling vfat partitions

There is no simple tool like e2label for vfat partitions. First, you must mount the partition the old way. For this example, we assume it's /dev/sda3.

mount -t vfat /dev/sda3 ~/here

Now add a line in /etc/mtools.conf:

drive x: file="/dev/sda3"

Assign the partition a new label:

mlabel x:MYLABEL

Display the label:

mlabel -s x:

You can remove the line added to /etc/mtools.conf and unmount the partition:

umount ~/here

From now on, you can mount it using the label:

mount -t vfat -L MYLABEL ~/here

Or with a line in /etc/fstab:

LABEL=MYLABEL /mnt/myThing  vfat defaults 0 2 

This is especially nice for USB memory sticks because they will be associated with different devices depending on their mount order.

Make and mount a file system inside a file

dd if=/dev/zero of=MyDiskImage.ext2 bs=1k count=1000
mkfs -t ext2 MyDiskImage.ext2
mkdir here
mount -t ext2 -o loop MyDiskImage.ext2 here
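
For a large image, the dd step above spends time writing real zeros. A quicker variant (a sketch, relying on GNU dd's seek= with a size suffix) creates a sparse file of the same apparent size, which mkfs formats just as happily:

```shell
# Seek past the end without writing any data: the result is a sparse
# file whose disk blocks are allocated only as the filesystem uses them.
dd if=/dev/zero of=MyDiskImage.ext2 bs=1 count=0 seek=10M
ls -ls MyDiskImage.ext2    # apparent size 10485760 bytes, few blocks in use
```

The mkfs and loop-mount commands then work exactly as shown above.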

Make and mount a file system using a loop device

Show all active (in use) loop devices:

losetup -a

Show the first available loop device:

losetup -f

Attach a loop device to a file:

losetup /dev/loop0 MyDiskImage.ext2

Mount the device with a specified filesystem:

mount -t ext2 /dev/loop0 here 

This does the same thing as mount with the -o loop option. The -o loop form is easier because you don't have to find and specify a free loop device yourself.

When you are finished, unmount the volume:

umount here

And free the loop device:

losetup -d /dev/loop0

Make and format a Macintosh filesystem inside a file

dd if=/dev/zero of=MacDiskImage.hfs bs=1k count=whatever
hformat -l "HD1" MacDiskImage.hfs

Check and repair a filesystem

e2fsck -cfpv 

-c Check for bad blocks and put them on the "don't use" list
-f Force checking even if filesystem is "clean"
-p Perform automatic filesystem repairs
-v Verbose mode

Show free space on all file systems

df -h 

The -h option selects human-readable units.
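
The same scaling can be applied to raw byte counts in scripts using numfmt (assuming a reasonably recent GNU coreutils):

```shell
# Convert byte counts into the binary-suffix units that df -h displays
numfmt --to=iec 1048576       # 1.0M
numfmt --to=iec 5368709120    # 5.0G
```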

List subdirectories sorted by size

du -sh -- */ | sort -rh
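
The -h flag on sort understands the unit suffixes that du -h emits, so the sizes order numerically rather than alphabetically. A self-contained demonstration:

```shell
# sort -h compares human-readable sizes (900 < 12K < 3M < 1G);
# -r reverses the order so the largest entry comes first
printf '12K\n3M\n1G\n900\n' | sort -rh
```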

Show details about a linux file system

tune2fs -l /dev/hdax

Create an ext3 file system

mkfs -t ext2 -j /dev/hdax

Convert ext2 to ext3

tune2fs -j /dev/hdax

Resize a file system (offline)

If necessary, revert from ext3 to ext2 first (see below), although I have heard that this step is unnecessary.

umount /dev/hda1
e2fsck -f /dev/hda1
resize2fs /dev/hda1 newSizeInBlocks
mount /dev/hda1 /mnt/point

If newSize is not specified, the file system will grow to fill the partition.

After shrinking a file system, you can shrink the partition to match.

After growing a partition, you can grow the file system to match.

Revert an ext3 file system to ext2

umount /dev/hda1            # Unmount the partition
tune2fs -O ^has_journal /dev/hda1   # Turn off journaling
e2fsck -y /dev/hda1         # Check for errors
mount -t ext2 /dev/hda1 /mnt/point  # Remount as ext2
cd /mnt/point               # Go to root directory
rm -f .journal              # Remove the journal

You must update the entry in fstab if this is a permanent change.

Ext3 should be reverted to ext2 before resizing.

Convert an ext2 file system to ext3

tune2fs -j /dev/hda1

Edit fstab to indicate ext3.

If this is the root partition, you need to use an initrd to boot.

See redhat documentation for details.

Create and use an encrypted LUKS volume

This is the new and preferred way to handle file system encryption. See the next section on the older default method.

Create a zvol named "temp2"

zfs create -V 32G zool/temp2

Randomize it: (optional, very slow, more secure)

badblocks -c 10240 -s -w -t random -v /dev/zool/temp2

Format with luks

cryptsetup luksFormat /dev/zool/temp2

Mounting the encrypted volume

cryptsetup luksOpen /dev/zool/temp2 goom

The previous command creates a device node named "goom":

/dev/mapper/goom
Format the volume

mkfs -t ext4 /dev/mapper/goom

Mount the filesystem

mount /dev/mapper/goom ~/here

To close down everything:

umount ~/here
cryptsetup luksClose goom

Create and use an encrypted (default) volume

Create OR mount an existing encrypted device

cryptsetup create gark /dev/zool/temp2

The previous command creates a device node named "gark":

/dev/mapper/gark
Format the volume

mkfs -t ext4 /dev/mapper/gark

Mount the filesystem

mount /dev/mapper/gark here

To close down everything:

umount here
cryptsetup remove gark

Automatically mount file systems

Configure autofs and you'll never have to type mount commands again!

The autofs service must be running for this to work.

service autofs status

If autofs was not running, you can start it using:

service autofs start

Configure autofs to start after reboot:

chkconfig autofs on

Configuration files

The master file specifies one or more directories where mount points will be automatically created and the files that contain the items to be mounted.

In /etc/auto.master:

    /mnt   /etc/auto.mount
    /goop  /etc/auto.goop

Each mount point file contains any number of lines of the form:

mount-point-name  -fstype=filesystem,options  :device

An example:

In /etc/auto.mount:

    dvd     -fstype=iso9660,ro,nosuid,nodev  :/dev/cdrom
    stick   -fstype=auto  :/dev/sdb1
    floppy  -fstype=auto  :/dev/fd0
    asus    -fstype=cifs,rw,noperm,username=xxxx,password=yyyy ://asus/mydir

After editing these files, you must reload:

service autofs reload

You can now access the contents of these directories by simply using them:

ls /mnt/stick
cd /mnt/asus

The autofs daemon will unmount these resources when they are unused for a specified time. This timeout can be configured in:


The timeout is specified in seconds using this expression:


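
The configuration file name varies by distribution, so treat the paths here as assumptions: on newer Fedora/RHEL releases the timeout lives in /etc/autofs.conf, on older ones in /etc/sysconfig/autofs. Typical fragments:

```
# /etc/autofs.conf (newer releases)
[ autofs ]
timeout = 300

# /etc/sysconfig/autofs (older releases)
TIMEOUT=300
```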

Firewall

See also IPTables

Open the firewall for a service:

firewall-cmd --add-service=samba
firewall-cmd --remove-service=samba

Remove a service

firewall-cmd --remove-service=<name>

List open services

firewall-cmd --list-services

List all possible service names

firewall-cmd --get-services

Open a port:

firewall-cmd --add-port=port/proto
firewall-cmd --remove-port=port/proto

Open a UDP port (ethercat)

firewall-cmd --add-port=34980/udp

Open a range of ports

firewall-cmd --add-port=8000-8100/tcp

Close a port

firewall-cmd --remove-port=<number>

List open ports:

firewall-cmd --list-ports

Protocol values:

tcp, udp, (others)

Non-permanent changes take effect immediately.

Make changes permanent by adding:

--permanent
After making a permanent change:

firewall-cmd --reload

You can make all non-permanent changes permanent using:

firewall-cmd --runtime-to-permanent

Go in/out of panic mode

firewall-cmd --panic-on
firewall-cmd --panic-off

Firewire

Load the firewire packet module

modprobe ieee1394

Load the firewire card controller

modprobe ohci1394

The ohci module will recognize your disk as a SCSI device and automatically load the serial bus protocol (sbp2) module.

If you need to see what's going on for debugging, do a tail -f /var/log/messages in another shell window before you load the module.

Scan the bus for the SCSI address

cdrecord --scanbus

Mine was at SCSI address 2,0,0 so it is /dev/sdb.

If the result had been 1,x,y it would be on /dev/sda.

Use fdisk to find the partition name

fdisk /dev/sdb

I found the DOS partition on the ipod at /dev/sdb2

Create a mount point

mkdir /mnt/ipod

Mount the device by hand

mount -t vfat /dev/sdb2 /mnt/ipod

Example fstab entry

/dev/sdb2  /mnt/ipod  vfat  noauto 0 0

Mount the device when an fstab entry exists

mount /mnt/ipod

Before you remove the device!

umount /mnt/ipod
rmmod sbp2

After the rmmod, the iPod will tell you that it's ok to disconnect. This precaution should be observed before unplugging any firewire disk.

Remounting (With firewire and ohci already loaded)

modprobe sbp2
mount /mnt/ipod


The gdb debugger

gdb <program>     # Start gdb and select the program
file <program>    # Specify or change the program in gdb
attach <pid>      # Attach to a running process
quit              # Exit debugger


cd                # Change directories
pwd               # Show working directory
edit <fileName>   # Edit a file


run <args>        # Start program with parameters
start <args>      # Start program and break at main()
step              # Step into
next              # Step over
cont              # Continue from break
jump <line>       # Jump to <line>
finish            # Finish this function and return to caller
return            # Return now (skip rest of func)
return <expr>     # Return now with this value


list <loc>        # List source starting at location
list <l1>,<l2>    # List source from l1 to l2
list              # No <line> continues listing
directory path    # Add a source file directory


frame             # Show current execution context
frame <n>         # Switch context to frame n
backtrace         # Show all frames on the stack
up                # Switch context up one frame
down              # Switch context down one frame


break <loc>       # Set breakpoint
clear <loc>       # Clear breakpoint


watch <expr>      # Watch the value

Modifying breakpoints or watchpoints

delete <n>        # Delete by number
disable <n>       # Disable by number
enable <n>        # Enable by number
condition <n> <boolean expression>


display <expr>    # Print value at each break
undisplay <n>
enable display <n>
disable display <n>


print <expr>      # Show value of expression
print/x <expr>    # Show value in hex
set <var> <expr>  # Change the value of a variable
whatis <var>      # Show the type of a variable


info breakpoints  # Show breakpoint locations and numbers
info watchpoints  # Show current watchpoints
info display      # Show displays
info args         # Show args of the context frame
info locals       # Show local vars
info variables rx # Show global vars. Rx=regexp
info functions rx # Show functions. Rx=regexp
info threads      # Show threads and process pid
info macro <name> # Show macro definition


234               # A line number
*$pc              # The current execution location
myfun             # A function
myFile.c:234      # A line in a source file
myFile.c:myfun    # A function in a file

Real-time programs

Programs that respond to high rate real-time events (SIG34) are difficult to debug without these steps:

In the user's home directory, create the file:

.gdbinit
With contents:

handle SIG34 nostop noprint pass


set print thread-events off   # Stop annoying thread state messages
dll-symbols myThing.dll       # Preload dll symbols

Stack trace a hung process

Attach the process:

strace -p processId

Then issue ^C to break and display the stack.

ImageMagick

Resize images by percentage

mogrify -resize 50% *.jpg

Resize images to specified width (height will be proportional)

mogrify -resize 400 *.jpg

Convert color images to grayscale (black and white)

mogrify -colorspace gray *.jpg

Convert all gifs to jpgs

mogrify -format jpg *.gif

Rotate a jpg 90 degrees clockwise, width equals height

mogrify -rotate 90 myfile.jpg

Rotate a jpg 90 degrees clockwise, width greater than height

mogrify -rotate "90>" myfile.jpg

Rotate a jpg 90 degrees clockwise, width less than height

mogrify -rotate "90<" myfile.jpg
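
One caution that applies to all the mogrify examples above: mogrify rewrites its input files in place. A small sketch that snapshots the originals first (the backup directory name is arbitrary):

```shell
# Keep a pristine copy of every jpg before letting mogrify loose
mkdir -p originals
for f in *.jpg; do
    [ -e "$f" ] || continue     # skip the literal pattern when no jpgs exist
    cp -n "$f" originals/       # -n: never clobber an existing backup
done
# now it is safe to run, e.g.: mogrify -resize 50% *.jpg
```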

Change photo file dates to match EXIF dates

jhead -ft *.jpg

Grub2 boot loader

A staggering amount of disinformation exists on the web about configuring grub2. This is due to differences between Linux distributions as well as frequent changes to the grub2 package. When people write helpful tutorials like this one and then forget about them, the world deteriorates.

This section describes the correct procedure for UEFI firmware as of 2024-01-24. If you're reading this in the far future, these examples are probably incorrect.

METHOD 1: Passing kernel command line options with grubby

This appears to be the new standard way used by Fedora installers. In this case, you can do without this file entirely:

/etc/default/grub
Instead, each kernel command line parameter is set using "grubby".

Example: Blocking the nouveau driver when using proprietary nvidia drivers:

grubby --update-kernel=ALL --args="rd.driver.blacklist=nouveau"
grubby --update-kernel=ALL --args="modprobe.blacklist=nouveau"
grubby --update-kernel=ALL --args='nvidia-drm.modeset=1'
grubby --update-kernel=ALL --args="ibt=off"

Example: Get rid of selinux:

grubby --update-kernel=ALL --args="selinux=0"

Example: Turn off auditing:

grubby --update-kernel=ALL --args="audit=0"

Example: Booting with root file system on ZFS:

grubby --update-kernel=ALL --args="root=ZFS=zool/fedora"

In this example, "zool" is the name of your rootfs pool and "fedora" is the dataset name.

Removing a kernel command line parameter:

grubby --update-kernel=ALL --remove-args="audit"

METHOD 2: Using the /etc/default/grub file:

Edit: (or create)

/etc/default/grub
Inside, define (for example)

GRUB_CMDLINE_LINUX="rd.driver.blacklist=nouveau modprobe.blacklist=nouveau"

Now run:

grub2-mkconfig -o /etc/grub2-efi.cfg

This will update the BLS entries for all your kernels in:

/boot/loader/entries/
Repairing a damaged grub configuration:

Before Fedora 34 and currently on many other grub2-based linux distributions, this command was used to update the boot file after editing /etc/default/grub:

grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

This procedure is documented on countless websites.

After Fedora 33, the procedure changed:

grub2-mkconfig -o /etc/grub2-efi.cfg

The old grub.cfg file is now a short dispatcher script you should never change:

/boot/efi/EFI/fedora/grub.cfg
If you accidentally overwrote /boot/efi/EFI/fedora/grub.cfg by using grub2-mkconfig the old way, you can repair your installation:

Remove all existing grub.cfg files:

rm /boot/efi/EFI/fedora/grub.cfg
rm /boot/grub2/grub.cfg

Then reinstall grub:

dnf reinstall shim-* grub2-efi-* grub2-common


The exceptionally alert reader will notice that there are two symbolic links that point to the same file:

/etc/grub2-efi.cfg      => ../boot/grub2/grub.cfg
/etc/grub.cfg           => ../boot/grub2/grub.cfg

This is not a mistake. The magic will work.


To see all the hardware in your system as a formatted outline:

lshw
To see a shorter summary:

lshw -short

You can restrict the display using the -class name option:

lshw -class <name>

Commonly used class names:

memory, processor, disk, storage, network, display
The options can be combined. Example:

lshw -short -class memory

Initramfs

This is where the boot file system lives, packed into a file. Example:

/boot/initramfs-<kernel version>.img
A new img file is created whenever you update the kernel. This is usually done automatically by the "dnf update" process. To recreate the file by hand:

dracut -fv

If you need to create the img file for some other kernel version:

dracut -fv --kver <kernel version>

Where kernel version has the form:

5.11.12-300.fc34.x86_64
You can see the kernel version string for the running kernel using:

uname -r

To list the contents of the initramfs:

lsinitrd /boot/<any ramfs img file>

To unpack an initramfs file:

mkdir /tmp/initramfs
cd /tmp/initramfs
cp /boot/initramfs-x.y.z.img .
/usr/lib/dracut/skipcpio initramfs-x.y.z.img | zcat | cpio -ivd

IPMI

Server motherboards often have dedicated LAN ports for remote console access. This lets you see and interact with the boot/BIOS process even on headless machines.

dnf install freeipmi

ipmiconsole -h hostIpmiAddress -u myUser -p myPassword

The hostIpmiAddress must be configured in the BIOS or have a known default before this will work. It's convenient to change it so it's on the same LAN as the host.

Other IPMI tools:

dnf install ipmitool

ipmitool chassis status
ipmitool sensor list
ipmitool sdr
ipmitool lan print
ipmitool shell

The ipmitool examples are for local access. To use them remotely, add:

-H hostIpmiAddress -U myUser -P myPassword ...

IPTables

See also Firewall

Incoming and outgoing IP packets pass through chains. A chain is a list of rules. A rule specifies a pattern to match in an IP packet's header. If the rule does not match, the packet is passed on to the next rule in the chain. If the rule matches, the packet is passed to the target. The target of a rule can be another chain or one of the special targets: ACCEPT, DROP, QUEUE or RETURN.

ACCEPT - Let the packet through
DROP   - Throw the packet away
RETURN - Leave this chain and let the caller decide.
QUEUE  - Pass the packet to an external program. 

There are built-in chains and user-defined chains. If a packet 'runs off' the end of a user-defined chain without triggering a rule, RETURN is the default target. If a packet runs off the end of a built-in chain, a default target is selected. This target is configured by a command that sets the default chain policy.

Chains are organized into named tables. There are two commonly used tables: "filter" and "nat". Both of these tables have some built-in chains that are connected in a flow diagram. (A link to the diagram is in the next section.)

Chains have names local to their parent table. It is convenient to think of the complete name of a chain as the concatenation of the table name and the chain name. (Different tables may use the same local chain names.)

When a packet arrives for processing by the firewall, its source and destination address are examined to determine which built-in filter chain should be used:

INPUT   - Destination is on this machine.
OUTPUT  - Source is on this machine, destination is elsewhere.
FORWARD - Source and destination are elsewhere.

The FORWARD chain is exclusive: packets that arrive from outside to be routed elsewhere do not pass through the INPUT or OUTPUT chains.

The "nat" table contains chains for packets that get altered by rules. Built-in chains for "nat":

PREROUTING  - Alters packets before routing to INPUT or FORWARD.
OUTPUT      - Alters locally generated packets before the filter OUTPUT chain.
POSTROUTING - Alters packets after OUTPUT or FORWARD.

PREROUTING is used to alter the packet destination (DNAT). This is used, for example, when you want to route mail or web traffic to some other machine on your LAN.

POSTROUTING is used to alter the packet source (SNAT). This is used to allow machines on your LAN to share a single IP address on the internet.

IPTables flow diagram

To really see what's going on, you need to study this diagram.

Commonly used flags for creating rules

-t TableName (default is filter)
-A ChainName to append this new rule
-s Source IP address
-d Destination IP address
-i Input interface
-o Output interface
-p IP protocol
-j Target
--sport Source port
--dport Destination port 


To drop all packets from an ip address stored in "badGuy":

iptables -t filter -A INPUT -i eth0 -s $badGuy -j DROP 

To pass all mail arriving on "netDev" to "anotherIP":

iptables -t nat -A PREROUTING -i $netDev -p tcp \
    --dport smtp -j DNAT --to-destination $anotherIP:smtp 

In the example above, the packet destination will be altered so it goes to $anotherIP. The FORWARD chain will then process the packet because the source and destination are now external. If the default policy for the FORWARD chain is not ACCEPT, you need to add this rule:

iptables -t filter -A FORWARD -i $netDev -p tcp \
    --dport smtp -d $anotherIP -j ACCEPT

TCP/IP header diagram

The flags are used to match various parts of the IP and/or TCP header.

To really see what's going on, you need to study this diagram: TCPIPHeaders.txt

Commonly used IP protocols

tcp, udp, icmp

Commonly used ports

http, ftp, nntp, pop3, imap, smtp, ssh, domain

Remove all rules on a chain or on all chains (--flush)

iptables -F optionalChainName

Delete a chain or all chains (--delete-chain)

iptables -X optionalChainName

Zero packet & byte counters in all chains (--zero)

iptables -Z optionalChainName

Create new chain (--new-chain)

iptables -N newChainName

Apply a default policy (--policy)

Only valid for built-in chains (INPUT, OUTPUT, etc.) The policy target cannot be another chain.

iptables -P chainName target

List the rules in a chain

iptables -L optionalChainName

Display all the rules or rules for a specified chain

iptables -L optionalChainName -n -v

Reset (eliminate) a firewall

iptables -t filter -F
iptables -t filter -X
iptables -t filter -Z

iptables -t nat -F
iptables -t nat -X
iptables -t nat -Z

iptables -P INPUT ACCEPT

Target for logging a rule (must go before the planned action)

-j LOG --log-prefix "Firewall: My rule fired"

Enable forwarding NAT when the server has a static IP address

The static IP of the server is in the variable $inetIP

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o $inetDev -j SNAT --to-source $inetIP
iptables -A FORWARD -i $lanDev -j ACCEPT

Enable forwarding NAT when the server has a dynamic IP address

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/ip_dynaddr
iptables -t nat -A POSTROUTING -o $inetDev -j MASQUERADE

Forwarding a port to another server

iptables -t nat -A PREROUTING -i $inetDev -p $proto --dport $port \
    -j DNAT --to-destination $targetIP:$port
iptables -A FORWARD -i $inetDev -p $proto --dport $port \
    -d $targetIP -j ACCEPT 

Where:
    $inetDev  = Device for incoming packets
    $proto    = Protocol: tcp, udp, or icmp
    $port     = The port you want to forward
    $targetIP = The target server
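
These rules need root and a live interface, so it can help to generate and review them before applying anything. A sketch (the function name and sample values are made up for illustration):

```shell
# Print the pair of port-forwarding rules for a given device, protocol,
# port, and target server - review the output, then pipe it to sh as root.
gen_port_forward() {
    dev=$1; proto=$2; port=$3; target=$4
    echo "iptables -t nat -A PREROUTING -i $dev -p $proto --dport $port -j DNAT --to-destination $target:$port"
    echo "iptables -A FORWARD -i $dev -p $proto --dport $port -d $target -j ACCEPT"
}

gen_port_forward eth0 tcp 8080 192.168.1.50
```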

Simple iptables firewall

My firewall

Automatic iptables using the redhat init script

When the system boots, the firewall configuration is restored from:

/etc/sysconfig/iptables
This file can be updated by using the command

iptables-save > /etc/sysconfig/iptables

Enable the script at boot time using

chkconfig --add iptables

Other init script operations:

service iptables start      # Apply /etc/sysconfig/iptables
service iptables stop       # Admit all packets (remove firewall)
service iptables panic      # Stop all incoming packets
service iptables restart    # Reload the tables
service iptables save       # Does iptables-save for you
service iptables status     # Display the tables

Common kernel settings for a firewall

IMPORTANT: Changing the value of ip_forward resets many other parameters to their default values. Your script should always set the value of ip_forward first!

Bash commands to configure the kernel:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts
echo 1 > /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects

Alternatively, the /proc settings may be configured in the file /etc/sysctl.conf:

net.ipv4.ip_forward = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0

At boot time, sysctl.conf is loaded by /etc/rc.d/rc.sysinit


View the startup messages

dmesg
Slow down the boot process so you can see what happens

Add 'confirm' (no quotes) to the lilo command line:

Example, at the LILO prompt:

LILO: vmLinuz confirm

Display all system version information

uname -a

Display only the kernel version string

uname -r

Specify the root device on a boot floppy

rdev /dev/fd0 /dev/hda7

Show the root device for an image file

rdev anImageFile

Set the root device for an image file

rdev anImageFile /dev/hda7

Add a device entry

mknod /dev/name type major minor
Where type is p, b, c, or u

Make a ramdisk root file system image with support for PCMCIA

pcinitrd --all myInitrdFile

Mount a RAM disk root file system image so you can poke around inside

mount -t ext2 -o loop myInitrdFile /mnt/initrd

You have to gunzip compressed images first.

Core dump file size

ulimit -c <size>

You can disable core dumps by putting "ulimit -c 0" in:

/etc/profile
Controlling PCMCIA slots

cardctl { suspend, resume, status, eject, insert } slot#
cardinfo        # X interface for cardctl

Copy raw kernel image to floppy device (obscure way)

dd if=/boot/vmlinuz of=/dev/fd0 bs=8192

DOS command to boot with a compressed RAM disk root file system

loadlin vmlinuz initrd=myGZippedFileSystemImage

Change a dynamic kernel parameter (example)

echo anInteger > /proc/sys/fs/file-max
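
Reading a parameter back needs no root privileges; every entry under /proc/sys is a plain text file:

```shell
# Read a dynamic kernel parameter (the file-max limit)
cat /proc/sys/fs/file-max
```

The sysctl command offers the same values in dotted form, e.g. sysctl fs.file-max.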

Update module dependencies after editing /etc/modules.conf

depmod -a

Tell lilo you have edited lilo.conf

lilo
Tell the kernel to flush the write-behind cache

sync
Write something in the system log (Great for system script debugging)

logger -t MyProgram "This is a message"

Also see "man initlog" for debugging init.d scripts.

Building a new kernel

Update the /usr/src/linux symbolic link to point at the sources. Go into /usr/src/linux. Back up .config to a safe place if you want to keep a copy.

make mrproper (Will delete old .config)

make xconfig (Fill in the blanks and write the .config file)
OR Copy in an old .config file and do:
make oldconfig

Edit the Makefile to bump the version number!

make dep clean bzImage install ;
make modules modules_install

If your root device has a modular driver you will need an initial ram disk at boot time. For kernel/module version xx.yy.zz use:

mkinitrd /boot/initrd-xx.yy.zz xx.yy.zz 

This will build a ramdisk file system image that contains all the loadable modules for block devices described in your /etc/conf.modules file. See also pcinitrd for PCMCIA boot devices.

Add another entry for your old kernel to lilo.conf and run lilo. Move aside any modules you don't build (like dpc). Some versions of gcc are not compatible with some kernels; Redhat supplies a "kgcc" for these systems.


OBSOLETE: This is part of the kernel make process now! Preserve the Redhat-modified /etc/pcmcia/network script. In the pcmcia-cs source directory:

make clean config

Answer the questions: take symbols from the source tree, and don't say yes to the plug & play BIOS question.

make all install

Restore the redhat version of /etc/pcmcia/network

Patch a kernel

Put the patch file in /usr/src (above 'linux') and cd there.


patch -s -p0 < patchfile

Test a patch before you apply

Add the --dry-run option

Copy raw kernel image to make a bootable floppy device

cp zImage /dev/fd0

Cross compiling a kernel

Build cross versions of binutils and gcc:


ARCH := ppc
CROSS_COMPILE =powerpc-linux-

Re-lilo a linux boot partition that is not the running system

The need for this arises when you forget to lilo a new kernel.

Boot from a CD or floppy, mount the target Linux partition. Then:

chroot linuxPartition lilo

Keyboard

Redefine the backspace/delete key

Used when telneting to unusual systems

stty erase <press a key>

Show the keycodes as you press keys

xev
Turn on autorepeat (Sometimes it goes away...)

xset r

Restore default backspace key operation

xmodmap -e "keycode 22 = BackSpace"

Restore default delete key operation

xmodmap -e "keycode 107 = Delete"

Logical volumes


Physical Volume - A whole disk or a partition on a disk.

Volume Group - A collection of physical volumes.

Logical volume - A "partition" on a Volume Group.

Getting started

If LVM has never been used on a system, first run vgscan to create the /dev directory and other structures.

Each partition must have a partition type of 0x8E. (Use fdisk)

(This does not apply if you are using a whole disk.)

Specifying size

For the commands that follow, the size may have a suffix:

None    filesystem blocks
s   512 byte sectors
K   Kilobytes
M   Megabytes
G   Gigabytes

Define each physical volume

pvcreate /dev/hdb   # A whole disk
pvcreate /dev/hda3  # A partition 

An error may be reported if you try to create a physical volume from a whole disk that had partitions defined. To destroy the partition table for a whole disk:

dd if=/dev/zero of=/dev/hdb bs=1K count=1
blockdev --rereadpt /dev/hdb

Create a volume group using several physical volumes

vgcreate myVG /dev/hdb /dev/hda3

Note: If you are using devfs, you must use the whole physical name not just the symbolic link in /dev. For example:


Extend a volume group by adding another physical volume

vgextend /dev/myVG /dev/hda5

Reduce a volume group by removing a physical volume

This can be done live, but first you have to make sure all the extents in use on the physical volume are moved to other physical volumes in the group. For example, to move everything off partition hda3:

pvmove /dev/hda3

Now it is safe to remove the physical volume:

vgreduce /dev/myVG /dev/hda3

Remove a volume group

Make sure everything is unmounted, then:

vgremove myVG

Create a logical volume

lvcreate --size 200M --name myVol myVG

You can now use this logical volume like a normal partition

mkfs -t ext2 /dev/myVG/myVol
mount -t ext2 /dev/myVG/myVol /mnt/myMP

Reduce the size of a mounted logical volume and filesystem

lvreduce -r --size newSize /dev/myVG/myVol 

Extend the size of a mounted logical volume and filesystem

lvextend -r --size newSize /dev/myVG/myVol 

Move space from one mounted logical volume to another

lvreduce -r --size -someSize /dev/myVG/mySource
lvextend -r --size +someSize /dev/myVG/myDest 

Activate a volume group

vgchange -a y myVG

Deactivate (dismount) a volume group

vgchange -a n myVG

Activate all volume groups at boot time

vgchange -a y

Remove a logical volume

umount /mnt/myMP
lvchange --available n /dev/myVG/myVol
lvremove /dev/myVG/myVol

Remove a volume group

Make sure all the logical volumes are unmounted!

vgchange --available n /dev/myVG 
vgremove /dev/myVG

Snapshots

A snapshot lets you do a backup of the instantaneous state of a logical volume. You create a snapshot, back it up, and then delete the snapshot. Conceptually, the snapshot is a copy of the whole drive frozen in time.

How to do an rsync backup of "myVol" using a snapshot:

lvcreate --size 200M --snapshot --name snapVol /dev/myVG/myVol
mount -t ext2 /dev/myVG/snapVol /mnt/snap
rsync -a --delete /mnt/snap/ /mnt/backups/myVol
umount /mnt/snap
lvremove /dev/myVG/snapVol

The neat thing about this is that snapVol can be much smaller than myVol: the snapshot uses copy-on-write, so it only stores the original contents of blocks that change on myVol while the snapshot exists; reads of unchanged blocks are served from myVol itself. When you remove snapVol, the saved blocks are simply discarded. If the snapshot fills up it becomes invalid, so size it to cover the changes you expect during the backup.


pvscan                        # Display all physical volumes
lvscan                        # Display all logical volumes
pvdisplay /dev/hda4           # Display the state of a physical volume
vgdisplay /dev/myVG           # Display the state of a volume group
lvdisplay /dev/vg1/archVol    # Display the state of a logical volume

Leave out the parameter and the xxdisplay commands will show everything.

My server layout

pvcreate /dev/hdb
vgcreate vg1 /dev/hdb
lvcreate --size 30G --name backVol vg1
lvcreate --size 40G --name archVol vg1
lvcreate --size  4G --name tempVol vg1
mkfs -t ext2 -j /dev/vg1/backVol
mkfs -t ext2 -j /dev/vg1/archVol
mkfs -t ext2 /dev/vg1/tempVol

pvcreate /dev/hda4
vgcreate vg2 /dev/hda4
lvcreate --size 5G  --name homeVol vg2
lvcreate --size 9G  --name wwwVol vg2
lvcreate --size 1G  --name spoolVol vg2
lvcreate --size 3G  --name tempVol vg2
mkfs -t ext2 -j /dev/vg2/homeVol
mkfs -t ext2 -j /dev/vg2/wwwVol
mkfs -t ext2 -j /dev/vg2/spoolVol
mkfs -t ext2 /dev/vg2/tempVol

Logging

Syslog - The good old days

tail -f /var/log/messages   # Tail the log to stdout
vi /var/log/messages        # Inspect the log with an editor

logger This line is appended to the log.
logger -t MYTAG This line gets a MYTAG: prefix

Journald - The new way

Display the entire journal for all available boots

journalctl
Display the journal without the pager

journalctl --no-pager

Capture the most recent boot in a text file

journalctl -b --no-pager > /root/journal.txt

Tail the active log

journalctl -f

Display the journal since the most recent boot

journalctl -b

List all available boots

journalctl --list-boots

Display the journal for the nTh boot

journalctl -b n

(n=1 is the oldest)

Display the journal for the previous nTh boot

journalctl -b -n

(n=-1 is previous)

Show messages that have a specified tag

journalctl -t tagName

Show messages only from a specified systemd service

journalctl -u serviceName

Common tag names


There are many other options, but it's often easier to grep the output when looking for details. Example:

journalctl -b --no-pager | grep <whatever>

Write to syslog from shell

echo "This is message 1" | systemd-cat
echo "This is message 2" | systemd-cat -t mything

From the previous example, journalctl -f will show:

Feb 05 07:14:44 server unknown[2655]: This is message 1
Feb 05 07:14:55 server mything[1335]: This is message 2



Install the server and client rpms.

rpm -i mysql-server...
rpm -i mysql-...

Configure for autostart at boot time

chkconfig --del mysqld  # To clean up   
chkconfig --add mysqld  # Add to the runlevels

Start the service immediately

service mysqld start

Set the root password for the first time

mysqladmin password firstPassword

Change the root password after installation

Using mysqladmin:

mysqladmin --password=oldPassword password newPassword

The mysqladmin statements shown here assume you are logged in as root. Otherwise add the parameter: --user=root

Alternative method using the mysql client:

update mysql.user set password=password('newpassword') where user='root';
flush privileges; 

Key concept: mysql usernames and passwords have nothing to do with Linux usernames and passwords: You must explicitly authorize all mysql users. (See the GRANT command below.)

Login to the command line interface as a user

mysql --user=myName --password=xxxyyy

If you don't specify the database user name,
mysql will try to connect using your linux user name.

Specify a default username and password

If you don't specify a username or password on the mysql command line, the values (if present) will be taken from the configuration file.

Edit: /etc/my.cnf

Add (or edit) this section:

[client]
user=myName
password=myPassword

Show all existing databases

show databases ; 

If you are not logged in as the mysql administrator, you will only see the databases you have privileges to access.

Create a new database

It is the usual practice that only the mysql administrator creates new databases. From within mysql, this command line adds a new database:

create database databaseName ; 

A new database can also be created from the shell:

mysqladmin --password=password create databaseName

Delete a database

From inside mysql:

drop database databaseName ;

From the shell:

mysqladmin --password=password drop databaseName

1) You can't drop a database that some program is using.

2) On some versions of MySQL, deleting a database is more involved. When you try to drop a database, the "show databases" command will show that the database is still there. This occurs because some files are left in the top-level database directory. On Redhat/Fedora installations, the top-level database directories are located in /var/lib/mysql. After the first "drop database" fails, delete all the debris in the top-level database directory. A second "drop database" command will now succeed.

Add a user

Access privileges are assigned to a username/hostname combination. The syntax looks like an email address: "username@hostname".

Adding a user simply means allowing a username@hostname to perform certain operations on all or part of one or more databases.

The most typical case is to assign all privileges to some user who manages the database. If this username and hostname are new, this operation "adds" the new user:

grant all privileges
    on databaseName.*
    to username@localhost
    identified by 'aPassword' ;

The wild card * in the example above refers to all table names. (Even though the database may not have any tables yet.)

The "grant" command may be used multiple times to allow access from other hosts or to assign different privileges to different tables for the same user.

If a user must be able to grant access to other users, the grant command must be used again with a special option:

grant grant option on databaseName.* to username@localhost ;

A user can only grant privileges to others that they already have on the database.
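For example, the same user can get full rights on one table from localhost and read-only access from a second host; a sketch (the table and host names are invented for illustration):

```sql
-- Full rights on the pet table for connections from localhost
grant all privileges on databaseName.pet to username@localhost ;

-- Read-only access to the same table from one remote machine
grant select on databaseName.pet to username@remotebox.mydomain ;
```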

Remove a user

Removing a user means removing the privileges of the username@hostname from all or part of a database:

revoke all privileges on *.* from username@localhost

If you are sure that a username@hostname has been revoked on all databases, you can purge the user from the mysql database:

delete from mysql.user where user='username' and host='hostname' ;

flush privileges ;

Show all users allowed to access a database

select host,user from mysql.db where db="databaseName" ;

Show all users and the databases they can access

select host,user,db from mysql.db ;

Show all mysql users

select host,user,password from mysql.user ;

Change a password

set password for user@somehost.somewhere=password('newpassword') ;

Run a script to configure a database

mysql --password=xxxyyy dataBaseName < configFile.sql

Select a database to use

use dataBaseName ;

Show the tables defined in the database

show tables ;

Describe a table (Show the column names and types)

describe tableName ; 
show columns from tableName ;

Create a new table in the current database

create table pet
(   name VARCHAR(20),
    owner VARCHAR(20),
    species VARCHAR(20)
) ;

Common data types

char(n)
    Fixed-length character string.
    Size is specified in parenthesis.
    Unused positions are padded with spaces.

varchar(n)
    Variable-length character string.
    Max size is specified in parenthesis.
    Limit is 255 bytes. (1 byte size field)

text
    A large block of variable-sized text.
    Limit is 65535 bytes. (2 byte size field)

int
    4 byte signed integer value.

float
    4 byte floating point value

date
    Date value

time
    Time value 

Each column is defined by a name, data type and optional constraint.

Example constraints:

not null
primary key
default <default_value>
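Putting these together, a sketch of a table definition that uses each constraint (the column names are invented for illustration):

```sql
create table owner
(   id int not null primary key,
    name varchar(20) not null,
    city varchar(20) default 'unknown'
) ;
```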

Adding rows to a table from the command line

Note the use of NULL and quotes around string values.

insert into pet values
(   'Puffball',
    'Diane',
    NULL
) ;

Adding rows to a table from a text file

load data local infile "pet.txt" into table pet ;

Table text file format has tab delimited fields

Note the use of \N for null values.

Fido    Mary    \N

Inserting only selected column values

insert into pet (name, owner) values ('Goober', 'George') ;

Inserting selected columns from another table

insert into pet select (name, owner) from oldpet ;

Copy a row

insert into pet(owner, species) select owner, species from pet where name="Puffball" ; 

Note that we must leave out the 'name' column or we'll have a duplicate. To fix the name (which will be null) use:

update pet set name="Marvin" where name is null ;

Deleting a row

delete from pet where name = 'Puffball' ;

Delete all rows

delete from pet

Deleting a table and all the data

drop table tableName

Modify an existing row

update tableName set columnName1=value1, columnName2=value2,...
where optionalConditions ; 

update pet set species="alien" where name="Leo" ;

Modify rows using values and conditions from multiple tables

update table1, table2,...,tableN
set table1.column1=table2.column2,...
where optionalConditions ;

update new, old set =
where = and old.type="Email" ;

Modify a table

alter table tableName add newColumnName dataType
alter table tableName add newColumnName dataType first
alter table tableName add newColumnName dataType after otherColumnName
alter table tableName drop columnName  
alter table tableName modify columnName newDataType
alter table tableName modify columnName dataType first
alter table tableName modify columnName dataType after otherColumnName
alter table tableName change oldColumnName newColumnName dataType
alter table oldTableName rename newTableName 
alter table tableName alter columnName set default someValue

Change the column order

alter table tableName modify column columnName dataType first
alter table tableName modify column columnName dataType after otherColumnName

This is non-destructive, but you must supply the correct dataType for the column. There is no "before" option: use "after" with the preceding column, or "first".

Looking things up in the database

select <what to select> from <which table> where <conditions>

<what to select> a list of columns or * for all columns

select * from pet

Reload the whole table from a text file

set autocommit=1;  # Used for quick re-create of the table
delete from pet;
load data local infile "pet.txt" into table pet ;


select * from pet where name = "Bowser" ;
select * from pet where species = "dog" and owner = "smith" ;
select name, birth from pet;
select owner from pet ;
select name, owner from pet where species in ('dog', 'cat') ;
select distinct owner from pet ;
select name, birth from pet order by owner ;
select name, birth from pet order by birth desc ;
select name, species, birth from pet order by species, birth desc ;
select, pet.age, employee.salary, employee.title
    from pet, employee where = and = "Bugsy";

Enable remote access

The configuration file must be changed so the daemon mysqld will listen on a tcp interface rather than just the local socket. In /etc/my.cnf:

[mysqld]
bind-address=yourserver.yourdomain

In the example shown above, the dns name must resolve to an ip address other than

You'll need to restart the service:

service mysqld restart

Next the individual database users must be granted network access:

grant all privileges
    on databaseName.*
    to username@remoteHost
    identified by 'aPassword' ;

In this expression, remoteHost is the machine where the database connection and queries will originate.

Testing remote access

From the remote system (with mysql client installed) execute:

mysql -u remoteUser -h remoteHostIP -p

The -p will make it prompt for the remoteUser's password.

Backup a database

mysqldump --user=userName --password=aPassword  \
    dbName > backupFile.sql

Dump a database to an xml file

mysqldump --user=userName --password=aPassword --xml \
    dbName > backupFile.xml

Restore a backup

Create an empty database with the same name and privileges.

Next, from the mysql client:

use yourDatabase ;
source backupFile.sql ;

Or from the shell:

mysql --user=userName --password=aPassword --host=hostName \
    dbName < backupFile.sql

Reset the root password

Create a text file with two lines:

UPDATE mysql.user SET Password=PASSWORD('myNewPassword') WHERE User='root';

Save this as: mysql-reset.sql

Stop the sql server.

Restart the server from the command line using this form:

mysqld --init-file=mysql-reset.sql

The name of the server will vary. Examples:

Windows: mysqld.exe
Linux: mysqld_safe

Now restart the server in the usual way.

Weirdness with localhost

After performing a grant to someuser@localhost, you may find that an external application configured to access the database will not be able to connect.

Many Linux configurations will have an /etc/hosts file that contains: myname.mydomain myalias localhost.localdomain localhost

When DNS (named) is not configured and running, the /etc/hosts file is used for forward and reverse lookups. It appears that many programs do some sort of security checking before connecting to MySQL by looking up "localhost" and then doing a reverse lookup on the result. The reverse lookup on "" using the /etc/hosts file shown above will yield: "". This string gets used when connecting to MySQL, which fails because it doesn't match the string "localhost" in the SQL grant expression.

To fix this (only for machines without DNS), I suggest that
/etc/hosts contain: localhost localhost.localdomain myname myname.mydomain

In other words, make sure localhost is the first name. A better solution is to run DNS... (See the DNS section)

Dropping a messed-up database

After attempting to drop a database, you see an error of the form:

MySQL: Error dropping database (can't rmdir '...', errno: 41)

First, locate the data directory for your database using the command:

select @@datadir ;

Go to that location. Enter the directory for your problem database. Delete the files inside (not the parent directory!). Back at the MySQL command line, drop the database.



Install the server and client rpms

yum install postgresql
yum install postgresql-server

Initialize the database

service postgresql initdb

Configure for autostart at boot time

chkconfig postgresql on

Start the service

service postgresql start

Run the client

psql    # Run as the postgres user for admin access

Meta commands

\h      Help with SQL
\?      Help with psql
\q      Quit
\d      SHOW TABLES 
\d table    SHOW COLUMNS

SQL expressions to show databases, tables, and columns

SELECT datname FROM pg_database ;

SELECT table_name FROM information_schema.tables
    WHERE table_schema='public';

SELECT column_name FROM information_schema.columns
    WHERE table_name='yourTable'
    ORDER BY ordinal_position ;

The ORDER BY clause is optional and will display the columns in the order they were defined.

Log file


Tutorials and manual


Start/stop a network device (Old way)

ifup <interface>
ifdown <interface> 

These commands are scripts that automatically set up all the ip parameters and take care of special cases such as PPP, PPPoE, DHCP, firewalls and others.

In Redhat-like systems, extra implicit parameters go in:

/etc/sysconfig/network-scripts/ifcfg-<interface>

Show or configure interface parameters

ifconfig        # Show params for active interfaces
ifconfig -a     # Show params including inactive interfaces
ifconfig <interface>    # Show params for a specific interface

ifconfig <interface> \  # Set params and start the interface
    address <ipaddress> \
    netmask <mask> \
    broadcast <address> \
    metric <ametric>

The ifconfig command directly configures and starts the interface. It is up to you to take care of routing and other issues.

Show and modify routing tables

route -n                        # List routes numerically
route add default gw <gateway>  # Add a default route
route del default gw <gateway>  # Remove the default route

Export NFS files systems after editing /etc/exports

exportfs -r

Display TCP/IP traffic

Display available interfaces:

tcpdump -D

Show all traffic from all interfaces

tcpdump -i any

Show all traffic on a specific interface:

tcpdump -i eth0 

Show input and output associated with a specific host:

tcpdump host <host>

Only input from the host:

tcpdump src <host>

Only output to the host

tcpdump dst <host>

When using src or dst, you may also specify a port:

tcpdump src <hostIP> port 80

The first parameter to tcpdump can be the name of a protocol:

tcpdump <protocol> host <hostIP>

<protocol> may be: tcp, udp, arp, icmp

Network addresses

tcpdump -i eth0 dst net <network>

Special addresses

tcpdump -i eth0 broadcast
tcpdump -i eth0 multicast

If you run this from a remote session, you will want to ignore your own terminal traffic:

tcpdump -i eth0 not host <myAddress> 

Or: (to ignore ssh traffic)

tcpdump -i eth0 not port 22

Don't resolve ip names:

tcpdump -n ...

Port ranges:

tcpdump dst host <hostIP> dst portrange 1-80

Logical expressions:

tcpdump "icmp or udp" 
tcpdump "dst host <hostIP> and (dst port 80 or dst port 443)"
tcpdump "broadcast or multicast"

Capture a specific number of bytes from each packet (default is 68)

tcpdump -s 23 ...

Capture all of the packet (instead of 68 bytes)

tcpdump -s 0 ...

You can send output to a file:

tcpdump ... -w aFile.txt

You can play the file back into tcpdump:

tcpdump -r aFile.txt

Less verbosity:

tcpdump -q ...

More verbosity:

tcpdump -v
tcpdump -vv
tcpdump -vvv

The interface will expose more information if it operates in promiscuous mode:

ifconfig eth0 promisc

You will want to turn this off after debugging:

ifconfig eth0 -promisc

Display the hostname

hostname

Change the hostname

hostnamectl set-hostname <newHostname>

Configure a tftp directory path

Add the path as a parameter to the tftp daemon in inetd.conf

Run a command on another computer

ssh user@remoteMachine anyCommand

Any text output from the command will be displayed locally. You must have appropriate keys configured. See the SSH section for details.

Return the ip information about a host

host hostName
dig hostName
nslookup hostName <dnsServerName>
ping hostName
ping ipAddress

Show all connections

netstat -vat

Show only external internet connections

netstat -n --inet

Show numerical ports, tcp connections, associated processes

netstat -ntp

Show which processes on localhost are listening for connections

netstat -tupl

Show which ports on any host are listening for connections

nmap -sT hostName

Show listening ports and protocols

Example - List sshd ports:

ss -anp | grep LISTEN | grep sshd

Test availability and permissions to bind a port

Bind the port with a process:

nc -l myPort &

Check to see if it worked:

netstat -nlp | grep myPort

Kill the process when finished:

killall nc

Obtain and install network configuration from a DHCP server

dhclient -nw

Show or configure a wireless interface

iwconfig           # Show params for active interfaces
iwconfig eth0 essid GOOB   # Set the network name to GOOB  
iwconfig eth0 key 43224598a34bc2d457e2  # Specify a hex WEP key 
iwconfig eth0 key s:ThisIsAnAsciiPassphrase

Show or modify ethernet connection settings

Show all settings:

ethtool eth0

Show the speed:

ethtool eth0 | grep Speed 

Show duplex setting:

ethtool eth0 | grep Duplex

Modify duplex setting:

ethtool -s eth0 duplex half
ethtool -s eth0 duplex full 

Change several settings at once:

ethtool -s eth0 speed 100 autoneg off

Network Manager

Show general status

nmcli general status

List all configured connections

nmcli connection show

List all active connections

nmcli connection show --active

Temporarily start a connection

nmcli connection up <A CONNECTION>

Temporarily stop a connection

nmcli connection down <A CONNECTION>

Get wifi status

nmcli radio wifi

Turn wifi on

nmcli radio wifi on

Turn wifi off

nmcli radio wifi off

List available wifi access points

nmcli device wifi list

Rescan for wifi access points

nmcli device wifi rescan

Connect to an access point

nmcli device wifi connect <SSID NAME> password <YOUR PASSWORD>

Show all network devices

nmcli device status

Disconnect a device

nmcli device disconnect <A DEVICE>

Connect a device

nmcli device connect <A DEVICE>

Disconnect a device

nmcli device disconnect <dev>

Configure a static ip address

nmcli connection modify <A CONNECTION> \
    ipv4.method static \
    ipv4.addresses <address>/<prefixLength> \
    ipv4.gateway <gateway> \
    ipv4.dns <dnsServer>

Configure a dynamic (dhcp) address

nmcli connection modify <A CONNECTION> \
    ipv4.method auto

Specify automatic connection activation

nmcli connection modify <A CONNECTION> connection.autoconnect yes

Specify manual connection activation

nmcli connection modify <A CONNECTION> connection.autoconnect no

Rename a connection

nmcli connection modify <A CONNECTION> con-name <A NEW NAME>

"con-name" is equivalent to ""

Create a new connection from scratch

nmcli connection add type ethernet \
    ifname <A DEVICE> \
    con-name <A CONNECTION>

Create a new connection interactively

nmcli --ask connection add

Create a vpn connection

    nmcli con add type vpn ifname murgurk \
            vpn-type org.freedesktop.NetworkManager.l2tp \
            vpn.secrets "password=myPassword" \
  "gateway=<vpnServer>, \
                    ipsec-enabled=yes, \
                    user=hugh, \
                    mru=1400, \
                    mtu=1400"

This example creates an L2TP/IPSEC vpn, but many other types are supported.

- A NetworkManager plugin is required for each type.
- The setting will have different key=value sets for each vpn-type.

Example: For PPTP, install the plugin:

NetworkManager-pptp

And the vpn-type will be:

Rename an ethernet interface (device)

The fashionable trend in linux is giving ethernet devices ever-more bizarre and unmemorable names. There are no doubt very good reasons. To rename a device:

nmcli con mod <A CONNECTION> connection.interface-name <A NEW DEVICE NAME>

  1. You must reboot.
  2. The process will FAIL if nmcli doesn't know the MAC address.

This is a rare problem, but it's possible for nmcli to "lose" the MAC address associated with a device. Make sure nmcli knows the MAC address before attempting to rename a device:

nmcli con show <A CONNECTION> | grep 802-3-ethernet.mac-address

If you don't see a MAC address, get it using lower-level tools:

ip link show <CURRENT DEVICE NAME>

Associate the MAC address with the nmcli device:

nmcli con mod <A CONNECTION> 802-3-ethernet.mac-address "xx:yy:zz..."

Now you can proceed with renaming the device. (And reboot.)


Fedora Networking CLI


Create an rsa key set

openssl genrsa -des3 -out server.key 1024 

Create an open version of the key

openssl rsa -in server.key -out

This is the key file required by apache and sendmail.

Create a certificate signing request

This is essentially your certificate in the unsigned form.

openssl req -new -key server.key -config openssl.conf -out server.csr

You get pestered for a description of your business. The important thing is the "Common Name": That is the domain name you want certified.

Common name example: 

Sign the certificate

This step uses your key to sign the certificate: (An alternative is to pay to have an agency sign it. See below.)

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

You can install this in apache, sendmail and other applications. The clients will complain that the certificate cannot be verified because there is no chain of trust. Most applications will let you add the certificate to the local database anyway. Then you won't be troubled.

View your certificate

openssl x509 -in server.crt -noout -text 

Create a certificate authority (CA) to sign your certificate

This is an alternative way to sign your server.csr. It introduces the concept of a certificate authority.

A "real" purchased certificate will be signed by an authority already known to the client operating system such as Thawte, Verisign, or GoDaddy.

Creating your own certificate authority won't save you from having the client applications nag the user about an untrusted certificate. But the client can choose to install the new certificate authority as "trusted." This has the small advantage that all other certificates you sign with the same CA will be accepted by that client without complaints.

Create an rsa key set for the new certificate authority

openssl genrsa -des3 -out server-ca.key 4096

The common name you use CANNOT be the same as the name used in the certificates you want to sign.


Create a CA certificate

openssl req -new -x509 -days 365 -key server-ca.key -out server-ca.crt 

Sign your server.csr using server-ca.crt

openssl x509 -req -days 365 -in server.csr -CA server-ca.crt -CAkey server-ca.key -set_serial 01 -out server.crt

This replaces your old self-signed server.crt

Use a real certificate authority

Proceed as above to create a certificate signing request. Then pay an agency to sign it and send it back. I've used GoDaddy, which is easy and inexpensive. You get back two files:

The first is your certificate, signed with your private key and signed again by godaddy's private key. Someone who wants to deal with you can decrypt the certificate using godaddy's public key followed by your public key. This proves that you signed it and that godaddy (hopefully) took pains to verify your identity.

The gd_bundle.crt file contains the certificate authority path for godaddy.

Test a commercial certificate for validity

openssl verify -CAfile gd_bundle.crt yourCertificate.crt

Test a certificate with the openssl server

Run the openssl server:

openssl s_server -cert server.crt -key server.key -www

If the server starts quietly, all is probably well. Visit the server on your local LAN with the URL:

https://yourserver:4433/

In the url, "yourserver" should be the name the cert certifies. You should see a page full of information about openssl and your certificate.

Test an ssl server from the client side

Run the server side program. (whatever...) On the client side:

openssl s_client -connect yourserver:443 -crlf

Now you can type plain text commands and see the responses.


Print the partition table

sgdisk -p /dev/sda

List partition type codes

sgdisk --list-types

Create partition 1

sgdisk -n 1:0:+200M -t 1:EF00 -c 1:EFI /dev/sda

-n a:b:c

    a:  Partition number
    b:  Start location (0 => next available)
    c:  End location (0 => end of disk, +size => size relative to start)

-t a:c
    a:  Partition number
    c:  Partition type code

-c a:n
    a:  Partition number
    n:  Name of the partition (label)

Delete partition 1

sgdisk -d 1 /dev/sda

Destroy the partition table

sgdisk -Z /dev/sda

Backup a partition table layout

sfdisk -d /dev/sda > myLayout.dat

Restore a partition table layout

sfdisk -f /dev/sda < myLayout.dat

Replicate the partition table of sda onto sdb

sfdisk -d /dev/sda | sfdisk -f /dev/sdb


Create a patch file that transforms files

oldFile     # Path to the unmodified file
newFile     # Path to the modified file

diff -u oldFile newFile > patchFile 

-u  Use unified output format

Create a patch file that transforms directories

oldPath     # Path to the unmodified files
newPath     # Path to the modified files

diff -urN oldPath newPath > patchFile

-u  Use unified format
-r  Perform diff recursively
-N  Support creating new files

Apply a patch file

For -p0, you want to be in the same place you made the diff.

patch -u -s -pN < patchFile

Or, naming the patch file explicitly:

patch -u -s -pN -i path/patchFile

-u  Use unified format
-s  Silent
-pN Remove first N components of file path names
-d x    Switch to the directory named by x 

For individual file patches, -p0 is used if you're in the same directory as the unpatched file.

For directory patches, -p1 is used to apply the patch from inside the as-yet unpatched directory.
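A complete round trip of the directory case, assuming diff and patch are installed (all paths here are scratch names):

```shell
# Build two small trees that differ in one file
mkdir -p work/old work/new
echo "hello" > work/old/greet.txt
echo "goodbye" > work/new/greet.txt

# Make the patch from the parent of both trees (diff exits 1 on differences)
(cd work && diff -urN old new > ../greet.patch ; true)

# Apply it inside a copy of the unpatched tree, stripping 1 path component
cp -r work/old work/copy
(cd work/copy && patch -u -s -p1 < ../../greet.patch)

cat work/copy/greet.txt    # prints: goodbye
```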


Install CPAN

Perl has its own "module" (package) manager called CPAN. It is only necessary to use CPAN if there is no perl-XXXX rpm available, so first try using

yum list perl-XXXX

To install CPAN:

yum install perl-CPAN

I found that perl wanted this as well:

yum install perl-YAML

Install modules using CPAN

One-line command:

perl -MCPAN -e 'install XXXX::YYYY'

If you have several modules to install or want confirmations you can enter the CPAN shell:

perl -MCPAN -e shell

Ask for confirmations:

o conf prerequisites_policy ask

Install the module(s)

install XXXX::YYYY 
install PPPP::QQQQ

List installed modules

perldoc perllocal

Building and installing a package by hand

You need to do this if you have downloaded and unpacked a package by hand. Navigate into the directory and execute:

perl Makefile.PL
make
make test
make install


Print a file on the default printer

lpr myfile

Print a file on a selected printer

lpr -P printer myfile

Print a file with options

lpr myfile -o page-left=60

Units are points = 1/72"
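Since page measurements are usually given in inches, multiply by 72 to get points; a small sketch (the 0.75 inch value is just an example):

```shell
# Convert a margin in inches to printer points (1 point = 1/72 inch)
inches=0.75
points=$(awk -v n="$inches" 'BEGIN { print n * 72 }')
echo "$points"    # prints: 54

# ...which could then be used as, e.g.: lpr myfile -o page-left=54
```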

Show a list of available printers

lpstat -p

Show the default printer

lpstat -d

Set the default printer for the current user

lpoptions -d LaserJet

Set the default printer for everyone

lpadmin -d LaserJet

Show what's on the print queue

lpq

Remove a job from the print queue

lprm nn

Remove all jobs queued by the user

lprm -

Control the printers (has help for commands)

lpc

Web interface for CUPS

http://localhost:631/

Configure a remote Windows printer

Determine the remote printer name:

smbclient -L hostname -U%

If a username and password are required by the host, use:

smbclient -L hostname -U username%password

(In this case, the printer was called "Deskjet")

1) Device: Windows Printer via Samba
2) URI: smb://username:password@sparksvaio/Deskjet
3) Driver: HP New Deskjet Series Cups v1.1 (en)

Configure a local printer-port printer

1) Device: Parallel Port #1 (Hewlett-Packard HP LaserJet 4000 Series) 
2) Driver: HP LaserJet Series CUPS v1.1 (en)

CUPS directory for manufacturer's ppd files


CUPS ppd files added by me


These came from the sourceforge project sponsored by HP. The hp970Cse.ppd requires foomatic, which requires a TON of perl stuff. If you don't want all this, the cups built-in "New Deskjet" works fine.

Fixing the Samba rec_read bad magic 0x0 error

This is caused by a bug that has been in Samba for many years. It is evidently nearly impossible to fix in the Samba code. Fortunately, there is an easy work-around: stop the samba service, delete all the .tdb files in the printer cache, and restart:

service smb stop
rm -f /var/cache/samba/printer/*.tdb
service smb start

Configure printers on a Linksys print server

1) Select LPD/LPR Protocol.
2) Device URIs for each port:


3) Select the drivers

HP New Deskjet Series Cups v1.1 (en)
HP LaserJet 4000 Series  PS (en) 

Dealing with Vista/Windows 7 connection errors

Newer versions of Windows refuse to connect to some-but-not-all Linux/CUPS printers. The error message includes the code: 0x000006d1. The fix is not obvious: Using the Add Printer dialog:

Add a local printer.
Select "Create a new port".
Select "Local port".
For the port name, enter the Samba path, e.g.:


Select the right driver in the usual way.


Show the current process list

ps ax

Kill a process by name

killall name

Kill a process by id number

kill pid

Kill a process that is being difficult

kill -s 9 pid

Run a command in the background

command &

Put an active command into the background

First break with control Z, then

bg

List all the jobs you have running

jobs

Bring a job back to the foreground

fg %n

Stop a background job

kill %n

Suspend a background job

kill -STOP %n

Fix a terminal that has fonts garbled by a binary dump

Type control-V control-O, or run the reset command.

Start a process detached from session

nohup command > /dev/null 2>&1 &


Compile and link a program

cc file1.c file2.c file3.c -o program

Compile for subsequent linking

cc -c file.c -o file.o
cc file1.o file2.o file3.o -o result

Show the libraries used by a program

ldd <program>

List all the symbols defined by an object file

nm <objfile>

Ask dynamic linker to scan for new libraries

ldconfig

Create a dynamically linkable library

An ".so" library can be used with dlopen, dlclose, dlsym to link with a library while a program is running.

Example library mylib.c:

int myFunction(int a, int b)
{   return a + b ;
}

Create the dynamic library:

cc -fPIC -c myLib.c -o myLib.o
cc -shared myLib.o -o

Client program demo.c:

#include <dlfcn.h>
#include <stdio.h>

int main()
{   int p1 = 1 ;
    int p2 = 2 ;

    void *myLib = dlopen("./", RTLD_LAZY) ;
    int (*myFunc)(int, int) = (int(*)(int, int))dlsym(myLib, "myFunction") ;
    int result = (*myFunc)(p1, p2) ;
    printf("Result: %d\n", result) ;
    dlclose(myLib) ;
    return 0 ;
}

Compile and run the demo:

cc demo.c -ldl -o demo
./demo


Linux software RAID levels

linear: Combines several disks into one big disk ("JBOD": Just a Bunch Of Disks).
raid0:  Striping - blocks for a file are spread out across all the disks. Used for speed. No safety.
raid1:  Mirroring - two (or more) drives operate in parallel with duplicate data. Used for safety. No extra speed.
raid4:  Block-level striping with a dedicated parity disk. Uses less space than mirroring. Can recover from one drive failure.
raid5:  Block-level striping with distributed parity. Like raid4, but parity is distributed across all drives. Can recover from one drive failure. Faster than raid4.
raid6:  Block-level striping with double distributed parity. Can recover from two failed drives.
raid10: A stripe (RAID0) of mirrors (RAID1). Used to balance safety and speed.
raid01: A mirror (RAID1) of stripes (RAID0). Used to balance safety and speed.


Create a RAID 1 array

In this example, we create a RAID 1 group using two drives. Each drive needs to have a partition of the same size and type.

The first step is to use fdisk to create the partitions and set their partition types. For reasons beyond the scope of this tutorial, it's best to use partition type 0xDA "Non-FS data", rather than the old standard, 0xFD "Linux RAID autodetect".

For this example we will assemble two partitions:

/dev/sda1
/dev/sdb1

You can verify the partition types using:

fdisk -l /dev/sda
fdisk -l /dev/sdb 

If you need to change the partition types, run fdisk without the "-l" and you'll get a menu with obvious options.

Create the RAID device

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1

This command creates a mirror md0 by combining the two partitions.

Create a file system on the device

mkfs -t ext4 /dev/md0 

The /dev/md0 device behaves like any other drive.

Assign a filesystem label

e2label /dev/md0 mrbig

We will use this label when mounting the array in /etc/fstab, rather than the device name for reasons explained below.

Mounting the file system

To mount from the command line:

mount -t ext4 /dev/md0 /mnt/myarray

Or you could use the label:

mount -t ext4 -L mrbig /mnt/myarray

To mount in /etc/fstab:

LABEL=mrbig    /mnt/myarray   ext4  defaults 0 2

It's better to use a LABEL because the raid array may not be assembled when the startup scripts first attempt to process fstab. Most Linux distributions have startup scripts that defer mounting labeled devices until the device is operational, since the label can't be read before then.

Check the status of all RAID arrays

cat /proc/mdstat

Check the status a specific RAID array

mdadm --detail /dev/md0

Check the status of a RAID component (drive or partition)

mdadm --examine /dev/some_part

RAID at boot time

On most Linux systems, one of the startup scripts will run the command:

mdadm --assemble --scan

Or more concisely:

mdadm -As

The mdadm program will search for disk drives and partitions that are parts of raid arrays and assemble them automatically. They are recognized by a special raid superblock which is independent of the regular filesystem-dependent superblock.

The mdadm program will also look at a configuration file when assembling arrays:


It is possible to specify everything in this file, rather than relying on automatic assembly. To create this file, first assemble the RAID array by hand as shown above. Then run the command:

mdadm --examine --scan > /etc/mdadm.conf

Here's an example mdadm.conf created with this command:

ARRAY /dev/md0 UUID=915ee2a0:945b381d:30f19119:18fab9e7

The UUID is unique for this array. It functions at the device level much like an e2fs label does at the filesystem level: It allows the array to be identified regardless of the device names assigned to the components.

You can see more details using:

mdadm --detail --scan --verbose


ARRAY   /dev/md0  \
    level=raid1 \
    num-devices=2 \
    metadata=1.2 \
    UUID=915ee2a0:945b381d:30f19119:18fab9e7

It's not a good idea to use this format for your mdadm.conf because device names can change when drives are added or removed. It is, however, useful to keep a copy of this information somewhere to help with recovery if the raid array gets broken:

You can assemble a specific array using a command of the form:

mdadm --assemble --scan /dev/md0 --uuid=915ee2a0:945b381d:30f19119:18fab9e7 

Obviously, you have to know the UUID.

Modern Linux distributions usually scan and assemble raid arrays in their normal startup scripts. It isn't necessary to use /etc/mdadm.conf at all unless you have some reason to override this mechanism or provide additional information. Everything Linux needs is in the raid superblock written on each device or partition.

Configure email notifications

Add this line to /etc/mdadm.conf:


If anything goes wrong, you'll get a message. I use an /etc/mdadm.conf file with just this one line.
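That one line is mdadm.conf's MAILADDR directive; the address shown here is a placeholder, so substitute your own:

```
# Send mdadm monitor alerts here (placeholder address)
MAILADDR sysadmin@example.com
```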


I like to use both. I put LVM on top of RAID: RAID provides physical safety and LVM provides the flexibility of virtual partitions.

See the Logical Volumes section for details.


Start a scrub:

echo check > /sys/block/md0/md/sync_action

Check scrub progress:

cat /proc/mdstat

Stop a scrub early:

echo idle > /sys/block/md0/md/sync_action

Check the mismatch count:

cat /sys/block/md0/md/mismatch_cnt

Repairing problems

When you get a report about a mismatch count, for example on device md1:

Show the mismatch count:

cat /sys/block/md1/md/mismatch_cnt

Start a repair:

echo repair >/sys/block/md1/md/sync_action

Check progress:

cat /proc/mdstat

You can watch continuously:

watch cat /proc/mdstat

When finished, start a scrub:

echo check > /sys/block/md1/md/sync_action

Check progress:

cat /proc/mdstat

When finished, verify:

cat /sys/block/md1/md/mismatch_cnt

(Should be zero now...)

Removing a defective disk drive

If a disk has problems, first mark it as "failed":

mdadm --manage /dev/md0 --fail /dev/sdb1

Then you can remove it from the array:

mdadm --manage /dev/md0 --remove /dev/sdb1

Bring a replacement disk back online

Just add the new disk:

mdadm --manage /dev/md0 --add /dev/sdb1

The disk will begin synchronizing immediately.

Adding a new disk drive for more space

You're supposed to be able to do this with everything online and mounted.

For this example, we'll assume the new drive name is /dev/sde1 and that we have a linear (JBOD) RAID array at /dev/md0.

Using fdisk, create one partition that fills the new drive. Set the partition type to 0xDA "Non-FS data", as recommended earlier.

Check the status of the array to make sure it's happy:

cat /proc/mdstat    

Check the filesystem on the array to make sure it's happy:

e2fsck -f /dev/md0

Add the new drive:

mdadm --grow --add /dev/md0 /dev/sde1

Expand the filesystem to fill the new space:

resize2fs /dev/md0

If you didn't do it live, remount md0.

Getting rid of a raid array

Because linux can identify and assemble raid arrays without an mdadm.conf file, you can get into trouble if you add a random old disk drive to your system that happens to have a raid superblock. Especially if it was a replacement part of an array that has the same UUID. For this reason, it's a good idea to erase the raid superblocks when an array is no longer needed or if you want to redeploy the drives in some other configuration.

First unmount all filesystems on the array and remove them from /etc/fstab and/or /etc/auto.mount.

Next, stop the array:

mdadm --stop /dev/md0

Scan to display the drives or partitions used in the array:

mdadm --detail --scan --verbose

Now you can remove the array:

mdadm --remove /dev/md0

To re-use the drives for some other purpose, you need to remove the raid id information. This is done by zeroing the raid superblocks on the drives. For example, if the scan shows these partitions as part of your raid array:

/dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2

Use this command:

mdadm --zero-superblock /dev/sd[bcde]2 

Now all the drives are free to use however you like.

Getting rid of a dmraid array

The dmraid format is/was used by an older method of doing software RAID. At boot time, most Linux distributions will recognize dmraid superblocks (not the same as mdadm RAID) and try to build a raid array. To reuse these drives, you must first erase the dmraid information:

dmraid -E -r /dev/your_drive

Note that "your_drive" should be the name of the whole device, not a partition.

The dmraid utility supports multiple raid formats. In rare cases, you may need to specify which format you want to remove:

dmraid -E -r -f pdc /dev/your_drive


dmraid -E -r -f nvidia /dev/your_drive

More concepts...

There are commands to stop, start, replace, grow, add, and delete drives. All while the raid array is running. To learn more:

man mdadm

Regular expressions


^       Beginning of the line 
$       End of the line 
\<      Left word boundary
\>      Right word boundary


.       Any single character except eol
x*      Zero or more x's (maximal)
x+      One or more x's (maximal)
x?      Zero or one x's (maximal)
x*?     Zero or more (minimal)
x+?     One or more (minimal)
x??     Zero or one (minimal)

Character classes

[abcdef]    Any of the enclosed characters
[a-z]       Any in the range of characters
[^a-e]      Any char except a-e
[^abcdef]   Not any of the characters


(expression)    Grouping an expression
\c      Escape a meta character c like *+. etc.
exp1|exp2   Matches expression1 or expression2. 
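The boundary and alternation entries above can be tried out with grep; the file /tmp/re_demo.txt and its contents are invented for this demo:

```shell
# Scratch file for the demo
printf 'redcat\ncat\ndog\n' > /tmp/re_demo.txt

# \< and \> match only at word edges: finds "cat" but not "redcat"
grep '\<cat\>' /tmp/re_demo.txt

# Alternation needs -E (extended regexps) in grep
grep -E 'cat|dog' /tmp/re_demo.txt
```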


Router model

The configuration examples shown in this section are for the 3Com OfficeConnect Remote 812 ADSL Router. It's one of the more capable consumer-grade products although now obsolete. Configuring other desktop routers involves very similar concepts.

Router URL

Most routers are configured using a built-in webserver. After adding the static IP of the router to your DNS, you can visit it from any client on your LAN. Example:

Global Settings

UNCHECK: Enable Bridging
CHECK: Enable IP Routing

Local LAN configuration

IP Address & DHCP:  
    Rip:  None
    Use this network as DHCP: No

DNS: Disable
IP Static Routes: None
IPX: All off

Filter Configuration

No filters

Remote site profile

This is the main setup for the ADSL connection. I have one remote site profile called "Citizens".

Remote Site Name:

CHECK: Enable Remote Site

Network Service:

PPP over ATM (PPPoA)
User Name:
Password: yyyyy 

VC Parameters:

VPI: 0 
VCI: 35

Quality of Service:

Unspecified Bit Rate

UNCHECK: Enable Bridging
CHECK: Enable IP Routing
UNCHECK: Enable IPX Routing

Address Translation:

Default Workstation:
Static Ports: (See below)

Routing Information:

CHECK: Use this connection as default gateway
RIP: None

Static IP Routes:



CHECK: Verify packets can be routed back
CHECK: Enable protect files and printers

IPX Stuff:

Turn all this off. 

Port forwarding setup

TCP Ports:

21  ftp
22  ssh
25  smtp
80  http
443 https
465 smtps
993 imaps
1723    pptp

UDP Ports:

53  domain


Install or remove a package

rpm -i package.rpm  # Install a package
rpm -U package.rpm  # Update an installed package
rpm -F package.rpm  # Freshen (Update only if installed)
rpm -e packageName  # Remove a package

Query the rpm database

rpm -qi packageName # Describe an installed package              
rpm -qa         # List all installed packages
rpm -qf afile       # See which package installed a file
rpm -qR package     # Find out what a package needs
rpm -qa --last          # List by installation time

List the contents of an rpm file

rpm -qlp package.rpm

List packages by the source Linux distribution

rpm -qai | grep Dist | awk -F': ' '{print $3}' | sort | uniq -c

Build a binary rpm using a source rpm

rpmbuild --rebuild your.src.rpm
The result is in /usr/src/redhat/RPMS/i386

Build a new source rpm from an installed source rpm

rpm -i xxxx.src.rpm

You can now tamper with the tgz in /usr/src/redhat/SOURCES

rpmbuild -bs /usr/src/redhat/SPECS/xxxx.spec

The result is in /usr/src/redhat/SRPMS

Create a binary rpm from a tar.gz that contains a .spec

rpmbuild -tb yourpackage.tar.gz

Install rpm on an empty linux partition mounted on 'mp'

rpm --root mp --initdb

Create a cpio archive from an rpm and write to an archive

rpm2cpio rpmFile > archive.cpio

Expand a cpio archive

cpio -mid < archive.cpio

Unpack an rpm in one step

rpm2cpio rpmFile | cpio -mid

Use query formats

The whole format is one "string". Each tag specification looks like this: %{NAME}. You usually want a newline at the end:

rpm -q xmms --qf "%{SIZE}\n"

Between the "%" and the opening brace "{" you can specify field sizes, or any other C printf formatting chars. Positive integers select right alignment in the field. Negative integers select left alignment in the field:

rpm -qa --qf "%-30{NAME} %10{SIZE}\n"

Some header tags select arrays of values. Use square brackets to iterate over the set. You can specify more than one array tag inside the query:

rpm -q xmms --qf "[%-50{FILENAMES} %10{FILESIZES}\n]"

Normally, all tags inside square brackets must be array tags. If you want to print a fixed tag as a label on each line, add an "=" char to the fixed-tag name:

rpm -q xmms --qf "[%{=NAME} %{FILENAMES}\n]"

Display a list of all rpms sorted by size:

rpm -qa --qf "%-50{NAME} %10{SIZE}\n" | sort -nk 2,2

Display a list of all "devel" packages sorted by size:

rpm -qa | grep devel | \
xargs rpm -q --qf "%-50{NAME} %10{SIZE}\n" | \
sort -nk 2,2 

List all the available header tags for query formats

rpm --querytags

Show the value of a header element

rpm -q packageName --qf "%{SIZE}\n"

List the sizes of selected packages

rpm -qa | grep devel | xargs rpm -q --qf "%{NAME} %{SIZE}\n"

Fix a hoarked rpm database

Symptom: All rpm commands "hang up"

Find and kill all processes running rpm or up2date:

ps ax | grep rpm
ps ax | grep up2date

(Kill them by hand)

Remove all rpm database lock files:

rm -f /var/lib/rpm/__db*

This usually gets things going. If not:

First make a backup of the database:

cp -a /var/lib/rpm /var/lib/rpm.copy

Then rebuild the database

rpm --rebuilddb

This takes some time, but if it hangs forever, repeat the "Find and kill rpm" step and proceed with:

cd /var/lib/rpm
db_verify Packages

(You may need to install db4-utils)

If db_verify reports errors, try:

cp Packages Packages.backup
db_dump Packages.backup | db_load Packages
rpm --rebuilddb

If all these steps fail, you are in big do-do.

Fix signature verification errors

Recent versions of Redhat require signature verification when processing packages. If you haven't imported the Redhat GPG signature, you will get errors of the form:

warning: ... V3 DSA signature: NOKEY, key ID ...

To fix this, first obtain a copy of the file RPM-GPG-KEY. If you are creating your own rpm-based distribution, the file is widely available on the web.

On a Redhat system, it can be found using:

find /usr -name RPM-GPG-KEY

When you have the file, execute the following expression:

rpm --import RPM-GPG-KEY    

Use RPM to verify all packages

rpm -Va

The code letters:

S   file Size differs
M   Mode differs (includes permissions and file type)
5   MD5 sum differs
D   Device major/minor number mismatch
L   readLink(2) path mismatch
U   User ownership differs
G   Group ownership differs
T   mTime differs
c   A configuration file

A streamlined report that ignores date-only changes:

rpm -Va | grep -v  ".......T"

To make this a cron job that mails the result:

rpm -Va | grep -v ".......T" | mail myself@mydomain

To skim off acceptable changes

rpm -Va | grep -v ".......T" | grep -vf rpmChanges | \
    mail myself@mydomain

Append any new acceptable changes to the rpmChanges file.


List samba shares

smbclient -L -U%

List samba shares using alternative credentials

smbclient -L -U myUserName%myPassword

Add a samba user

smbpasswd -a NewUserName

Delete a samba user

smbpasswd -x NewUserName

List samba users

pdbedit -L

Minimal smb.conf

[global]
    workgroup = WORKGROUP
    netbios name = SERVER
    security = user
    map to guest = bad user

Add shared home directories to smb.conf

[homes]
    comment = Home Directories
    valid users = %S, %D%w%S
    browseable = Yes
    read only = No
    inherit acls = Yes

Add printing to smb.conf

    printing = cups
    printcap name = cups
    load printers = yes
    cups options = raw

[printers]
    comment = All Printers
    path = /var/tmp
    printable = Yes
    create mask = 0600
    browseable = No

[print$]
    comment = Printer Drivers
    path = /var/lib/samba/drivers
    write list = root
    create mask = 0664
    directory mask = 0775

Add a shared directory to smb.conf

[something]
    path = /mnt/something
    browseable = yes
    writable = yes

Enable usershares in smb.conf

    usershare path = /var/lib/samba/usershares
    usershare max shares = 100
    usershare allow guests = yes
    usershare owner only = no

Create a usershare

net usershare add myShareName /some/path

Remote path to a usershare


Delete a usershare

net usershare delete myShareName

List usershares

net usershare list

List samba usershare details

net usershare info

Check samba configuration files

testparm
Obtain a list of default parameter values

cd /etc/samba
mv smb.conf smb.conf.backup
testparm -sv > smb.conf.defaults
mv smb.conf.backup smb.conf


Find the scsi device that controls your scanner


For this example, we will assume that /dev/sg0 is the result.

Make a new user & group for the scanner

useradd saned

Give this group access to the scanner device

chown root:saned /dev/sg0
chmod g+rw /dev/sg0

Add an entry to /etc/services

sane-port     6566/tcp saned   # SANE network scanner daemon

Add an entry to /etc/xinetd.d

service sane-port
{
    socket_type = stream
    server = /usr/sbin/saned
    protocol = tcp
    user = saned
    group = saned
    wait = no
    disable = no
}

You will need to verify the location of the saned program on your system. Use "which saned" and modify the xinetd.d entry shown above appropriately.

Specify allowed hosts



Append your allowed hosts (names, ip numbers, or subnets) Example for a local subnet:
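As a sketch, assuming your LAN is 192.168.1.0/24 (a made-up subnet), the appended line would be:

```
# allow every host on the local subnet (placeholder subnet)
192.168.1.0/24
```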

Eliminate unused backends

This is not strictly necessary, but it may prevent some module loading errors. Edit:

/etc/sane.d/dll.conf

Remove everything but the entry for your scanner type and "net." The "v4l" entry, for example, causes the char-major-81 error.

UPDATE: None of this section applies to Fedora core II.

Tell xinetd to reload the configuration files

service xinetd restart


Most of the search commands use patterns that are described in the section Regular Expressions.

Find a pattern in a file

grep pattern file

Make the search case-insensitive

grep -i pattern file

Search recursively through multiple directories and files

grep -r pattern startDirectory

Confine a search to files with a specified extension

grep -ri --include="*.txt" pattern startDirectory

Find files whose names match a pattern

find startDirectory -name pattern

If startDirectory isn't specified, find starts in the current directory. By default, the search looks recursively down into all subdirectories.

The pattern usually contains one or more * wildcards.

You can pipeline the output of find directly to any command that accepts a stream of filenames. For example:

    find path | grep <pattern> 

This is equivalent to "grep -r pattern path"

Find files and apply a command to each file

find startDirectory -name "pattern" -exec command arguments \;

The arguments are passed to the command. You can insert the matched filename anywere among the arguments using {}. For example, to make an extended directory listing of files with the extension "mp3":

find -name "*.mp3" -exec ls -l {} \;

You can use -ok instead of -exec to get a prompt to confirm the operation for each file.

You can stream filenames to xargs as an alternative to -exec:

find startDirectory -name pattern | xargs <command> 

When using xargs, the file name always appears after the command, so this form is less general than using -exec.

Example: Change the permissions on all mp3 files at or below the current directory:

find -name "*.mp3" | xargs chmod 644

The example above will fail if the file names contain spaces. (Which music files often do.) To deal with that, we pipeline the file names through a sed expression that puts quotation marks around each name:

find -name "*.mp3" | sed 's/^.*$/"&"/' | xargs chmod 644
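If your find and xargs support it (the GNU versions do), null-delimited names are an alternative to the sed quoting trick; /tmp/mp3demo is a made-up scratch directory for this sketch:

```shell
# Scratch setup: an mp3 file with a space in its name (demo only)
mkdir -p /tmp/mp3demo
touch "/tmp/mp3demo/two words.mp3"

# -print0 emits NUL-terminated names and xargs -0 reads
# them, so spaces in file names survive the pipeline
find /tmp/mp3demo -name "*.mp3" -print0 | xargs -0 chmod 644
```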

Finding specific file types

The find command has a "-type t" option where t is one of:

d   Directory   
f   Regular file
... Many others

Find a file using the locate database

Unlike "find", the locate command depends on a database updated by a system service that runs periodically. On most Linux systems, this is done daily. Consequently, locate only finds files created on previous days. Because it uses a database, locate is much faster than find, especially when you have no idea where to look for the file.

Basic form:

locate pattern

For example, if you are looking for files that contain the string "resolv.conf" anywhere in their name:

locate resolv.conf 

In other words, locate works "as if" you had used:

locate *resolv.conf*

Display the path to an executable file

which command



Sed operates on files or streams that contain lines of text. The output is the result of applying a command to lines that match a pattern. By far the most common commands are substitution and deletion.

Learning sed

Sed has a well-deserved reputation for being a write-only programming language. There are entire books and many web sites devoted to sed. Some specialize in demonstrating unbelievably obscure expressions.

Most of the effort to master Sed is associated with learning to write pattern expressions. The introductory sections that follow cover the basics. After the introduction, 15 examples are presented. If you take time to understand them, you will become reasonably proficient.

Testing sed expressions

Most of the examples shown below can be tested by sending through a single line:

echo 'test string' | sed 'some expression'

Some examples only make sense when applied to a whole file. In those cases you can test the expression using one of these forms:

cat testFile | sed 'some expression'


sed 'some expression' testFile

Example command line formats

sed 'expression' infile >outfile
sed 'expression' <infile >outfile
echo "some text" | sed 'expression'

Applying a sequence of sed commands

sed -e 'expression1' -e 'expression2'

Or the shorter form:

sed 'expression1;expression2;'

Sed script files

A sequence of sed expressions can be stored in a file. Each line of the file is a sed expression without the bounding quotes. The script file may contain comments that begin with "#" as in a bash script. Usage:

    sed -f scriptFile inputFile


    cat inputFile | sed -f scriptFile
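For a concrete sketch (file names invented): build a script file that strips leading whitespace and deletes blank lines, then run it:

```shell
# Two expressions, one per line, with bash-style comments
cat > /tmp/clean.sed <<'EOF'
# remove leading spaces and tabs
s/^[ \t]*//
# delete blank lines
/^$/d
EOF

# Apply the script to a small test stream
printf '  hello\n\nworld\n' | sed -f /tmp/clean.sed
```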


Patterns may be string literals, character sets or special symbols. Special symbols must be escaped using a backslash:

? $ . [ ] \ / ^

Common patterns:

mystring        - A literal
^               - Beginning of the line
$               - End of the line
.               - Any single character
\n              - Newline
.*              - Zero or more characters
.+              - One or more characters
.?              - Zero or one characters

(The * + or ? may be used after any construct)

Grouping is done using parentheses:

\(abc\)\+       - One or more instances of abc
\(a\|b\)        - a or b 

Character sets:

[pqr]       - Any one of p q or r   
[a-z]           - Any lower case letter
[^a-z]          - Any non lower case letter
[a-z]*          - Any number of lower case letters
[a-zA-Z]*       - Any number of mixed lower and upper case letters

Character classes and their equivalent sets:

[[:alnum:]]  - [A-Za-z0-9]     Alphanumeric characters
[[:alpha:]]  - [A-Za-z]        Alphabetic characters
[[:blank:]]  - [ \x09]         Space or tab characters only
[[:cntrl:]]  - [\x00-\x19\x7F] Control characters
[[:digit:]]  - [0-9]           Numeric characters
[[:graph:]]  - [!-~]           Printable and visible characters
[[:lower:]]  - [a-z]           Lower-case alphabetic characters
[[:print:]]  - [ -~]           Printable (non-Control) characters
[[:punct:]]  - [!-/:-@[-`{-~]  Punctuation characters
[[:space:]]  - [ \t\v\f]       All whitespace chars
[[:upper:]]  - [A-Z]           Upper-case alphabetic characters
[[:xdigit:]] - [0-9a-fA-F]     Hexadecimal digit characters


To simply print lines that contain a pattern:

sed -n '/pattern/p'

The -n suppresses sed's default behavior of printing every line; the p command then prints only the lines that match the pattern.


A basic principle of sed is the phrase: "On each line". Consider that a prefix to each comment below.

Substitute only the first instance of old with new:

sed 's/old/new/'

Substitute all instances of old with new:

sed 's/old/new/g'

Substitute 3rd instance of old with new:

sed 's/old/new/3' 

Substitute old with new on lines that contain red:

sed '/red/s/old/new/g' 

Remove leading whitespace:

sed 's/[ \t]*//'

Remove trailing whitespace:

sed 's/[ \t]*$//'


Delete lines that contain a pattern:

sed '/pattern/d' 

Output lines that contain a pattern:

sed '/pattern/!d'

Delete all blank lines:

sed '/^$/d'

Delete extra blank lines (multiple to one)

sed '/^$/N;/^\n$/D'

Using the value of a pattern

The "&" symbol inserts the whole pattern matched:

Add a prefix to every line:

sed 's/.*/myPrefix&/'

Add a suffix to every line:

sed 's/.*/&mySuffix/'

Put quotes around a line:

sed 's/^.*$/"&"/'

Quoting lines is great for processing file names that may contain spaces:

find . -name "*.mp3" | sed 's/^.*$/"&"/' | xargs chmod 644

(Change the permissions of all mp3 files.)

Using capture groups

The literal values matched in pattern expressions may be captured in escaped parenthesis:

\(any pattern\)

Multiple capture groups may be used. On the substitution side, the value of each capture group may be inserted:

\1 for the first capture
\2 for the second capture, etc.

The expression:

echo "his name was Fred" | sed 's/his name was \(.*\)/Name: \1/' 


Name: Fred

Editing in-place

Sed normally edits the input text and outputs new text. If you want the new text to replace the old text in an existing file, use -i to modify the input file. For example, to make a global substitution in multiple text files:

sed -i 's/oldThing/newThing/g' *.txt

It is wise to preview the change on one file before using -i. On Windows, it is also wise to use -b, which keeps sed from converting the files to Unix line-terminator format.
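One way to preview, sketched with an invented file and strings: run the same expression without -i and diff the output against the original (diff exits nonzero when the files differ, hence the || true):

```shell
# Throwaway input file for the demo
printf 'oldThing here\nleave me\n' > /tmp/demo.txt

# Show what -i *would* change, without touching the file
sed 's/oldThing/newThing/g' /tmp/demo.txt | diff /tmp/demo.txt - || true
```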

Obscure file operations

The 23rd line:

sed '23q;d'

The last line:

sed '$!d'

Number the lines:

sed = myfile.txt | sed 'N;s/\n/\t/'

Reverse the order of lines:

sed '1!G;h;$!d' 

Remove trailing blank lines:

sed ':a;/^\n*$/{$d;N;ba;}'

Append a final blank line:

sed '$G'

Script to 'normalize' trailing blank lines in place:

sed -i ':a;/^\n*$/{$d;N;ba;}' myfile
sed -i '$G' myfile


Good intentions

Whenever I install a new Linux system, I always try to see how long I can live with SELinux. I know it's a really good idea. Especially on an internet-facing server. My record is 4 days.

Disable SELinux

Edit: /etc/selinux/config

Change: SELINUX=disabled


Obliterate SELinux

This is only necessary if the little dot that appears after the file permissions when running "ls -l" really bothers you:

find / -print0 | xargs -0 -n 1 setfattr -h -x security.selinux

Obviously, you shouldn't do this if you ever plan to turn on SELinux.

Authorize a warning

If you're sure the warning is from something ok, proceed:

cat /var/log/audit/audit.log | audit2allow -M results

This will produce two files:

results.te and results.pp

Review the contents of results.te: In some cases, you simply have to restore a label. Specific instructions are given.

If new rules are required, run:

semodule -i results.pp


The lm_sensors package handles all motherboard temperature sensors. The command line program is:

sensors
To see a continuous temperature monitor that updates every 2 seconds:

watch -n 2 sensors


SysVinit vs Systemd

SysVinit is the old way of managing system services; systemd is the new way. Most Linux distributions provide an emulation layer so users familiar with SysVinit can keep working as they always have.

See the section "Systemd" following this one (Services).

Control individual services

Services or 'daemons' are programs that run in the background, usually without any user interaction.

They implement system functions such as logging, network servers, and many other housekeeping tasks.

To start a service by hand:

service <serviceName> <selector>

Typical selectors are: start, stop, restart, status.

If you run the command without a selector, it will display a list of possible selectors.

Run levels identify groups of system services

The operating system can run in different modes called run levels. Each runlevel determines a set of services to run and a set of services to stop.

Run levels are identified by small integers. The group of services associated with each run level is conventional:

0   Halt
1   Single user
2   Multiuser, no networking, local additions
3   Multiuser, networking, local additions
4   Multiuser, networking, no local additions
5   Same as 3 plus X Windows Login
6   Reboot

Show the current run level

who -r

Change the run level of the system immediately

telinit newLevelNumber 

Change the run level the system will use after reboot

This is done by editing the file:

/etc/inittab

Inside, you will find an expression that looks like this:

id:3:initdefault:

In the example shown above, "3" is the run level used at boot time. If you wanted to have an X-Windows splash screen with a login dialog, you would change this number to "5".

Configuring runlevels by hand

For each runlevel, we need to specify which services start and which services stop. We also need to specify the order in which services start or stop to allow for interdependencies.

A collection of directories and symbolic links are used to perform these functions. The Linux boot process uses these links to start or stop the appropriate services at boot time or when you explicitly switch the run level.

A directory exists for each run level X:


Each run level directory contains symbolic links. The links all point to the service control files found in:


The name of the link begins with the letter "S" if the service should start. The name of the link begins with "K" if the service should stop. (Think "killed.") The start and stop links for a given service point to the same file.

The link names also determine the order of starting or stopping: Following the S or K is a two-digit number that determines the order of execution relative to the other links in the directory. Higher numbers make the service start later.

After the ordering digits, the service name appears. For example, the following link will start networking at relative step 10 of runlevel 3:

/etc/rc.d/rc3.d/S10network -> ../init.d/network

Networking gets turned off in runlevel 1, so we find this link:

/etc/rc.d/rc1.d/K90network -> ../init.d/network

When a service is installed, a start or stop link should be created in every run level directory. Here's a complete example for the web server httpd:

Starting:

/etc/rc.d/rc3.d/S85httpd -> ../init.d/httpd
/etc/rc.d/rc4.d/S85httpd -> ../init.d/httpd
/etc/rc.d/rc5.d/S85httpd -> ../init.d/httpd

Stopping:

/etc/rc.d/rc0.d/K15httpd -> ../init.d/httpd
/etc/rc.d/rc1.d/K15httpd -> ../init.d/httpd
/etc/rc.d/rc2.d/K15httpd -> ../init.d/httpd
/etc/rc.d/rc6.d/K15httpd -> ../init.d/httpd

It is important to keep the links complementary: If you create start links on levels 2 and 5, you should create kill links on levels 0, 1, 3, 4, and 6.

It is clearly a pain to do all this correctly by hand.

Configuring runlevels with chkconfig

The chkconfig command helps you maintain run level links. It doesn't start or stop services, it only creates or deletes the appropriate symbolic links in the run level directories.

The chkconfig command obtains run level and starting order information from a special comment found inside each service control file. A typical comment in a service control file looks like this:

# chkconfig: 2345 90 60

This was extracted from my /etc/rc.d/init.d/crond control file. The comment says that the crond service should start on runlevels 2345 at relative position 90. By the complementary principle, it should have kill links on levels 0, 1 and 6 at relative position 60.

Install both start and kill links for a newly installed service:

chkconfig --add serviceName

Remove all start and kill links for a service at all run levels.

chkconfig --del serviceName

Some service control files will have a minus character for the list of run levels. For example, my Samba control file (smb) contains:

# chkconfig: - 91 35

To install a new service like this you first use:

chkconfig --add serviceName

This will put kill links on every level.

Then you specify the levels where you want the service to run:

Add start links and remove kill links from specified levels:

chkconfig --level levelString serviceName on

Add kill links and remove start links from specified levels:

chkconfig --level levelString serviceName off

If you don't use the "--level levelString" option, the default levels 2345 will be used.

Example to start Samba at runlevels 345:

chkconfig --level 345 smb on

It often happens that people try to maintain the links by hand and get everything messed up. To clean house when you are uncertain about a service configuration, first get rid of all the links using:

chkconfig --del serviceName



yum install smartmontools

Check to see if a drive supports SMART

smartctl -i /dev/sda

Enable SMART for a drive

smartctl -s on /dev/sda

Display time required for testing

smartctl -c /dev/sda

Run a short test

smartctl -t short /dev/sda

Run a long test (most accurate)

smartctl -t long /dev/sda

Display stats

smartctl -l selftest /dev/sda

Display overall health of a drive

smartctl -H /dev/sda

Display detailed SMART info for a drive

smartctl -a /dev/sda

Determine if a disk is spinning without waking it up

smartctl --nocheck standby -i /dev/sda | grep "Power mode"

Not spinning modes:

STANDBY
SLEEP

Spinning modes:

ACTIVE
IDLE

Using the smartd daemon

To send fault notices by email, sendmail or exim must be installed:

    dnf install exim

Edit /etc/aliases to forward root email to yourself.

Start and enable the service:

systemctl enable --now smartd

The default configuration will scan all the disks every 30 minutes unless they are in standby. If smartd encounters the same disk in standby 10 times in a row, it will wake it up and scan anyway.
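The daemon is driven by /etc/smartd.conf. A sketch of one directive, adapted from the stock configuration comments — the device name and mail target are examples:

```
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
```

Here -a monitors all attributes, -o on enables automatic offline testing, -S on saves attribute values across restarts, and the -s expression schedules a short self-test daily at 2am and a long test Saturdays at 3am.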



Concepts and rules

Be sure to log in as yourself. (not root!) Train with more ham than spam.

Train with spam

sa-learn --no-sync --spam --mbox <mboxFile>

Train with ham

sa-learn --no-sync --ham --mbox <mboxFile>

Sync after training session

sa-learn --sync

Display sample counts

sa-learn --dump magic

Where to put user rules


Where to put shared rules


Specify options and parameters

required_hits 5.0
use_bayes 1
use_pyzor 1
use_razor2 1
bayes_auto_learn 1
allow_user_rules 1
ok_locales en ja
report_safe 0

Configure a whitelist

whitelist_from * 

Configure a blacklist

blacklist_from *

Create custom header rules

header    H_NINJA_1 From =~ /\.ninja/i
score     H_NINJA_1 10.0
describe  H_NINJA_1 From server contains dot ninja 

header   H_002 Subject =~ /acne/i
score    H_002 0.5
describe H_002 Acne cures

Create custom body rules

body     H_UNLIM /Get Unlimited access/i
score    H_UNLIM 0.5
describe H_UNLIM Get Unlimited access

Check for configuration errors

spamassassin -D --lint 2>&1 | more

Capture configuration check

spamassassin -D --lint > results.txt 2>&1

Test one file

spamassassin -t -D < someFile.eml 2>&1

Check for add-on module failure

spamassassin -D --lint 2>&1 | grep -i failed

Use MailSpike

header RCVD_IN_MSPIKE_BL eval:check_rbl('mspike-lastexternal', '')
tflags RCVD_IN_MSPIKE_BL net
header RCVD_IN_MSPIKE_WL eval:check_rbl('mspike-lastexternal', '')
tflags RCVD_IN_MSPIKE_WL net
score RCVD_IN_MSPIKE_WL -2.1

Adjust rule scores

Examples only. Not recommendations.

score   RCVD_IN_SORBS_ZOMBIE    3.5
score   RCVD_IN_SORBS_DUL       2.5
score   URIBL_RHS_DOB           3.8

Backup the database

sa-learn --backup >myState.sadatabase

Restore a database backup

sa-learn --restore myState.sadatabase

Erase the database

sa-learn --clear


List all running services

systemctl list-units --type=service --state=running
Show status of a service

systemctl status foo

Start, stop, or restart a service

systemctl start foo
systemctl stop foo
systemctl restart foo

Enable/disable at boot time

systemctl enable foo
systemctl disable foo

Enable at boot time and start immediately

systemctl enable --now foo

Check boot-time enabled status

systemctl is-enabled foo.service; echo $? 

The value returned is a script status:

0 => enabled 
1 => disabled
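That status makes the command easy to use in scripts. A generic sketch of the pattern — here true stands in for the systemctl call so the snippet runs anywhere:

```shell
# Capture an exit status and branch on it.
# Replace "true" with: systemctl is-enabled foo.service
true; rc=$?
if [ "$rc" -eq 0 ]; then
    state=enabled
else
    state=disabled
fi
echo "service is $state (rc=$rc)"
```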

Targets and runlevels

Systemd uses "targets" instead of runlevels.

Some retro symbolic links are provided as well: runlevel3.target -> multi-user.target, runlevel5.target -> graphical.target

Switch runlevels immediately

systemctl isolate multi-user.target
systemctl isolate graphical.target

Show the boot run level

systemctl get-default

Set the boot run level

systemctl set-default <target>

Common boot run level targets

Systemd         Old SysV
poweroff           0
rescue             1
multi-user         3
graphical          5
reboot             6

List all units

systemctl list-units

List units that failed

systemctl list-units --state=failed

List active units

systemctl list-units --state=active

Make an incremental change and reload a service

This will open a blank file where you can add lines or override lines in an existing service file:

systemctl edit foo

Note that some systemd script directives are incremental. For example, to modify the "ExecStart" line, your modification file must first clear the old definition:

ExecStart=
ExecStart=your new version

When you exit the editor, it will create a new directory and file:

/etc/systemd/system/foo.service.d/override.conf
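A typical drop-in produced this way might look like the following (the daemon path and flag are hypothetical):

```
# override.conf (hypothetical contents)
[Service]
ExecStart=
ExecStart=/usr/local/bin/foo --verbose
```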
Exiting the editor will also cause systemd to reload the modified script as if you had executed:

systemctl daemon-reload

To remove the effect of making an incremental change, remove the entire directory:

rm -rf /etc/systemd/system/foo.service.d

And reload the scripts:

systemctl daemon-reload

Edit the entire systemd script

This will open the existing script in an editor session:

systemctl edit --full foo

When you exit the editor, an override script file will be created in:

/etc/systemd/system/foo.service
Exiting the editor will also cause systemd to reload the modified script as if you had executed:

systemctl daemon-reload

To undo the effect of this edit, simply remove the file:

rm /etc/systemd/system/foo.service

And reload the scripts:

systemctl daemon-reload

Because it's a little hard to see what you're doing with an incremental change, you might consider using --full all the time. The reason for not doing that is the possibility that there are other incremental modifications for the same script. In that case, your change should be included with the others, rather than replacing the entire unit file. Check for this possibility by examining the files in:

/etc/systemd/system/foo.service.d
User services

As the name suggests, these are services that regular users can control.

The service file "myThing" (for example) must be installed by root to the path:

/etc/systemd/user/myThing.service
After that, any user can execute:

systemctl --user start myThing
systemctl --user stop myThing
systemctl --user enable myThing

Services enabled by a user will start when the user logs in and stop when the user logs out.

The log entries for a user service can be displayed:

journalctl -b --user-unit myThing
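Such a unit file can be quite small. A sketch with illustrative names and paths:

```
[Unit]
Description=My per-user background job

[Service]
ExecStart=/usr/local/bin/myThing

[Install]
WantedBy=default.target
```

The WantedBy=default.target line is what lets "systemctl --user enable" attach the service to the user's session.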


Copy files to and from remote hosts. An ssh relationship must be configured before using scp. (see next section)

Copy a remote file to a local directory:

scp user@remoteIP:remoteFile localDirectory

Copy a local file to a remote directory:

scp localFile user@remoteIP:/remoteDirectory

Entire directories can be copied using -r:

scp -r localDirectory user@remoteIP:remoteParentDirectory

Copy files between two remote hosts:

scp sourceUser@sourceIP:sourcePath destUser@destIP:destPath

The general syntax is:

scp srcUser@srcMachine:srcFilePath destUser@destMachine:destFilePath

If the path names are not absolute, they are relative to the login directories for the designated users.


The concept

Secure Shell (ssh) lets you connect to a remote host and start a shell session just like Telnet. Unlike Telnet, ssh uses cryptography to log in and protect the data flow between you and the remote host.

Setting up ssh access is conceptually involved, but once this is done, ssh is very easy to use. For example: To start a shell session on a remote host you simply type:

Login using your current user name:

ssh remoteHostIpName

Specify the remote user name:

ssh -l userName remoteHostIpName

Or use 'email' notation:

ssh userName@remoteHostIpName 

You can run a command on a remote system and see the results locally:

ssh userName@remoteHost ls

SSH can perform many other marvels such as port forwarding: This lets you channel tcp/ip traffic between any selected client and server port through the secure connection. A common use of this feature is to run remote X-Windows programs and have them display on the client automatically.

The following sections deal with understanding and configuring basic ssh access.

RSA cryptography

SSH supports several encryption mechanisms, but one of the best is based on the RSA public key system.

To use RSA, you need a pair of numerical keys. One key is public: you can pass it out to your friends or publish it in a public directory. The other key is private and must be kept secret.

RSA is a Good Thing™ because it works without ever exchanging private keys over an insecure communication channel, e.g. the internet. It also supports signatures: a person who receives a message can verify that only you could have sent it.

Create your own set of RSA keys

Create a .ssh directory in your home directory and set restricted permissions:

mkdir .ssh
chmod u=rwx,g-rwx,o-rwx .ssh

Or using numerical permissions:

chmod 700 .ssh

Run ssh-keygen to create your public and private key files.

ssh-keygen -t rsa -C "A comment"

The program will propose default filenames for your public and private key files, which you should accept:

~/.ssh/id_rsa
~/.ssh/id_rsa.pub
You will also be asked for a passphrase. If you specify a passphrase, you will need to enter it whenever ssh or other programs want to use your private key.

The comment parameter is optional. If you don't supply a comment using "-C", the default is a string derived from your login name and the name of your host, formatted like an email address:

userName@hostName
The comment appears as plain text in your public key string. When examining an authorization file on a remote server, this text helps you remember who is authorized.

Once you have a key set, you can freely distribute copies of your public key file (id_rsa.pub) to anyone who wants to send you secure messages.

The ssh-keygen program will create the .ssh directory and key files with the correct permissions. But sometimes things get messed up. These commands will fix everything:

cd ~/
chmod u=rwx,g-rwx,o-rwx .ssh 
cd .ssh
chmod u=rw,g-rwx,o-rwx id_rsa 
chmod u=rw,g=r,o=r id_rsa.pub

Or if you're of the old school:

cd ~/
chmod 700 .ssh
cd .ssh 
chmod 600 id_rsa
chmod 644 id_rsa.pub
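The same fix-up can be rehearsed safely in a scratch directory and verified with stat. A sketch (it uses /tmp so it never touches a real ~/.ssh):

```shell
# Recreate the recommended permissions in a scratch directory
# and read them back with stat.
demo=/tmp/ssh-perms-demo
rm -rf "$demo"
mkdir -p "$demo/.ssh"
touch "$demo/.ssh/id_rsa" "$demo/.ssh/id_rsa.pub"

chmod 700 "$demo/.ssh"
chmod 600 "$demo/.ssh/id_rsa"
chmod 644 "$demo/.ssh/id_rsa.pub"

dirmode=$(stat -c %a "$demo/.ssh")
keymode=$(stat -c %a "$demo/.ssh/id_rsa")
pubmode=$(stat -c %a "$demo/.ssh/id_rsa.pub")
echo "$dirmode $keymode $pubmode"
```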

Enable ssh access to a remote account

You must setup your client ssh keys as described above. They will be in the hidden .ssh directory in your home directory on the client machine.

Email, ftp or otherwise copy your id_rsa.pub file to your home directory on the remote machine. To avoid confusion, rename the copy to something unique, such as "yourKey.pub".

You must append the contents of this file to the authorized_keys file in the .ssh directory at the top-level of your remote home directory.

To do this, you need to log into your remote account by some other means or ask someone who has access to do this for you. This command will append your key to the authorized_keys file:

cat yourKey.pub >> .ssh/authorized_keys

If you're creating a new .ssh/authorized_keys file, you must set the permissions or remote access will be denied:

chmod u=rw,g-rwx,o-rwx .ssh/authorized_keys

If some other user such as "root" does this for you, they also need to make sure that you own the file:

chown yourName:yourGroupName .ssh/authorized_keys

Similarly, the remote .ssh directory must have the correct permissions and owner:

chmod u=rwx,g-rwx,o-rwx .ssh
chown yourUserName:yourGroupName .ssh

Here's a quick check on how the .ssh directory should look:

ls -ld .ssh

drwx------ 2 you you 4096 2008-02-27 13:58 .ssh

ls -l .ssh

-rw------- 1 you you 1727 2007-08-04 07:15 authorized_keys
-rw------- 1 you you  887 2004-07-16 03:48 id_rsa
-rw-r--r-- 1 you you  221 2004-07-16 03:48 id_rsa.pub
-rw-r--r-- 1 you you 2553 2008-02-25 10:55 known_hosts

The above listing shows the known_hosts file, which is automatically created and/or updated whenever remote clients connect to this account.

Eliminate the need for passwords

With ssh configured properly, you don't need to remember or type passwords when logging into a remote system. To achieve this marvel, edit:

/etc/ssh/sshd_config
Find this keyword and change the value as shown:

PasswordAuthentication no

Per host configuration

By adding a "config" file to your .ssh directory, different configuration options and defaults can be set for each host you commonly use. Here is an example .ssh/config file (the host names are placeholders):

Host myHost
    User myusername
    ForwardX11 yes
    ForwardX11Trusted yes

Host otherHost
    User otherUserName

By specifying a username as shown above, the command line for remote login becomes very simple:

ssh myHost
Most of the options you can specify system-wide for the ssh client in /etc/ssh/ssh_config may alternatively go in your local .ssh/config file, eliminating the need to modify the system defaults.

Permissions for the config file should be 644.

Dealing with virtual machines

If your linux system is running inside a virtual machine, you'll need to add an option to the .ssh/config file for all users on that machine (IPQoS throughput is the commonly cited fix):

Host *
    IPQoS throughput

Without this fix, you'll see this error when you try to connect to a remote machine:

client_loop: send disconnect: Broken pipe

Creating a host key set

An entire host machine may have a key set. The public part of this key is kept on remote servers to authorize access by the entire machine. Many services can be configured to use host-level authorization.

Host keys should be located in:

/etc/ssh
The automatic installers for many Linux distributions create the host key files in /etc/ssh automatically.

To create them by hand, run ssh-keygen and specify the path names shown above. Passphrases are not normally used with host keys.


Create a repository on the server

svnadmin create myproject

Populate the repository with files

svn import [localDirectory] repoURL

Checkout the repository

svn checkout repoURL [localDir]

List changes since last commit

svn status

Show the log

svn log 

Note: The "svn log" won't list your most recent
commits until you perform an "svn update". 
Many subversion GUI clients do this automatically 
after each commit.

Show diff changes for all or selected files

svn diff <optional filename>

Revert the context directory

svn revert

Revert one file or subdirectory

svn revert aFile

Add a file or directory

svn add aFile

Remove a file or directory

svn rm aFile

Move a file or directory

svn mv fromPath toPath

Create a directory

svn mkdir aDirectory

Commit your changes

Each commit bumps the revision number:

svn commit -m "This is why I did the deed." 

Bring the local files up to date

svn update

Show the log between dates

svn log -r {2006-11-20}:{2006-11-29}

Show the changes for a file or directory

svn blame aFile

Create a tag

svn copy ^/trunk ^/tags/MyTagName -m "Why I did this"

Create a branch

svn copy ^/trunk ^/branches/MyBranchName -m "Why I did this"

Switch to a branch

svn switch ^/branches/MyBranchName

Show all tags or branches

svn list ^/branches
svn list ^/tags

Working in a branch, merge all changes from trunk

svn merge ^/trunk

Working in trunk, merge all changes from a branch

svn merge ^/branches/MyBranchName

Dealing with "reintegrate" and branching

Ideas about using --reintegrate and closing out a 
branch are obsolete. You can work on a branch after
merging it to trunk. Everything just works. 

Resolve conflicts

After running "svn update", sometimes conflicts are reported.

Display a summary of possible actions:

svn resolve

Accept your changes as they were before running "svn update":

svn resolve --accept mine-full 

Accept the changes brought in by "svn update":

svn resolve --accept theirs-full 

The resolve command also accepts an optional path, so you can restrict the action to a particular file.

svn resolve --accept mine-full source/goop.c

Use revision keywords


BASE - Version in your working copy as of your last commit.
HEAD - Most recent version in the remote repository.
COMMITTED - The most recent revision <= BASE, in which an item changed.

Show the last change committed to foo.c

svn diff -r PREV:COMMITTED foo.c

Show the log message for the latest repository commit

svn log -r HEAD

Compare your working copy to the latest version in the repository

svn diff -r HEAD

Compare the unmodified version of foo.c with the latest version of foo.c in the repository

svn diff -r BASE:HEAD foo.c

Show all commit logs for the current versioned directory since you last updated

svn log -r BASE:HEAD

Rewind the last change on foo.c, decreasing foo.c's working revision

svn update -r PREV foo.c

Compare the unmodified version of foo.c with the way foo.c looked in revision 14

svn diff -r BASE:14 foo.c

Compare base and working copy ignoring whitespace and newlines

svn diff myThing -x --ignore-eol-style -x --ignore-all-space

Enable adding or editing log comments after commit

This must be done on a per-repository basis on the server. I like to make the change in my template repository, which I simply copy when starting new projects.

cd /var/www/svn/yourRepoName

You should see top-level directories: conf, dav, db, hooks, locks.

cd hooks
cp pre-revprop-change.tmpl pre-revprop-change
chmod a+x pre-revprop-change 
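The copied template must be edited to permit the changes you want. A minimal sketch of a hook body that allows only log-message edits, written here as a shell function so the logic can be tried without a repository (Subversion invokes the hook with the arguments REPOS REV USER PROPNAME ACTION):

```shell
# Permit modification of svn:log; reject everything else.
pre_revprop_change() {
    propname="$4"
    action="$5"
    if [ "$action" = "M" ] && [ "$propname" = "svn:log" ]; then
        return 0    # allow editing commit log messages
    fi
    echo "Changing revision property $propname is prohibited" >&2
    return 1
}

# Exercise the function with sample arguments.
pre_revprop_change /svn/Repo 12 hugh svn:log M && allow=yes || allow=no
pre_revprop_change /svn/Repo 12 hugh svn:author M && deny=no || deny=yes
```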

The hook scripts must be "runnable" which means that the volume itself must have execute permissions. Some website operators like to mount the /var/www directory on a volume with execute permission disabled. If you prefer that arrangement, the /var/www/svn directory must be mounted on another volume. This is easy to do if you run some kind of volume management software: LVM, ZFS, etc.

Backup the repository

If the svn server root is at /svn and your project repository is in /svn/MyProject: (This is the server location of your project database, not one of your checkouts.)

cd /svn
svnadmin dump MyProject > MyProject.svndump

Restore a backup

cd /svn
rm -rf MyProject   # If there's an old one there.
svnadmin create MyProject
svnadmin load MyProject < MyProject.svndump

Change the local copy to reference a new URL

svn switch --relocate oldRepoURL newRepoURL


Switch to another user account

su <username>

Become superuser

sudo -i

Editing the /etc/sudoers file

visudo

Enable superuser switch with no password in /etc/sudoers

yourUserName ALL=(ALL) NOPASSWD: ALL

Interactive spelling check and correction

aspell -c myFile.txt

Check spelling of one word: script version

echo $1 | ispell -a | sed -n -e '/^\&/p' -e '/^\#/p'

Put this expression in a shell script on your PATH.

Cut out part of lines cols n-m

cut -c n-m path

Cut out part of lines n-eol

cut -c n- path
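For example, applied to a sample string:

```shell
# cut -c selects character columns; n-m is a range, n- runs to end of line.
line="abcdefg"
firstpart=$(echo "$line" | cut -c 1-3)   # columns 1-3
rest=$(echo "$line" | cut -c 5-)         # column 5 to end of line
echo "$firstpart $rest"
```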

Reverse the order of lines in a file

tac myfile.txt > myfileReversed.txt 

Remember this by "tac" = "cat" reversed.
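A quick demonstration on a scratch file:

```shell
# Write three lines, reverse them with tac, and join the result
# onto one line for easy inspection.
printf 'one\ntwo\nthree\n' > /tmp/tac-demo.txt
reversed=$(tac /tmp/tac-demo.txt | paste -sd' ')
echo "$reversed"
```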


Update the clock from a time server (Three steps)

Get time from a remote server:

rdate -u -p -s timeServerName

Or more simply:


Then move it into hardware:

hwclock --systohc

You can also pull the time out of hardware to set the system clock:

hwclock --hctosys <opt>

The <opt> may be --localtime or --utc. For localtime, you
need to have an /etc/localtime file, which can be a copy of or
link to a zoneinfo file. (These are in /usr/share/zoneinfo)

It's also possible to apply an incremental adjustment to the clock:

hwclock --adjust

The startup scripts normally do something like this:

hwclock --hctosys
hwclock --adjust

Schedule a command for later execution

Examples using a specific time:

at 10:25pm
at 1am Dec 20
at 2pm tomorrow
at midnight tomorrow

Examples using relative time:

at now + 10 minutes
at 4pm + 3 days
at 4pm + 3 weeks

A prompt will appear for you to enter commands. Finish with EOF (control D)

Show your pending jobs:

atq
Remove a job:

atrm <job number> 

Send a reminder to your cellphone

at 6am Mar 17
mail -s "Meeting at 10am in Room 101"
Don't forget to bring the rats!

Using 'at' from inside a bash script

at 3am <<-EOF
    service tomcat restart
EOF

Start a timed server as the master clock (put in rc.local)

timed -M -F localhost

Start a timed client

timed
Use cron for periodic script execution

Use a bash script in one of these directories:

/etc/cron.hourly
/etc/cron.daily
/etc/cron.weekly
/etc/cron.monthly
Using crontab

Each user has a private crontab file. On Redhat/Fedora systems the file is created with this path:

/var/spool/cron/userName
The file won't exist until a cron job is scheduled.

To edit your crontab file:

crontab -e

Crontab file format:

Min(0-59) Hour(0-23) Date(1-31) Month(1-12) Day(0-6 Sun-Sat) Command

Use a * character for "every." This command lists the root directory to a file at 9AM every Monday:

0 9 * * 1 ls /root > /root/listing.txt
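A few more schedule patterns that may be useful (the script paths are hypothetical):

```
*/15 * * * * /usr/local/bin/poll.sh      # every 15 minutes
30 2 * * *   /usr/local/bin/backup.sh    # 2:30 AM every day
0 0 1 * *    /usr/local/bin/report.sh    # midnight on the first of each month
```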


Prompt for new password

passwd

Change your login shell program

chsh
Shut down and reboot or halt

shutdown -r now
shutdown -h now

Adding or removing users

useradd userName
userdel name 

In Redhat Land, useradd also creates and adds the new user to a new unique group with the same name.

Adding or removing groups

groupadd name
groupdel name

Changing passwords

passwd user

Adding or removing users from a group

gpasswd -a userName groupName
gpasswd -d userName groupName

Show group memberships for a user

groups userName

Change all sorts of stuff at once

usermod loginName \
    -g newLoginGroup
    -G newGroup1,...,newGroupN
    -l newLoginName
    -d newHomeDirectory
    -u newUID

Using -G, the user will be removed from any group not listed.

Using -l, the user still has their old home directory.

You can't change the login name of a user who is currently logged in.

See man page for more options.

Change a username and the associated group and home directory

First, make sure the user is logged out, then:

usermod -l newName oldname
groupmod -n newName oldName
usermod -d /home/newName -m newName

Log into a remote system with no password

THIS SECTION IS OBSOLETE. Please use ssh instead.

rlogin remoteIP

The .rhosts file must be in the remote login directory. It contains the ipNames of allowed users.

You can add a local username if not the same as remote.

The .rhosts file must have read privileges only for owner.

/etc/xinetd.d/rlogin must not be disabled.

If you want to rlogin from a root account /etc/securetty must have an entry for "rlogin".

Update: This method is obsolete and dangerous. Please see the SSH section for a safe alternative.

Forgotten password

Concept: Boot the system using the bash shell as the startup application. This will bypass the usual system initialization and login process. Then run passwd to set a new root password.

The procedure varies depending on the boot loader. Example using Grub:

Hit "e" on the boot menu. Select the line that begins with "kernel". Hit "e" again. Add this string to the end of the line:

init=/bin/bash
Press "Enter", then "b" to boot the system. At the bash prompt:

mount /proc
mount / -o rw,remount
passwd

At this point, you will be prompted to enter a new password. Next, remount the root file system read-only to flush the cache:

mount / -o ro,remount

Now control-alt-delete to reboot.

Enable auto login

This is for Gnome3 based desktop environments that use gdm.

Edit: /etc/gdm/custom.conf

Add these lines to the [daemon] section:

AutomaticLoginEnable=True
AutomaticLogin=yourUserName
Enable auto unlock keyring

This is irresponsible, dangerous and not P.C. - Worse, it's an ugly hack.


dnf install gnome-python2-gnomekeyring

Create this file:

~/.config/systemd/user/keyring.service

With this content:

[Unit]
Description=Thwart the keyring service

[Service]
ExecStart=python -c "import gnomekeyring;gnomekeyring.unlock_sync(None, 'PASSWORD');"

[Install]
WantedBy=default.target
Login as the user associated with PASSWORD and execute:

systemctl --user enable --now keyring.service

That will install a shortcut in:

~/.config/systemd/user/default.target.wants
Vi Text Editor

Cursor motion

In most environments, the arrow keys work.

j       Down one line.
k       Up one line.
h       Cursor left.
l       Cursor right. (lower case L) 
<spacebar>  Cursor right.
<backspace> Cursor left.
<control>d  Down one page. 
<control>u  Up one page.

Going places

$       Go to end of line.
0       Go to beginning of line.
G       Go to end of file.
1G      Go to start of file.
nG      Go to absolute line number n.
/pattern    Go to next line that contains pattern.


Inserting text

i       Insert text at cursor.
a       Insert text after cursor.
<ESC>       Leave insert mode. 
P       Insert last thing deleted.


Deleting text

dd      Delete current line.
x       Delete current char at cursor.
<bs>        Delete previous char.
J       Delete line break. 
        (Join current line with next line)


Cut and paste

Marks are denoted by single letters.

The "current line" is the line that contains the cursor.

A selection is a block of lines bounded by a mark and the current line.

ma      Set a mark "a" at current line.
d'a     Cut lines from mark "a" to current line.
y'a     Copy ("Yank")lines from mark "a" to current line.
P       Paste deleted or yanked lines before the current line. 
V       Go to visual mode: move cursor to select lines
        then use d or y.


Indenting

Indenting commands operate on selections as described above.

>'a     Indent right from mark "a" to cursor.
<'a     Indent left from mark "a" to cursor.

It's easier to use visual mode followed by < or >.


File commands

:r <filename>   Read in a file at current location.
:w      Write everything to current file.
:w <filename>   Write everything to selected file. 
:w! <filename>  Overwrite existing file.


Saving and quitting

:wq     Write current file and quit.
:wq!        Write current file and quit. Override protections.
:q      Quit but prompt if changes were made. 
:q!     Quit and don't save changes.

Bash commands

:!<any cmd> Show result of Bash command.
:r !<cmd>   Insert result of Bash command.


Search and replace

:s/old/new      Substitute first instance on the current line.
:s/old/new/g        Substitute all instances on the current line.
:%s/old/new/g       Throughout the whole file
:'a,'bs/old/new     Between marks "a" and "b".
:'<,'>s/old/new     Inside visual mode selection (V).
:*s/old/new     Abbreviated visual mode selection.

Other line number designations:

.       Current line.
$       Last line.
/pattern/   Next line that contains pattern.
\?      Next line with previously used pattern.


Output container

As a rule, .mkv is a good choice because it handles chapters, multiple
tracks, subtitles, etc. correctly and in a highly portable manner.

Display all supported input file extensions

ffmpeg -demuxers

To see details about one of the entries listed:

ffmpeg -h demuxer=<entry>

Display all supported output file extensions

ffmpeg -muxers

To see details about one of the entries listed:

ffmpeg -h muxer=<entry>

ffmpeg copy options

Copy the first video, audio, and subtitle track without re-encoding:

-c copy

Unfortunately, "-c copy" is rarely satisfactory...

Copy all video, audio, and subtitle tracks without re-encoding:

-vcodec copy -acodec copy -scodec copy

Display information about a video

ffmpeg -i myVideo.flv

Extract the audio track and convert to mp3

ffmpeg -i myVideo.flv -ab 128k myAudio.mp3

Extract the first audio track and preserve existing encoding

ffmpeg -i myVideo.flv -vn -acodec copy myAudio.xxx

You have to specify an audio file extension xxx that matches the existing encoding: .wav, .mp3, etc.

Extract the first audio track and specify a new encoding

ffmpeg -i myVideo.avi -vn -ar 44100 -ac 2 -ab 192k -f mp3 mySound.mp3

Extract a specific audio track, specify a new encoding and mix down to stereo

ffmpeg -i myVideo.mkv -map 0:a:N -c:a aac -ac 2 myVideoStereo.mkv

The value of N selects the audio track (0,1,2...) The output file extension can be a container or a muxing entry of the correct type. In this case, the aac codec goes to a file with extension ".adts"

Convert (re-encode) a video to almost any other format

ffmpeg -i myVideo.flv myNewVideo.xxx

Choosing "xxx" determines the container format.

Change the container without re-encoding

ffmpeg -i myVideo.xxx -vcodec copy -acodec copy -scodec copy myNewVideo.yyy

xxx and yyy should name any of the well-known containers such as mkv, mp4, etc.

Concatenate two or more tracks

The most flexible result requires using a text file (mylist.txt in this example) that contains the track file names:

    file track1.mp4
    file track2.mp4

Then use: (for example)

ffmpeg -f concat -i mylist.txt -vcodec copy -acodec copy -scodec copy Result.mkv

Display available codecs

ffmpeg -codecs

Convert a video and specify the codecs

ffmpeg -i myVideo.flv -acodec myAudioCodec -vcodec myVideoCodec myNewVideo.xxx

Combine video and srt subtitle tracks

ffmpeg -i myVideo.mp4 -i mySubtitles.srt -c copy -c:s srt -metadata:s:s:0 language=eng myVideo.mkv

Combine video and multiple srt subtitle tracks

ffmpeg -i myMovie.mp4 -i english.srt -i chinese.srt -map 0 -map 1 -map 2 \
    -c copy -c:s srt -metadata:s:s:0 language=eng -metadata:s:s:1 language=chi myMovie.mkv

Remove all subtitle tracks

ffmpeg -i myVideo.mkv -c copy -sn myVideoNoSub.mkv

Remove closed caption subtitles

ffmpeg -i input.mkv -c copy -bsf:v "filter_units=remove_types=6" output.mkv

Extract the first subtitle track as a .sup

ffmpeg -i myVideo.mkv -map 0:s:0 -c copy myVideo.sup

Extract the first subtitle track as a .srt

ffmpeg -i myVideo.mkv myVideo.srt

Extract a specific subtitle track

ffmpeg -i myVideo.mkv -c copy -map 0:s:0 mySubtitles.srt

-c copy     Do not convert the selected tracks
-map 0:s:0  First track
-map 0:s:1  Second track

Combine video and sup subtitle tracks

This appears to work, but the subtitles are out of sync :-(

ffmpeg -i myVideo.mkv -i mySubtitles.sup -c copy -map 0 -map 1 -metadata:s:s:0 language=eng myResult.mkv

Display subtitle track information

ffprobe -v error -of json -show_entries "stream=index:stream_tags=language" -select_streams s myVideo.mkv

Output part of a video without re-encoding

ffmpeg -i input.mp4 -ss beginTime -to endTime -c copy output.mkv

You can omit the "-to" clause to specify the end of the video. The complete time format is:

hours:minutes:seconds.milliseconds
ffmpeg will perform the cuts at the nearest keyframes.

Show the keyframe times

ffprobe -loglevel error -select_streams v:0 -show_entries packet=pts_time,flags \
-of csv=print_section=0 myVideo.mkv | awk -F',' '/K/ {print $1}'
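The awk filter itself can be checked against captured sample output; the timestamps below are invented for illustration:

```shell
# Sample lines in the csv form ffprobe emits: pts_time,flags.
# Lines whose flags contain K mark keyframes.
sample='0.000000,K__
0.417000,___
5.005000,K__
5.422000,___'
keyframes=$(printf '%s\n' "$sample" | awk -F',' '/K/ {print $1}' | paste -sd' ')
echo "$keyframes"
```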

Create webm video

ffmpeg -i "MyMovie.avi" -vcodec libvpx -acodec libvorbis "MyMovie.webm"

Rotate a video to 90 degrees by changing the metadata

ffmpeg -i myVideo.mkv -c copy -metadata:s:v:0 rotate=90 myNewVideo.mkv 

Note: The "rotate" value is absolute. If nothing seems to happen, your video may already be rotated. Try zero first.

Rotate a video by transposing the data

ffmpeg -i myVideo.mkv -vf "transpose=N" myNewVideo.mkv

transpose=0  90 Counter-clockwise and vertical flip (default)
transpose=1  90 Clockwise
transpose=2  90 Counter-clockwise
transpose=3  90 Clockwise and vertical flip

To avoid re-encoding the audio, add -codec:a copy

Create an animated gif

ffmpeg -i myVideo.mkv myAnimation.gif

Create an mpg for NTSC DVD

ffmpeg -i myVideo.avi -target ntsc-dvd -ps 2000000000 -aspect 16:9 myVideo.mpeg

Create an mpg for PAL DVD

ffmpeg -i myVideo.avi -target pal-dvd -ps 2000000000 -aspect 16:9 myVideo.mpeg

Create an mpg for NTSC VCD

ffmpeg -i myVideo.avi -target ntsc-vcd myVideo.mpg

Create an mpg for PAL VCD

ffmpeg -i original.avi -target pal-vcd result.mpg

Resize a video

ffmpeg -i original.mp4 -filter:v scale=720:-1 -c:a copy result.mp4

(The -1 means choose the height to preserve aspect ratio)

Crop a video

ffmpeg -i original.mkv -filter:v "crop=width:height:x:y" result.mkv

Rotate an AVI movie 90 degrees clockwise

mencoder \
    -vf rotate=1 \
    -ovc lavc -lavcopts vcodec=wmv2 \
    -oac copy \
    INPUT.avi -o OUTPUT.avi

Flip a quicktime movie and convert to h264 inside an avi container

mencoder \
    -vf flip \
    -ovc x264 \
    -oac pcm \
    INPUT.mov -o result.avi

Concatenate video files

mencoder \
    -oac copy \
    -ovc copy \
    -o result.flv input1.flv input2.flv ...

X Windows

Start X windows and specify bits per pixel

startx -- -bpp 24

Start X windows and specify a layout

startx -- -layout myLayout

Layouts are defined in /etc/X11/XF86Config

Start X with a specific monitor dots-per-inch setting

startx -- -dpi 80   # My Hitachi monitor
startx -- -dpi 95   # My Tecra flat panel

You can do this with a config file .xserverrc in home dir:

exec X -dpi 80

Then just "startx" as usual.

Start X and record the messages so you can see what happened

The startx messages are automatically recorded in:

/var/log/Xorg.0.log
If you want to explicitly redirect the messages from startx:

startx > myXDebug.txt 2>&1

Display info about the active X display

xdpyinfo
Show properties of an X window

xprop
Send X output of one program to another machine

<Any X command> -display <targetIP>:0

Send all X output to another machine

export DISPLAY=targetIPnameOrNumber:0.0

Set the default cursor

xsetroot -cursor_name left_ptr



Show X events (including keys)

xev
Show X user prefs settings

xset -q

Allow some other machine to draw on your x display

xhost +<other machine name or ip number>

Put this command in your .xinitrc to make it permanent

Run xterm on another machine & exec a command

xterm -display <ip>:0 -e <command>

Make XF86Config use the xfs font server

Use FontPath "unix/:-1" (Redhat 6.x)

Update: "unix/:7100"    (Redhat 7.x and other Linux systems)

Add a TrueType font directory (Requires FreeType package)

cd theFontDirectory
ttmkfdir > fonts.scale 
chkfontpath --add `pwd`
service xfs reload

Note: Redhat runs ttmkfdir and mkfontdir on every directory known to xfs in the xfs startup script. These fonts become known when you run chkfontpath.

Add a font to the Redhat anti-aliasing system

Put the new font file in: /usr/share/fonts
Or in the per-user directory: ~/.fonts Then run:

fc-cache <directory>

List the fonts X knows about

xlsfonts
Show local font server info

fsinfo -server unix/:-1

Example /etc/X11/xdm/Xservers for a one-display system

:0 local /usr/X11R6/bin/X

Show the status of X video support

xvinfo
Install the NVIDIA binary drivers

yum install kmod-nvidia

Use kdm to support remote X terminals (or Cygwin)

You need to edit a bunch of files on the server:

In the file:


Make sure access is enabled as shown:


In the file:


Comment out the line:

* CHOOSER BROADCAST #any indirect host can get a chooser

Add lines to the end of the file with the ip name or number of each client:

Note: If you use ip numbers, they must be reversible to names. You can do this by adding a definition to hosts or by running dns.
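For the hosts-file approach, a single line on the server is enough. The address and names below are made-up placeholders:

```
# /etc/hosts on the kdm server
192.168.1.50   xterm1.example.com   xterm1
```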

In the file


If-and-only-if your server runs headless, comment out this line:

:0 local /usr/X11R6/bin/X

In the file:


If you want automatic startup of kdm or xdm on the server, change the default runlevel to 5:

id:5:initdefault:
In the file:


If you don't start kdm using inittab, add this entry to rc.local:


In the file:


If you have more than one desktop system installed, this entry selects the one that will be used for remote and local logins: (Use KDM for kde or GDM for Gnome.)


In your iptables firewall setup script you must allow xdmcp:

iptables -A udpChain -p udp --dport xdmcp -j ACCEPT

Remote access with SSH RSA security

Newer Linux distributions are configured to require SSH authorization for remote X clients. In this document, see "SSH access with RSA keys" for details about creating and using keys.

When using RSA, you still need the ip name or number of each client machine in the server's Xaccess file.

The X server has a file that contains the SSH public keys of each user and/or entire client machines that are allowed to connect:

/usr/share/config/kdm/kdmkeys
If you create this file, you must set the permissions:

chmod u+rw,g-rwx,o-rwx /usr/share/config/kdm/kdmkeys

You don't need to authorize the whole client if you only want to allow selected users on that client.

Public keys are copied or mailed from the client machines. A special public and private key set may be created for the whole host. It is kept in:


You append the contents of this file to the server's kdmkeys file to authorize everybody on the whole client.

Public key files for individual users are found in:


Simply append the contents of this file to the server's kdmkeys file to authorize this user.

With all the setup completed, you can login to the remote machine using ssh and run X-Windows programs. The display will be automagically sent back to your machine!

UPDATE: Newer Redhat/Fedora systems need some additional setup on the client side: In the file /etc/ssh/ssh_config you must add these directives:

ForwardX11 yes
ForwardX11Trusted yes

Without these changes, you would have to login to the server using ssh with the "-Y" switch to enable access by a trusted host.
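Instead of editing the system-wide /etc/ssh/ssh_config, each user can enable forwarding in their own ~/.ssh/config on the client. The host name below is a placeholder:

```
# ~/.ssh/config
Host server
    ForwardX11 yes
    ForwardX11Trusted yes
```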


DNF is the successor to yum on Redhat-like distributions.

Install package and all required dependencies

yum install <packageName(s)>

Remove packages

yum remove <packageName(s)>

Obtain and install updates for all installed packages

yum update  

The downloaded files are in /var/cache/yum

List available updates without installing them

yum check-update

Display information about a package

yum info <packageName>

List the installed repositories

yum repolist

Install a new repository

You can edit or create files in:

/etc/yum.repos.d
Alternatively, most yum repositories have and associated rpm file you can install or remove.
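A repository file is a short ini-style fragment. This is a sketch only; the repository name and URL are placeholders, not a real repository:

```
# /etc/yum.repos.d/example.repo
[example]
name=Example repository
baseurl=http://repo.example.com/releases/
enabled=1
gpgcheck=0
```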

Command line options

Install relative to an alternative root filesystem:

yum --installroot=/someDirectory install <packageName>
Sometimes different repositories contain packages with conflicting names or build attributes. To avoid installing or updating with the wrong package, repositories can be enabled or disabled by editing their yum.repos.d files. These settings can also be overridden on the command line:

yum --disablerepo=<repoName> install <packageName>
yum --enablerepo=<repoName> install <packageName>

List installed packages that depend on a package

repoquery --whatrequires --installed somePackage

List packages required by a package

yum deplist somePackage


To get started see: ZFS Without Tears

Create pools

zpool create myPool sdb sdc sdd sde
zpool create myPool mirror sdb sdc
zpool create myPool raidz sdb sdc sdd sde
zpool create myPool raidz2 sdb sdc sdd sde

Frequently used zpool create options

Specify 4k sectors: -o ashift=12
Don't mount the pool:   -m none

Export a pool (take it offline)

zpool export myPool

Show pools available for import

zpool import

Import a specific pool

zpool import myPool

Import a pool and change the pool name:

zpool import myPool myNewName

Import a pool and mount under an alternative directory

zpool import myPool -o altroot=/someDirectory

Repair and verify checksums

zpool scrub myPool

Clear device errors

zpool clear myPool

Create datasets

zfs create myPool/myDataset

Turn off last-access time recording

zfs set atime=off myPool/myDataset

Enable extended attributes

zfs set xattr=sa myPool/myDataset

Specify or change mountpoints

Specify a mountpoint and mount the filesystem:

zfs set mountpoint=/mnt/myMountpoint myPool/myDataset

Specify a legacy mountpoint

zfs set mountpoint=legacy myPool/myDataset

The /etc/fstab entry for a legacy mount point:

myPool/myDataset  /mnt/myMountpoint  zfs defaults 0 0

The pool itself is a dataset, and the "-m path" and "-m none" create options are equivalent to:

zfs set mountpoint=path myPool
zfs set mountpoint=none myPool

Enable compression

zfs set compression=on myPool/myDataset

Take a snapshot

zfs snapshot myPool/myDataset@mySnapshot

Rollback a dataset

zfs rollback myPool/myDataset@mySnapshot

Rollback options

-f : Unmount the file system if necessary
-r : Delete all dependent snapshots
-R : Delete all dependent clones

Replicate pools or datasets

zfs snapshot myPool/myDataset@now
zfs send myPool/myDataset@now | zfs receive -d myOtherPool

Perform incremental replication

zfs rename myPool/myDataset@now myPool/myDataset@then
zfs snapshot myPool/myDataset@now
zfs send -i myPool/myDataset@then myPool/myDataset@now \
    | zfs receive -uFd myOtherPool
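A recurring replication job can wrap the three steps above in a small script. This is a sketch using the dataset names from the examples; prefixing zfs with echo makes it a dry run that prints each command, so you can review it before pointing it at real pools.

```shell
#!/bin/sh
# Incremental replication sketch. Assumes an initial full send has been
# done and a @now snapshot already exists on the source.
# Change ZFS="echo zfs" to ZFS="zfs" to run the commands for real.
ZFS="echo zfs"
SRC=myPool/myDataset
DST=myOtherPool

$ZFS rename "$SRC@now" "$SRC@then"   # previous snapshot becomes the base
$ZFS snapshot "$SRC@now"             # take a fresh snapshot
$ZFS send -i "$SRC@then" "$SRC@now" | $ZFS receive -uFd "$DST"
```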

Send options

-i : Incremental
-R : Create a replication package with all snapshots and clones

Receive options

-u : Don't mount anything created on the receiving side
-F : First rollback to the most recent snapshot
-d : Discard pool name from each path element
-x mountpoint : Don't mount any filesystems

Replicate between hosts

zfs send myPool@now | ssh destHost zfs receive -d myDestPool

Configure samba sharing

Enable usershares in smb.conf:

usershare path = /var/lib/samba/usershares
usershare max shares = 100
usershare allow guests = yes
usershare owner only = no

Create the usershares directory by hand

cd /var/lib/samba
mkdir usershares
chmod o+t usershares
chmod g+w usershares

Restart samba and load the shares

systemctl restart smb
zfs share -a

To share a dataset

zfs set sharesmb=on myPool/myDataset

Clients will see the share at


The owner and properties of the myDataset directory on the server must be compatible with access by samba.