SSH
To enable remote logins with ssh:
apt-get install openssh-server
Then you can login with:
$ ssh efossnet@proxy.dream.edu.et
To verify the host key fingerprint of a machine:
$ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
Note: you need to verify it before logging in!
More information at http://www.securityfocus.com/infocus/1806
Examples of ssh usage
To log in:
$ ssh efossnet@proxy
To run a command in the remote computer:
$ ssh efossnet@proxy "cat /etc/hosts"
To copy a file to the remote computer:
$ scp Desktop/july-18.tar.gz efossnet@proxy:
To copy a file from the remote computer:
$ scp efossnet@proxy:july-18.tar.gz /tmp/
Beware of brute-force login attempts
Warning about SSH: there are people who run automated scans for SSH servers and try to log in using common, easily guessed passwords.
If you have an SSH server on the network, use strong passwords; even better, if you can, disable password authentication entirely: in /etc/ssh/sshd_config, add:
PasswordAuthentication no
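For the change to take effect, restart the SSH server (the init script name follows the Debian/Ubuntu convention of the time):
$ sudo /etc/init.d/ssh restart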
To log in using public/private keys:
- Create your key: ssh-keygen -t rsa
- Copy your public key to the machine where you want to log in: ssh-copy-id -i .ssh/id_rsa.pub efossnet@proxy
- Now you can ssh using your RSA key.
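To check that key-based login works (the permissions hint below is an assumption based on sshd's default StrictModes behaviour):
$ ssh efossnet@proxy
It should log you in without asking for the account password (it may ask for the key's passphrase instead). If it still asks for a password, check the permissions on the remote account:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys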
If you use ssh often, read these:
- http://mah.everybody.org/docs/ssh
- http://www.securityfocus.com/infocus/1812
- http://www.sshkeychain.org/mirrors/SSH-with-Keys-HOWTO/SSH-with-Keys-HOWTO-6.html
proxy
Problems we had today with the proxy:
SSL does not work
Reason: squid tries to directly connect to the ssl server, but the AAU network wants us to go through their proxy.
Ideal solution: none. There is no way to tell squid to use a parent proxy for SSL connections.
Solution: update the documentation for the Dream university users, telling them to set up a different proxy for SSL connections.
Longer term solution: get the AAU network admins to enable outgoing SSL connections from the Dream university proxy.
Other things that can be done:
- file a bug against squid describing the need and requesting the feature
- download squid source code and implement the feature ourselves, then submit the patch to the squid people
Browsing normal pages returns a 'Connection refused' error.
In the logs, the line is:
1153294204.912 887 192.168.0.200 TCP_MISS/503 1441 GET http://www.google.com.et/search? - NONE/- text/html
That "/503" is one of the HTTP error codes.
Explanation of the error codes:
- http://www.w3.org/Protocols/HTTP/HTRESP.html
- http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
- http://offline.web.cern.ch/offline/web/http_error_codes.html
Reason: the other proxy is refusing connections from our proxy.
Solution: none so far. We will need to get in touch with the admins of the other proxy to find out why it refuses connections from our proxy, and how we can fix the problem.
postfix on smtp.dream.edu.et
Basic information is at http://www.postfix.org/basic.html.
Difference between mail name and smarthost:
- The mail name is the name of the mail server you're setting up (TODO: need more details on what it's used for)
- The smarthost is the name of the mail server that will relay mail for you.
Quick way to send test mails:
apt-get install mailx
echo ciao | mail efossnet@localhost
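To check that the message was delivered (assuming the default mbox spool under /var/mail):
$ tail /var/mail/efossnet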
To configure a workstation not to do any local mail delivery and to send all locally generated mail to smtp.dream.edu.et:
- install postfix choosing "Satellite system"
- put smtp.dream.edu.et as the smarthost.
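For reference, a minimal sketch of the main.cf lines such a setup typically ends up with (exact values are an assumption; the installer generates them from your answers):
relayhost = smtp.dream.edu.et
myorigin = /etc/mailname
inet_interfaces = loopback-only
relayhost points at the smarthost; inet_interfaces = loopback-only stops the workstation from accepting mail from the network.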
To set up a webmail: apt-get install squirrelmail (on a working apache setup).
To set up mailing lists: apt-get install mailman, then follow the instructions in /usr/share/doc.
Mail server issues we encountered
When a mail is sent to efossnet@localhost, the system tries to send it to efossnet@yoseph.org
Investigation:
- "yoseph.org" does not appear anywhere in /etc or /var/spool/postfix
- postfix configuration has been reloaded
- postfix logs show that the mail has been 'forwarded'
Cause: the user efossnet had forgotten that they had set up a .forward file in their home directory.
Solution:
rm ~efossnet/.forward
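When debugging similar surprises, check for per-user forwarding (a quick sketch, assuming home directories under /home):
$ ls -l ~efossnet/.forward
$ find /home -maxdepth 2 -name .forward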
Apache
To add a new website:
- cd /etc/apache2/sites-available
- sudo cp default course
- sudo vi course:
  - Remove the first line
  - Add a ServerName directive with the address of your server: ServerName course.dream.edu.et
  - Customize the rest as needed: you at least want to remove the support for browsing /usr/share/doc, and you want to use a different document root.
- sudo a2ensite course
- sudo /etc/init.d/apache2 reload
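For reference, a minimal sketch of what the edited course file could look like (the document root and log paths are examples, not the real setup):
<VirtualHost *>
    ServerName course.dream.edu.et
    DocumentRoot /var/www/course
    ErrorLog /var/log/apache2/course-error.log
    CustomLog /var/log/apache2/course-access.log combined
</VirtualHost>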
More VIM
Undo: u (in command mode)
Redo: ^R (in command mode)
You can undo and redo multiple times.
To recover a lost password for root or for the ubuntu admin user
Boot with a live CD, mount the system on the hard disk (the live CD usually does it automatically), then edit the file /etc/shadow, removing the password:
enrico:$1$3AJfasjJFHa234dfh230:13343:0:99999:7:::
becomes:
enrico::13343:0:99999:7:::
You can edit the file because, in the live CD system, you can always become root.
After you do this, reboot the system: you can log in without a password, and set yourself a new password using the command passwd.
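A sketch of the manual steps, in case the live CD does not mount the disk for you (the device name is an assumption; check the output of fdisk -l):
$ sudo fdisk -l                  # find the root partition
$ sudo mount /dev/hda1 /mnt
$ sudo vi /mnt/etc/shadow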
Installing packages not on the CDs
To get a package for installing when offline:
- apt-get --print-uris install dnsmasq
- Manually download the packages at the URLs that it gives you.
Otherwise, apt-get --download-only install dnsmasq will download the package for you into /var/cache/apt/archives.
You can install various previously downloaded debian packages with:
dpkg -i *.deb
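A script-friendly variant (a sketch; the awk extraction relies on --print-uris quoting each URL in single quotes, and dnsmasq is just the example package):
$ apt-get -qq --print-uris install dnsmasq | awk -F"'" '{print $2}' > urls.txt
$ wget -i urls.txt      # run this on a machine with Internet access
$ sudo dpkg -i *.deb    # then install the downloaded packages offline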
Backups
There are various ways:
dump (for ext2/ext3 file systems) or xfsdump (for xfs file systems).
Makes a low-level dump of the file system.
It must be run separately for each partition.
It makes the most exact backup possible, including inode numbers.
It can do full and incremental backups.
To see the type of the filesystems, use 'mount' with no parameters.
To restore: restore or xfsrestore.
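A minimal sketch of a full dump and restore (paths and the dump level are examples):
$ dump -0u -f /backup/home.dump /home      # level 0 = full; -u records the date in /etc/dumpdates
$ restore -rf /backup/home.dump            # run from inside the directory to restore into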
tar
Filesystem independent.
It can work across partitions.
It correctly backs up permissions and hard links.
It can do full and incremental backups.
Example:
tar lzcpf backup.tar.gz /home /var /etc /usr/local
tar lzcpf root.tar.gz /
To restore:
tar zxpf backup.tar.gz
faubackup
Filesystem independent.
It uses the hard drive as backup storage.
Always incremental.
It cannot do compression.
Unchanged files in new backups are just links to old backups, and do not occupy space.
Any old backup can be deleted at any time without compromising the others.
It can be used to provide a "yesterday's files" service to users (both locally and exported as a read-only samba share...).
To restore, just copy the files from the backup area.
amanda
apt-get install amanda-client amanda-server
It is a network backup system.
It can do full and incremental backups.
You can have a backup server which handles the storage and various backup clients that send the files to backup to the server.
It takes some studying to set up.
To restore: it has its own tool.
Some data requires exporting before backing it up:
- To save the list of installed packages and the answers to configuration questions (debconf-get-selections is in the debconf-utils package):
dpkg --get-selections > pkglist
debconf-get-selections > pkgconfig
To restore:
dpkg --set-selections < pkglist
debconf-set-selections < pkgconfig
apt-get dselect-upgrade
If you do this, then you only need to back up /etc, /home, /usr/local, /var.
- To save the contents of a MySQL database:
mysqldump name-of-database | gzip > name-of-database.dump.gz
To restore:
zcat name-of-database.dump.gz | mysql name-of-database
You can schedule these dumps to be made one hour before the time you make backups.
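For example, if backups run at 02:00, a root crontab entry (edit with crontab -e as root; the output path is an example) can make the dump at 01:00:
0 1 * * * mysqldump name-of-database | gzip > /var/backups/name-of-database.dump.gz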
Scheduling tasks
As a user:
crontab -e
As root: add a file in one of the /etc/cron.* directories.
In cron.{hourly,daily,weekly,monthly} you put scripts.
In the other directories you put crontab files (man 5 crontab).
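For example, a file dropped in /etc/cron.d uses the crontab(5) format plus a user field (the job itself is only an illustration):
# m h dom mon dow user  command
17 3 * * * root /usr/local/sbin/nightly-cleanup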
If the system is turned off during normal maintenance hours, you can do two things:
- Change /etc/crontab to use different maintenance hours
- Install anacron (it's installed by default in Ubuntu)
For scheduling one-shot tasks, use at(1):
$ at 17:40
echo "Please tell Enrico that the lesson is finished" | mail efossnet@dream.edu.et
^D
When and how to automate
- First, you manage to do it yourself
- Then, you document it
- Then, you automate it
Start at step 1 and go to 2 or 3 if/when you actually need it.
(credits to sto@debian.org: he's the one from whom I first heard it put so well).
Interesting programs to schedule during maintenance
- rkhunter
- chkrootkit
- checksecurity
- debsecan
- tiger
Important keys to know in a Unix terminal
These are special keys that work on Unix terminals:
^C : interrupt (sends SIGINT)
^\ : quit (sends SIGQUIT)
^D : end of input
^S : stop scrolling
^Q : resume scrolling
Therefore, if the terminal looks like it got stuck, try hitting ^Q.
Problems we had today with postfix
- Problem: mail to efossnet@dream.edu.et is accepted only if sent locally.
Reason:
$ host -t mx dream.edu.et
Host dream.edu.et not found: 3(NXDOMAIN)
Solution: tell dnsmasq to handle a MX record also for dream.edu.et:
mx-host=dream.edu.et,smtp.dream.edu.et,50
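After changing the dnsmasq configuration, restart it and repeat the query to verify (the init script path follows the Debian convention):
$ sudo /etc/init.d/dnsmasq restart
$ host -t mx dream.edu.et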
- The problem was not solved by the previous solution.
Reason: postfix was logging complaints that mentioned localhost as a domain name.
Solution: fixed by changing 'myhostname' in main.cf to something different than localhost.
Note: solved by luck. Investigate why this happened.
Problems found yesterday and today
- there is no way to tell squid to use another proxy for SSL connections: it only does them directly
- if you want to configure evolution to get mail from /var/mail/user, you need to explicitly enter the path. It would be trivially easier if evolution presented a good default, since it's easy to compute. It would also be useful if below the "Path" entry there were some text telling what path is being requested: the mail spool? the evolution mail storage?
- In Evolution: IMAP or IMAPv4r1? What is the difference? Why should I care?
- apt-get --print-uris doesn't print the URIs if the package is already in the local cache, and there seems to be no way to make it do so.
- in /etc/apache2/sites-available/default, is the NameVirtualHost * directive appropriate there? It gets in the way when using 'default' as a template for new sites. Otherwise, one can add a new (disabled) site that can be used as a template for new sites instead of default.
- the default comments put by crontab -e are not that easy to read.