Welcome to Linux Support and Sun Help
Search LinuxSupport

Linux FAQ

Multi core CPUs

With the low cost of dual- and quad- core processors, I am interested in how Linux will be able to access and use this technology now and in the near future. I have done some looking, and while some information is available, I was unable to determine how it would benefit me on a Linux desktop computer. Also is there one particular distro or version of Linux better at implementing the new processors than other versions?


Linux has supported multiple processors (or processor cores) for some years: I am writing this on a Core 2 Duo system (although Kate doesn't really need all the power of both cores to check even my spelling). Most distros support multiple processors out of the box; the important criterion is that the kernel has been built with SMP (symmetric multiprocessing) support. Some distros enable this in their generic kernel, others have a separate SMP-enabled kernel that is used when the installer detects more than one CPU core. There are a number of easy checks for SMP support: does the output from
 cat /proc/cpuinfo
show a figure for "cpu cores"? Try running top in a terminal and pressing 1 (that's the number one, not the letter l), which should toggle the CPU usage display between an overall figure and individual CPU loads. Running
 zgrep SMP /proc/config.gz
will show CONFIG_SMP=y if SMP support is enabled, provided your kernel has support for /proc/config.gz; otherwise you'll see an error that tells you nothing.

SMP support in the kernel means improved multitasking, as programs can be run on whichever processor has the least load, but most individual programs still use only one processor. However, some CPU-intensive programs can split their load between more than one CPU core. For example, the -threads option for ffmpeg will split a video transcoding task across the given number of processors for a substantial reduction in processing time. Software compilation can also benefit, as most programs consist of many smaller files that need to be compiled, and this can be done in parallel. By setting the MAKEOPTS environment variable in your profile to -j3 for a dual core system, or -j5 for a quad core, programs will usually compile much faster. Note that the number used here is usually one more than the number of CPUs (-j2 is used for single processor systems) to ensure the processors are loaded most effectively.
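As a sketch, the setting looks like this (MAKEOPTS is the Gentoo/Portage convention; for a one-off manual build you pass -j to make directly):

```shell
# In /etc/make.conf on Gentoo, or exported from your shell profile:
export MAKEOPTS="-j3"    # dual core: jobs = number of cores + 1

# For a single manual build, the flag goes straight to make:
make -j3
```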

If you are going to load the system with intensive tasks like software compilation or video transcoding, try using the nice command to keep your desktop responsive while this is going on.
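As a quick illustration of the effect, nice with no arguments prints the niceness of the current process, so nesting it shows the adjustment taking hold (the ffmpeg invocation is a hypothetical example):

```shell
# nice with no arguments prints the current niceness; wrapping it in
# 'nice -n 19' (the lowest priority) shows the change taking effect:
nice -n 19 nice
# → 19

# The same wrapper around a real job (hypothetical filenames):
# nice -n 19 ffmpeg -threads 2 -i input.avi output.avi
```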


Random upgrade crashes

I have a Compaq Deskpro 650MHz with 128MB RAM and a 10GB hard disk. I am running Mandrake 10.0 on it, which is quite out of date, so I have been trying to upgrade to, or install from scratch, Mandriva 2007. The computer crashes almost every time, usually during the install. Sometimes the install completes, but when I try to boot the system everything locks up too. I get a lot of text on the screen that makes me think it's something to do with the kernel. I have tried other distros: Fedora, Ubuntu, Slackware and Gentoo all have the same problem. Is there a way to install newer distros (even Mandrake 10.1 doesn't work) and make them work?


The fact that you experience problems with various distros, and they don't always appear at the same point, indicates that this may be a hardware problem. The most common culprits are faulty memory, overheating and a substandard PSU. Before you investigate any of these, I recommend you try running the installers in text mode. The graphical installers require a lot of memory, far more than is needed to run the installed distro, because they load everything into a huge ramdisk so it is still available when you change CDs. A text mode install reduces the memory requirements substantially.

If that fails, you need to check for the previously mentioned problems. Testing memory is easy: most distro install discs include memtest as an option on the boot menu. Select this and let it run for as long as possible. You need to let it make at least two passes; running overnight is best. Overheating can be caused by failing fans or a build-up of crud (sorry to use such a technical term here) in the fans, heat sinks or vents. Use one of those cans of compressed air to blow everything clear. A failing power supply can also cause random reboots and lockups, but the only way to test it is to try a replacement.

If none of that works, make a note of the last dozen or so lines of text on the screen when it fails. While some of these messages come from the kernel, many are from the various programs that are run when a system boots. Knowing the content of these messages will help to pinpoint the problem. If the messages are not consistent, you almost certainly have a hardware fault.


Playing my DVDs

My work takes me to many locations worldwide and when I have some free time, like many other people, I enjoy watching DVD movies. I am running Fedora 13 on my T60 ThinkPad and installed VLC and all the other plugins from the Livna repository, but I am only able to watch non-encrypted DVDs. I do have libdvdcss installed but am still unable to watch the majority of films that come my way. I have heard that the Matshita DVD drive will not play encrypted discs. Is this true? If not, is there a software fix that could cure this problem?


Your drive is locked to a specific region and will only play encrypted discs from that region. The DVD Consortium divided the world into six regions and a drive can only play discs of the region it is set to. The regions are numbered in the following way:

  1. North America (USA and Canada)
  2. Europe, Middle East, South Africa and Japan
  3. Southeast Asia, Taiwan, Korea
  4. Latin America, Australia, New Zealand
  5. Former Soviet Union (Russia, Ukraine, etc), rest of Africa, India
  6. China

It is possible to change the region of most drives, by using something like regionset (http://linvdr.org/projects/regionset) but there is a catch. Most drives only allow you to change the region setting four or five times, then it stays locked to the last set region, which isn't much use if you are travelling from region to region and buying DVDs on your travels. With a laptop, you don't even have the option of swapping the drive for a region-free one. Some drives can have their firmware updated to either allow unlimited region changes or even work with all regions. However, this is not true for all Matshita drives, so you may be out of luck. There is information on many different drives at www.rpc1.org.
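If you do decide to use one of those limited changes, regionset is simple to run; the device path below is an assumption and may be /dev/sr0 or /dev/hdc on your system:

```shell
# regionset reports the drive's current region and remaining change
# count, then prompts before writing a new region code.
# /dev/dvd is a guess at your device node - adjust to match your hardware.
regionset /dev/dvd
```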

If it is not possible to update the firmware on your drive (some drives use encrypted firmware) you are left with the option of ripping your DVDs to another format on another computer, one with a region-free DVD drive. This works well, and saves you taking the DVDs on your travels as you can store the ripped files on your hard drive, but it only works with DVDs you have before you travel; you still cannot play DVDs you buy abroad. dvd::rip (www.exit1.org/dvdrip) is a good choice for this.


OpenOffice.org and Java

I have installed OpenOffice.org 2.3 from LXF99 as per the instructions in the magazine. While most parts seem to run without any issues, when I try to open my Base files they demand an updated Java runtime, which seems to have disappeared. I have tried downloading and reinstalling Java from Sun but I still can't access my database files. I get a message asking me to locate Java via Tools > Options, but I can't locate any Java installation.


OpenOffice.org needs Java for the database and help systems, but the rest of the suite will work without it. The best way to install Java is through your distro's package manager. The download from Sun will work, but keeping as much software as possible inside the package management system reduces dependency and conflict problems later on.

Getting OpenOffice.org to work with Java can be less than intuitive. Select Options from the Tools menu and pick OpenOffice.org > Java. Enable Java by ticking the `Use a Java runtime environment' box, then wait while it scans your system for suitable Java installations. This can take from a few seconds to a minute, depending on the speed of your system. This is Java, so it seems appropriate to make a cup of coffee while you are waiting. Eventually, you should see a list of your Java Runtime Environment (JRE) installations; select one and press OK. You need to quit and restart OpenOffice.org for the change to take effect.

If OpenOffice.org fails to find your JRE, you can click the Add button and give the path to it manually. This should be something like /opt/sun-jdk-1.6.0.12/jre/bin. If you have installed Java through your distro's package manager (rather than from the Sun download), you can generally use that to view the contents of the package, which will tell you where it is installed.


Batch processing users

I have been running CentOS since version 3 in the school where I work. I have input 500 pupils into the system one by one. I have set up a domain logon with Samba. Is there any way to bulk input so many user accounts in one go?


Of course there is: this is one of the areas in which command line tools excel, being able to use data from one file or program in another. Are you trying to create system users, Samba users, or both? Use the useradd command to create system users and smbpasswd to create Samba users. Both of these commands can be used in a script that reads a list of user names from a file. Put your new user names in a file called newusers, one per line, like this:
 username1 password1 real name1
 username2 password2 real name2
 ...
You can create this file manually or extract the information from another source, such as using sed or awk on an existing file or running an SQL query to pull the information from your pupil database.
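For example, if your admin system can export a comma-separated list, a hypothetical pupils.csv with username, password and real name columns can be converted with awk (the file name and column order are assumptions):

```shell
# A hypothetical CSV export - username,password,real name per line:
printf '%s\n' 'jsmith,secret1,John Smith' 'mjones,secret2,Mary Jones' > pupils.csv

# Convert it to the space-separated layout used below:
awk -F, '{ print $1, $2, $3 }' pupils.csv > newusers

cat newusers
# → jsmith secret1 John Smith
# → mjones secret2 Mary Jones
```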

Then run this to add them as system users.
  cat newusers | while read u p n
  do
    useradd --comment "$n" --password $(pwcrypt $p) --create-home $u
  done
You may need to install pwcrypt before you can do this; it is a command line interface to the system crypt() function. You could use the crypt command from the older cli-crypt package, but pwcrypt is more up to date and considered the better option nowadays. If you want to force the users to change their passwords immediately, which is generally a good idea, add
  passwd -e $u
after the useradd command. This expires their password, so they need to change it when they next log in. To set the Samba passwords, use smbpasswd like this:
  echo -e "$p\n$p" | smbpasswd -as $u
The -s option tells smbpasswd to accept the password from standard input instead of prompting for it; the echo command is in that format because smbpasswd still requires the password to be given twice when used this way. So all you need to add the users to both the system and Samba is:
  cat newusers | while read u p n
  do
    useradd --comment "$n" --password $(pwcrypt $p) --create-home $u
    passwd -e $u
    echo -e "$p\n$p" | smbpasswd -as $u
  done
You must add the users to the system password file before trying to add them to Samba, or smbpasswd will complain. Similarly, when you delete users, run smbpasswd -x username before userdel username. NB


Hidden Hostname

I have a Linux box which I use as a server for testing web development projects (it's in my DMZ, so I use a dedicated machine rather than my main machine). When it was running Fedora Core 7, Windows machines were able to access it by its hostname, and all seemed well. Now I have upgraded the machine to Fedora 12 and its hostname is unavailable: pinging webdev (the server name) no longer works, but I can still access it through its IP address.


I suspect you're setting the address statically instead of using DHCP. When you let the computer request an address via DHCP, the DHCP server keeps track of the hostname and IP address, and as DHCP servers generally act as local name servers too, any other computer can resolve the hostname to an IP address.

You have three choices. The first is to set the IP address by DHCP. I understand you may want a fixed address for this box, but many DHCP servers have an option to map specific MAC addresses or hostnames to given IP addresses, so you get an effectively static address while still working within the DHCP framework. You set the hostname to be sent to the DHCP server in the network configuration window.

Your second option is to record the IP address and hostname in the hosts file of every other computer on the network. The file is /etc/hosts on Linux systems and C:\Windows\System32\drivers\etc\hosts on (yes, you guessed it) Windows. The format of the file is one line per IP address containing the address, the full hostname and then any aliases, all separated by white space, like this:
 192.168.1.27 webserver.example.com webserver
 192.168.1.43 mail.example.com mail ftp.example.com ftp
The third option is to run your own local DNS server. This is nowhere near as complicated as it sounds, as long as you don't try to set up a full-blown Internet DNS server like BIND. Dnsmasq, which I use on my home network, is available from www.thekelleys.org.uk/dnsmasq and is very easy to set up. Its default setup is to use the /etc/hosts file from the computer running it to provide local DNS resolution and pass other requests to the name servers listed in /etc/resolv.conf. Just install it on one computer on your network (not the one in the DMZ) and set all the others to use that computer as their DNS server.
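As a sketch of how little configuration is needed, a minimal /etc/dnsmasq.conf might contain only a couple of options (both are documented in dnsmasq's man page and example configuration file):

```
# /etc/dnsmasq.conf - minimal sketch
domain-needed    # never forward plain, unqualified names upstream
bogus-priv       # never forward reverse lookups for private IP ranges

# Local names come from this machine's /etc/hosts; everything else is
# forwarded to the servers in /etc/resolv.conf - both are the defaults.
```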

Dnsmasq can work as a DHCP server too (SmoothWall Express uses it) and it is a lot more configurable than the built-in DHCP servers of most routers. If you have a computer that is always on, using it to provide DNS and DHCP for your network makes life easier; just make sure you disable the DHCP server in the router to avoid conflicts. The dnsmasq configuration file is well documented (see the man page and the comments in the file) and has sensible defaults. It may need no configuration at all for you, but if it does, the options are explained clearly. NB


Locked out

My main hard disk is partitioned as a dual boot system: Windows XP on one partition and Ubuntu (installed from LXFS07) on the other. I then added a second hard disk, but that experiment failed for me. I can no longer use sudo. When I try, I get a message saying that my user name is not in the sudoers file and my attempt to use it will be reported to Big Brother. In addition, the items on the Ubuntu desktop System menus whose use requires root privileges (like Disk Manager and Apt-get) have disappeared.

I can get root privileges by booting into Recovery Mode, but I do not know whether the problem can be overcome by editing /etc/sudoers with visudo. The explanation in Ubuntu Unleashed (p294-5) looks risky to me; I am getting too old for this sort of excitement.

Perhaps I should reinstall Ubuntu, but that is a blunt instrument; it would be better to solve the problem with a few deft keystrokes. But what does a new installation do? Does it overwrite existing files and destroy the various configuration files? If it inherits the old config files I will be no further forward.


A new installation will overwrite your system configuration files. Not only the ones you have inadvertently broken, but also the ones that are working perfectly, so it should be a last resort.

The problem appears to be that your user is no longer a member of the admin group, probably due to incautious use of usermod, or that the admin group no longer has sudo rights. To check the latter, make sure that /etc/sudoers contains
  %admin ALL=(ALL) ALL
and add it with visudo if not. To see whether you are in the admin group, run id or groups in a terminal, as your normal user. If admin is not listed, you need to boot in Recovery Mode and add yourself to the admin group with
  gpasswd -a username admin
Unlike usermod, gpasswd adds groups to your user without affecting your existing groups. To make your FAT partitions available when you boot, you will need to add a line to /etc/fstab for each one.
 /dev/hdb1 /mnt/shared1 vfat uid=john,gid=john,umask=022 0 0
The first three fields should be clear, the device, mount point and filesystem type. The options field is the key: uid and gid set the user and group that will "own" the mounted filesystem, umask sets the permissions (remember that FAT has no concept of owners or permissions). The umask is subtracted from 666 for files and 777 for directories, so this sets files to 644 and directories to 755 (writable by you, readable by everyone).
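The arithmetic can be checked at the shell; %o makes printf display the results in octal:

```shell
# Mode = base permissions minus the umask (all values are octal):
printf 'files: %o\n' $(( 0666 - 0022 ))
# → files: 644
printf 'dirs: %o\n'  $(( 0777 - 0022 ))
# → dirs: 755
```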

This assumes that your username and group are the same, which is how Ubuntu usually sets things up. If you changed your primary group when fiddling with usermod and friends (id shows your primary group) change it back with
 usermod --gid groupname username
substituting the appropriate values for the user and group names. Incidentally, the Big Brother reference only means that the attempt to use sudo will be recorded in the system log file for the administrator to read. You won't be forced to spend three months in a camera-infested house with a dozen strangers who become even more strange by the day, being gawped at on live TV. PH


Get smart

Would you be able to suggest a couple of programs that read the S.M.A.R.T. signals from hard drives? Keeping an eye on hard drive health is a good thing. I run FreeBSD, so the ability to run natively would be nice, but as you know FreeBSD is able to run most Linux programs directly.


The programs that you are looking for are in the smartmontools suite, and the instructions provided here for their operation apply to Linux as well as FreeBSD.

The source is available from http://smartmontools.sourceforge.net and compiles on FreeBSD as well as Linux. The two programs are smartctl and smartd. Smartctl will run a number of tests on your drives, for example
 smartctl --all /dev/hda
will show all S.M.A.R.T. information on the first drive (these tools are run with a drive name, not a partition name). Smartd is a daemon that will monitor your drives, report any problems to the system log and mail you if you give it your address. S.M.A.R.T., or Self-Monitoring, Analysis and Reporting Technology to give it its full name, can detect problems leading to failures and even report changes in temperature. The latter can cause excessive entries in the syslog as smartd reports every temperature fluctuation; in the first few days of operation, I found this filled my daily Logwatch mails with a lot of noise. Adding a suitable line to /etc/smartd.conf fixed this irritation, such as
 /dev/sda -d sat -a -I 194 -I 231 -I 9 -W 5 -m me@mydomain.com
The -I 194 -I 231 -I 9 -W 5 tells it to report only changes of five degrees and the minimum and maximum temperatures for the day. If changes of five degrees happen often, you have a potential problem, so this is a useful setting. The first part of the line, -d sat -a, specifies that this is a SATA drive and to run all tests. NB


Buggy BIOS

I bought what I thought was the greatest value-for-money PC, and I still think it is; one thing bothers me though. When I start up the PC and select Ubuntu from Grub, a message is printed telling me that there is an MP-BIOS 8254 bug and some timer is not connected. Also, almost all the DVDs from your magazine fail to start up without a noapic option on the boot command line. Booting LXFDVD95 shows the text

 MP-BIOS bug: 8254 timer not connected to IO_APIC
 Kernel panic - not syncing: IO_APIC + timer doesn't work!
I searched on Google and in Ubuntu's help, but all I could find was a bare-bones description saying that my timer doesn't work. I am guessing it has something to do with my Nvidia 7300LE (I knew that cheap things turn out to be expensive in the end), but what exactly? Another fact that could probably help: everything I tried in 3D is very fragile and buggy on my machine. Do I need to buy a better video card?!


This is not caused by your video card, but may well be the cause of your video problems. The APIC (Advanced Programmable Interrupt Controller) handles timing and interrupts for various components on your motherboard, including disk controllers and video card slots. It is common for computers to have APIC implementations that break the specifications; many manufacturers consider "it works with Windows" to be an acceptable alternative to following the standards. You have already discovered that you need to append the keyword noapic to the boot parameters with live CDs, but you also need to do this when booting from your hard disk.

Before you do that, check the manufacturer's website for a BIOS update; it may be that this has already been fixed in a later firmware release. If not, you need to alter the boot menu to always use noapic when booting. Ubuntu doesn't include a configuration program for the boot process, so you will have to edit the configuration file manually. Press Alt+F2 and type
 sudo gedit /boot/grub/menu.lst
This will open the boot menu configuration file in an editor. Most of the lines start with a #; these are comments that you can ignore. Go to the first line starting with title; this is the first option on the boot menu. You need to change the first kernel line below this: add noapic to the end of that line, making sure there is a space between the previous last word and noapic, and save the file. When you reboot, the BIOS error message should not appear and your 3D graphics should be more stable. You may notice other improvements, because buggy APIC firmware can cause all sorts of problems, from poor disk drive performance to clocks running at the wrong speed. NB
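The edit amounts to a one-word change; the kernel version, partition and UUID in this menu.lst sketch are illustrative placeholders, not values from your system:

```
title  Ubuntu, kernel 2.6.20-16-generic
root   (hd0,1)
# Before:
#kernel /boot/vmlinuz-2.6.20-16-generic root=UUID=xxxx ro quiet splash
# After, with noapic appended:
kernel /boot/vmlinuz-2.6.20-16-generic root=UUID=xxxx ro quiet splash noapic
initrd /boot/initrd.img-2.6.20-16-generic
```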


That syncing feeling

I would like to switch to Linux, but I fear that syncing my PDA with Microsoft Outlook may not work. Also, I have Money for PPC on my PDA and that is important to me; does GnuCash provide a similar feature?


SynCE (www.synce.org/index.php/SynCE-Wiki) is a framework that allows Linux software to synchronise with Windows Pocket PC devices. This works, with varying degrees of user-friendliness and success, with various programs. One of the easiest to sync with is the KDE PIM suite of KMail, Kontact, KAddressBook and KOrganizer. To do this you need the synce-kde package, which comes with most distros, although not all of them install it by default, so run your package manager and install it if it is not already marked as installed. Then you will be able to sync mail and contacts.

Of course, this means you will need to be running a system based on the KDE desktop, such as Mandriva, Kubuntu, PCLinuxOS or SUSE. Syncing your financial records is another matter. GnuCash is able to import standard QIF accounts files, but not export them. However, KMyMoney (http://kmymoney2.sourceforge.net) does offer QIF import and export, so you should be able to import files from your PDA and then transfer them back after making modifications. Unless you have some formal bookkeeping training or accountancy experience, you'll probably find KMyMoney easier to learn than GnuCash. KMyMoney is also a KDE program and should be available with any of the previously mentioned distros. NV


URL rewriting

I am writing a website that uses .png images throughout; most of these images use transparency. It works great on all the latest browsers but (as expected) not IE6. To compensate for this I've created .gif versions of each of the graphics (as well as a custom style sheet), which should load in place of the .png images if the user is still on IE6.

To achieve this I want to use mod_rewrite and .htaccess to make it transparent, so that images/png/image1.png is rewritten as images/gif/image1.gif. This is my .htaccess file:

 RewriteEngine On
 RewriteCond %{HTTP_USER_AGENT} "MSIE 6"
 RewriteRule /images/png/([A-Za-z0-9])+\.png$ /images/gif/$1+\.gif
 RewriteCond %{HTTP_USER_AGENT} "MSIE 6"
 RewriteRule css/style.css css/iestyle.css
The CSS rewrite works perfectly but the image replacement (png to gif) doesn't.

OnlyTheTony, from the forums


You have the right idea in using mod_rewrite to change the URLs. It is falling over because you are using a + to join strings, but mod_rewrite works with regular expressions, where + is a pattern matching character, not an operator.

With regular expressions, you don't need to join strings, instead you use parentheses to mark the parts you want unchanged and $1, $2... to include them in the destination, as you have done, and everything is either text or regular expression characters. So to replace the last occurrence of foo in a string with bar, you would use
 /(.*)foo(.*)/$1bar$2/
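The same greedy capture-and-reuse idea can be tried at the shell with sed, which uses \1 and \2 in place of $1 and $2 (the sample strings here are made up):

```shell
# Greedy .* means the first group swallows everything up to the LAST foo:
echo 'foofoo.png' | sed -E 's/(.*)foo(.*)/\1bar\2/'
# → foobar.png

# The equivalent of the corrected RewriteRule:
echo 'images/png/image1.png' | sed -E 's|images/png/(.*)\.png$|images/gif/\1.gif|'
# → images/gif/image1.gif
```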
In your case, you want to change anything starting with images/png and ending in .png, replacing both occurrences of png with gif. You can do this by replacing your first RewriteRule line with one of
 RewriteRule /images/png/(.*)\.png$ /images/gif/$1\.gif
 RewriteRule /(.*)/png/(.*)\.png$ /$1/gif/$2\.gif
The first is easier to read, but the second will work with images in other directories too. NB


NTFS running repairs

I have an external hard drive, NTFS-formatted. I need to defragment it but I don't want to lose the data on it. Can you defragment NTFS on Linux? I run Ubuntu Feisty Fawn on an old PC2800 computer.

churst1, from the forums


The short answer is no, not really. Why is the drive using NTFS in the first place? If it contains a bootable Windows installation, any attempt you make to defragment it in Linux will most likely render it unbootable in Windows. But if it does contain Windows, why not use that to defragment the drive? Windows can be useful for more than playing games. If the drive is used purely for data, then you can greatly reduce fragmentation by copying all the data off, reformatting the drive and copying the data back. This requires an NTFS filesystem driver with full write support: either the commercial Paragon NTFS for Linux application that we reviewed last month, or NTFS-3G, which is included in the Ubuntu repositories. You'll also need the ntfsprogs package, so fire up Synaptic and install both of those.

Now you can do the whole job by opening a terminal, changing to a directory with enough space to hold the contents of the NTFS drive and running
 tar cf ntfs.tar /mnt/ntfs && umount /mnt/ntfs && mkntfs /dev/sda1 && mount /dev/sda1 /mnt/ntfs -t ntfs-3g && tar xf ntfs.tar -C /mnt/ntfs
This is all on one line. The two tar commands and mkntfs take a while, so chaining the commands together like this means you don't need to babysit the machine, yet each command will only be executed if the previous one was successful (you don't want to reformat the drive if copying the data failed). This example assumes your drive is at /dev/sda1 and mounted on /mnt/ntfs. Make sure you change these to the correct paths for your machine before you run it.

If you are short of space to save the contents, you can create a compressed archive, but this will take longer, particularly when copying from the drive. Do this with
 tar czf ntfs.tar.gz /mnt/ntfs && umount /mnt/ntfs && mkntfs /dev/sda1 && mount /dev/sda1 /mnt/ntfs -t ntfs-3g && tar xzf ntfs.tar.gz -C /mnt/ntfs
Again, all on one line. If you are using NTFS so the drive is readable in Windows (why else would you use it?) and you will only use it with your own Windows computers, a better solution would be to format the disk as ext2 and install the ext2 driver from www.fs-driver.org on your Windows computer(s). Then you no longer have to worry about disk fragmentation and you will get better disk performance in Linux. The above commands will do this if you replace mkntfs with mke2fs and remove
 -t ntfs-3g
from the mount command. NB


Lightweight distro needed

I am looking for an OS suitable for an AMD K6/200. I thought NetBSD might be a good choice, but it turns out the basic install results in an OS that is command line only; XFree86 (not Xorg) needs to be set up separately. I'm disabled and the extra effort is a problem for me. Is there an `easy' version, in the way that PC-BSD and DesktopBSD are easy versions of FreeBSD?

I had tried DSL on a P2/400 machine and didn't care for it, but I just discovered DSL-N. This has a real word processor! How much net performance do you end up gaining if you install Gnome or KDE on either NetBSD or DSL-N? Fedora Core, with Gnome, runs infuriatingly slow on the P2/400.

Gary Prichard


A K6/200 is slow by current standards, so you'll need a lightweight distro to get reasonable performance. Most importantly, you will need a lightweight window manager, which definitely excludes Gnome and KDE. Something using Fluxbox, Xfce or IceWM would be far more suitable. As you need word processing, Xfce may be a good choice, as it uses the GTK toolkit, as does AbiWord. With limited resources, choosing a set of applications that use the same toolkit and libraries will help your system run more efficiently.

Speaking of resources, one of the best improvements for any Linux system that needs more speed is more memory. Spending a few pounds/dollars/euros/pieces-of-eight on extra RAM generally gives a greater improvement than spending a similar amount on a faster processor.

There are a number of distros designed for lightweight systems; you have already discovered DSL and DSL-N, but you could also consider Puppy Linux, from www.puppylinux.org. DSL is limited by the stipulation that its ISO image should never exceed 50MB, while Puppy Linux is nearly twice that size. This means it includes a lot more, such as the AbiWord word processor and accompanying office software, SeaMonkey (the new name for the Mozilla suite) for web and mail, and plenty more. The main drawback of Puppy is that the hard disk installation process is rather convoluted, as it is mainly designed as a live CD system. You could also run it from the CD, using your hard disk only for storage of data and settings.

Another alternative, although a little heavier, is Zenwalk (www.zenwalk.org). If you have the amount of RAM that was typically used on 200MHz machines when new, you will definitely need more to use Zenwalk, but it will give you more features than smaller distros.

Running any OS on a K6/200 is going to be a compromise between features and performance, but it is definitely possible: doubly so if you add some extra RAM. NB


A job for Ubuntu

When I try to run or install Ubuntu, I get the following message after the splash screen comes up: 'unable to access tty, job control turned off' and am returned to a terminal prompt. Ubuntu apparently is trying to access my floppy drive for some reason because the floppy drive turns on until I get the error message.

David Lawson


It appears that this error is caused by the kernel being unable to find your boot drive; the floppy drive light comes on because the kernel is trying every device listed in the BIOS. As there are a couple of reported causes of this problem, there's more than one possible solution.

One is to boot from the install disc and edit the fstab of your installed system. If your root partition is on /dev/sda1, the commands you need are
 sudo -i
 mount /dev/sda1 /mnt
 gedit /mnt/etc/fstab
You should see the line that mounts your root partition in fstab; it will look something like
 # /dev/sda1
 UUID=71f72f22-0a14-45b7-9057-f7b0bd9d819c / ext3 defaults....
The UUID (Universally Unique IDentifier) should enable Ubuntu to find the root partition, even if your device nodes change (such as adding another disk), but it can cause problems here. Change the UUID=xyz string back to the device node and your system should boot again. The fstab line should now look like:
 # /dev/sda1
 /dev/sda1 / ext3 defaults....
The other solution is more extreme, so only try it if the fstab trick fails. You need to open your computer and disconnect any extra hard and CD/DVD drives, leaving only your boot drive and the drive holding the DVD from which you installed (turn off the computer first!). Disconnect the floppy drive too; removing the power cables from the unneeded devices should be sufficient. Your system should now boot. Then add the piix module to the ramdisk image that Ubuntu loads when it boots, with these terminal commands
 sudo echo piix >>/etc/initramfs-tools/modules
 sudo update-initramfs -u
You should now be able to shut down, reconnect the devices and start up. This bug appears to affect a small number of Ubuntu users, and only those with multiple drives fitted. It has also been reported that when the problem is caused by a floppy drive, it can be circumvented by leaving a disk in the drive, but we were unable to verify this and it sounds like a kludge anyway. NV


Backup services

In reaction to LXF94's Online backup Roundup, I would like to present a small but hopefully solvable problem. I use IBackup, because I can have backups from my own PC (Ubuntu) and from my wife's Windows PC. She can manage her backups without any intervention from me.

The problem is that often during the backup on my PC, which is performed by cron, the connection drops. When that happens the stunnel I created collapses, which is devastating for the backup, and I end up with a backup that was only partially copied to the IBackup server. Is there a way to recover from such a disconnect, or even to actively reconnect, without losing what you are doing?

The IBackup server does not allow setting the time and datestamps of the copied files, causing the files all to have the time and date of copying. For that reason I copy tarred files and lose the rsync ability.

This might be incentive enough to switch to Rsync.net; however, I will have to copy the files from my wife's PC too. With IBackup she has her own connection and URL.

Guus


If you are using rsync, restarting the backup should be no trouble, because rsync will simply pick up where it left off. The server may be using the time of copying as the timestamp because of your rsync options. You need to call rsync with the --times option to preserve timestamps. The --archive option combines several options useful for backups, including --times. This should remove the need to copy tar archives to the server, and therefore mean that you are copying individual files in the same form that they exist on your original machine, which makes restarting a backup easier.

I tried Rsync.net after reading the article (I used Strongspace at the time) and switched to them completely. Backing up multiple machines is easy as you can do more or less what you want with the available space, so you can create a directory for each machine's backup. Rsync.net uses SSH for rsync transfers, so there's no need for stunnel, and you can use Duplicity to encrypt the data for storage.

An alternative approach is to back up everything to a local disk, then sync that to the remote server. This has the advantage that your first line of backups is local, making restoration faster, but it does mean that the backup computer has to be switched on whenever any computer needs to make a backup.


New disk, old problem

I found that I needed a bigger hard disk, so I plugged one in as hdb, partitioned it as I wanted, copied file systems from the old drive (hda), and tried to boot from the new one. Unfortunately, this operation turned out to be unsuccessful.

I made and copied partitions for /, /boot, /usr, /home, among others. I also made a swap partition. /boot is primary partition 1, marked bootable. I wrote an mbr record, using lilo -M /dev/hdb1. I mounted the new /boot and / partitions, edited the new copy of /etc/lilo.conf (now in /mnt/hdb5), and ran lilo -C /mnt/hdb5/etc/lilo.conf -b /dev/hdb1, which appeared to work.

When I try to boot from the new drive, I get through Lilo's boot choice screen, and a fair amount of other stuff, ending with:

 initrd finished
 Freeing unused kernel memory
 Warning: Unable to open an input console
After that, only the reset button on the box will make anything happen. This is "Mandrakelinux release 10.2 (Limited Edition 2005) for i586"

Rodney M Bates


This is not a problem with the bootloader. Once the kernel has loaded, the bootloader's job is done. This error looks like a missing file from /dev, probably /dev/console. Although the dynamic /dev filesystems, like udev and its predecessor devfs, create your device nodes in /dev automatically, there are some that are needed before devfs/udev start up. I suspect that you omitted the contents of /dev when making a copy of your root partition, either by not including it in the copy command, or by excluding all other filesystems when copying (you didn't mention how you copied the filesystems, but cp, rsync and tar all have options to exclude other filesystems).

The contents of your original /dev directory are now hidden because a new, dynamic /dev has been mounted on top of them, but as you will see, they are still accessible. Running
  mkdir /mnt/tmp
  mount --bind / /mnt/tmp
will make your whole root filesystem available from /mnt/tmp, without any of the other filesystems that are mounted at various points. So /mnt/tmp/home will be empty while /mnt/tmp/dev will contain a few device files. Copy these to dev on your new root partition and your boot error should disappear. The easiest way to ensure your new root filesystem contains exactly the same files as your current one is
 rsync -a --delete /mnt/tmp/ /mnt/newroot/
PH


DVB to DVD

I've got my DVB-T stick working but my wife still won't look at a computer screen; is there some way I can convert files saved from the stream into something that can be played on our DVD player through the television?

towy71, From the LXF forums


DVB and DVDs use two variants of the MPEG2 video format: DVB uses MPEG2-TS while DVDs use MPEG2-PS, Transport Stream and Program Stream respectively. The main difference is that Transport Stream is designed for use over an unreliable connection, like radio transmission, so it has more redundancy and error correction, resulting in files that are around 30% larger. Converting from MPEG2-TS to MPEG2-PS is simple and fast because it only involves the error correction data; the video itself doesn't need to be re-encoded.

There are a number of programs you can use to turn a DVB MPEG into a DVD. One of the simplest, albeit rather slow, is tovid (http://tovid.wikia.com); the todisc command in this package takes a list of video files in almost any format and converts them to a DVD ISO image. If you want a GUI for this, a couple of programs that you may find useful are dvdstyler (www.dvdstyler.de) and qdvdauthor (http://qdvdauthor.sourceforge.net). However, if you only want to create a DVD from a single MPEG2 file, these are overkill, when a shell script will do the job more quickly:
 #!/bin/sh
 mplayer -dumpfile title.audio -dumpaudio "$1"
 mplayer -dumpfile title.video -dumpvideo "$1"
 mplex -f 8 -o title.mpg title.{audio,video}
 dvdauthor -x title.xml
 mkisofs -dvd-video -o title.iso dvd
Where title.xml contains:
 <dvdauthor dest="dvd">
 <vmgm /><titleset><titles>
 <pgc><vob file="title.mpg" /></pgc>
 </titles></titleset>
 </dvdauthor>
This separates the audio and video stream, then recombines them with the data necessary for DVD authoring, but without the DVB extras, before creating a DVD file structure and writing that to an ISO image. Before writing the ISO image to a DVD, you can test it with:
 mplayer -dvd-device title.iso dvd://1
You will need mplayer, mjpegtools and dvdauthor installed to do this, all of which will be in your distro's repositories; most are probably already installed. Alternatively, if you use MythTV to record and watch the programmes, install the mytharchive plugin, which does DVD exports. It can combine several programmes onto a single disc, re-encoding if necessary to fit more on each one (that takes a lot longer, but it's worth it if you are going to do this regularly and don't want to become overwhelmed with lots of discs), and offers a choice of menu styles and layouts. This is what I use most of the time. NB


Raid AID

We've set up an Apache Tomcat server with two 500 GB drives using software RAID 1. I made a few changes to some files, restarted the server to test them and found the changes I had made to the files were gone. Some files I had deleted had also reappeared. I checked my mail and had received errors from mdadm.

 A DegradedArray event had been detected on md
 device /dev/md0.
 The /proc/mdstat file currently contains the
 following:
 Personalities : [raid1]
 md1 : active raid1 sda2[0] sdb2[1]
      1959808 blocks [2/2] [UU]
 md0 : active raid1 sda1[0]
      486424000 blocks [2/1] [U_]
 unused devices: <none>
I'm making a backup of all the important information, but if possible I'd like to salvage the server, since the setup was very specific and time consuming. I'm new to the world of Linux administration, and unsure where to start.

Henry Angeles


The contents of /proc/mdstat indicate that a drive has failed on the md0 array (/dev/sdb1?). Your machine will continue to function with a degraded array, but with slightly reduced performance and no safeguard against another disk failure. There are a number of tools available to test the disk, but the safest option is to replace it and rebuild your arrays. This will also mean replacing /dev/sdb2 of course, so the other array will have to be rebuilt too. Fortunately, this is a simple task and largely automatic, but it can take a while. You can also continue to use the computer after replacing the faulty disk while the arrays are being rebuilt, but this will result in noticeably reduced disk performance.

It is easiest if you can add the new disk before removing the old one as this means you can rebuild md0 first, then switch md1 to the new disk at your convenience. Assuming your new disk is added as /dev/sdc, connect it up and reboot. Then partition the disk as you did for sda and sdb, setting the partition types to Linux Raid Autodetect. Now run these commands as root, to remove the faulty disk from the array and add the new one:
 mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
 mdadm /dev/md0 --add /dev/sdc1
When the new disk is added to the array, the RAID driver will synchronise it with the existing disk. This can take a while; monitor the contents of /proc/mdstat to follow the progress. When the process is complete you'll have both your arrays working correctly, but using three disks, one of suspect reliability, so repeat the above commands for md1, sdb2 and sdc2 to transfer the other array to the new disk. Now you can power down and remove the faulty disk whenever it suits you, as it is no longer in use.
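Spelled out, with the same device names as above (sdc is still an assumption; check what your new disk is really called), the commands for the second array would be:

```shell
# Move md1 off the suspect disk in the same way as md0
mdadm /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2
mdadm /dev/md1 --add /dev/sdc2
# Follow the resync progress; press Ctrl+C to stop watching
watch cat /proc/mdstat
```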

Needless to say, as with any critical disk operation, you should ensure your data is backed up before you do any of this. You can check the old disk with smartmontools (http://smartmontools.sourceforge.net), which is probably available in your distro's repositories, or check the manufacturer's website: most of them provide a diagnostic tool that runs from a bootable floppy disk, which you will need if the disk is to be returned under warranty. If the computer has no floppy drive, most of the diagnostic programs can be run from the Ultimate Boot CD (www.ultimatebootcd.com). NB


Starting over

A few months ago, having installed Mandriva on my system, I replaced it with SUSE 10.2 from your LXFDVD91. I have two internal hard disks both split into two partitions. Windows 2000 shows them as C and D on the 0 disk and F and G on the 1 disk. SUSE is installed on drive F. I also have an external disk which is drive J. When I originally installed Linux, I unfortunately had drive J switched on and unless this is switched on at start-up, cursoring down the available items on the start menu is not possible.

What I wish to do is uninstall Linux from the system completely and start again with another, larger, secondary hard disk. However, nowhere in your tutorial pages or the Help tabs in Linux software does there appear to be any means whereby I can go back to my simple Windows 2000 and HD 0. I suspect that if I wipe the Linux software from HD 1 (drive F in Windows), which I am tempted to do at the moment, there will be no menu appearing at switch on, so I am completely baffled.

I'm hoping that you could please tell me how to uninstall Linux completely, so that I am able to go back to where I was before installing Mandriva. Given the choice at switch on between Win2000 and XP, and not having to have the external drive switched on always I would be very grateful to be able to sleep soundly again.

John Beaven


The boot menu would probably start to work if you left it a while, it would appear that the Grub bootloader is trying to read the missing disk and should time out eventually. You do not need to reinstall to fix this, only edit the bootloader configuration. You should be able to do this from YaST. Boot with the external drive connected then unmount and disconnect or power off the drive. Run YaST, go to System > Boot Loader > Boot Loader Installation and select Propose New Configuration from the popup menu at the bottom right of the window. This should scan your disks (which no longer include the external drive) and set up a new menu for your Windows and SUSE installations. Go into the Section Management tab to make sure everything is as you wish and then click Finish.

If you really want to remove Linux from these disks, select Restore MBR of Hard Disk from the same popup menu, which will replace the bootloader code with whatever was there before you installed SUSE. If this was Windows, fine; but if you went straight from Mandriva to SUSE, this will restore the Mandriva boot code, which you don't want. In this case, you should boot your Windows CD in rescue mode and run fixmbr, which will wipe all Linux bootloader code and replace it with the Windows bootloader.

Alternatively, you could simply replace the secondary disk without doing any of the above (that would probably break booting from the hard disk anyway), then boot straight from the SUSE install disc, install it and let it set up a new boot menu for you, making sure you leave the external drive disconnected this time. SUSE, like all current Linux distros, is quite capable of detecting the external drive when you connect it after installing the operating system. NB


Safe surfing

I've been toying with getting into Linux for a couple of months now. I tried downloading a distro, but struggled with the amount of technical jargon involved. I've loaded Ubuntu 7.04 and I love it. I'm still struggling to get my head around the fact that it is free and so is a load of other software that came with it, but I'm sure I'll get used to this.

As I'm new to this, I need to double-check that what I am doing is safe and I'm not opening my PC up to external hackers. Are there steps that I should be taking to put in a firewall and virus checking software?

I've installed Ubuntu 7.04 as a dual boot with Windows XP Home edition. On XP I have F-Secure 2007 combined firewall and virus checker. I connect to the internet using an external modem-router via an ethernet cable.

Steve Hall


Viruses are not a real problem with Linux, although it is good to be prepared. The most popular anti-virus program for Linux is ClamAV (www.clamav.net), which is included with Ubuntu and can be installed with the Synaptic package manager. ClamAV detects Windows viruses as well as any targeting Linux, which, combined with the plugins available for most mail programs, means you can also use it to make sure no nasty attachments reach your Windows setup via your Linux mailer.

Firewalling is handled differently on Linux to Windows. The lack of spyware, and the virtual impossibility of embedding it in open source software, means that it concentrates on keeping out intruders. The Linux netfilter software is built into the kernel, so the various firewall programs you see provide more or less easy ways of setting up, testing and applying the filtering rules. There are several packages in the Ubuntu repositories that are well worth looking at, including: Firewall Builder (www.fwbuilder.org), Guarddog (www.simonzone.com/software/guarddog) and Shoreline Firewall (www.shorewall.net). The first is a GTK program that fits in well with the default GNOME desktop, while Guarddog is a KDE program. They offer similar features but with a different approach. Shoreline Firewall is a script-based program that is definitely harder to set up the first time but provides more flexibility. Any of these are capable of protecting your system, so try them and see which you like best.
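All of these front ends ultimately generate netfilter rules of the kind you could also load by hand with iptables. As a rough sketch (not a complete firewall, and it must be run as root), the keep-out-intruders policy they build looks something like:

```shell
# Allow anything on the loopback interface
iptables -A INPUT -i lo -j ACCEPT
# Allow replies to connections this machine initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Drop everything else arriving from outside
iptables -P INPUT DROP
```

The GUI tools save you from writing and maintaining rules like these yourself, which is why they're worth using even though the underlying mechanism is the same.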

You should also reduce the chances of intruders even reaching your firewall. Your router is the first line of defence, so turn off any port forwarding services you do not need. You should also disable any unnecessary services in Ubuntu's System > Services window, although be careful about what you disable here: some services are needed for normal operation of the computer. If unsure, turn off services individually and keep track of what you have done, so you can turn them back on if you experience problems.

Although Linux is inherently more secure than Windows, this should not be relied on: Linux programs can have security holes too. These are usually fixed promptly, so keep your system up to date. The four steps of blocking at the router, disabling unnecessary services, running a firewall and keeping your software updated will mean you can use the internet with confidence. NB


A swap for Vista

I was messing about with trying to create a swap partition on an old flash MP3 player, and I accidentally made a swap file on my Vista partition. I have not switched the swap file on, and used the mkswap command to make the swap file. My Vista still works, although I have to boot in through the recovery partition, so I am guessing it has only affected the start of the drive. Is there any way to reverse the mkswap command to enable me to fix the Vista partition?

I have checked in GParted, and it reports the drive as a swap drive. Fdisk shows the partition as NTFS, which it should be, but there's no * under the boot heading. Does that mean that if I can restore the Vista boot info to the disk, it should work?

celticbhoy, From the forums


If Vista still works, the partition must be OK. It looks like you have changed the partition type setting, probably to Linux Swap, and cleared the boot flag. This means that Windows cannot recognise the partition and that the bootloader does not think it can boot from here. Use whatever partition editor you prefer to set the partition type to NTFS (07) and set the bootable flag for it. I find cfdisk easy for this, and it is on just about every live CD I have ever tried. Boot from a live disc, open a root terminal and run cfdisk with:
 cfdisk /dev/hda
When you've done this, select the partition, press t to set the type and choose NTFS from the list of alternatives, then press b to make it bootable. Finally press W (that is a capital W) to write the changes to the disk. You can also do this with a graphical editor like gparted or qtparted, but I find cfdisk faster for this. You don't even need to wait for a desktop to load if your favourite live disc already has an option to boot straight to a shell prompt (Knoppix users, for instance, can type knoppix 2 at the boot prompt). NV


Can USB Samba?

I have set up a small server running Debian Etch, mainly to use as a fileserver but also eventually for some web-based stuff. I have a USB hard drive that I want to use as shared storage via Samba. My problem is that no matter what I do the drive is always mounting as root. If I set the mount point permissions to 777, user=guest and group=users and then mount it as a normal user, the permissions stay the same but user and group both revert to root. So I still can't write to the drive. If I mount as user root I have no problems accessing locally but in either situation Samba then won't let me write either.

Someone suggested this was maybe a udev issue and that I needed to play with that so that the permissions are altered when it mounts. I'm not up on udev so don't know where to start. The drive is sda with partitions sda1 and sda2.

Andy


Udev only handles creation of the device node (/dev/sda1 or whatever), not the mounting, so this is unlikely to be at fault. It is possible that udev is creating the node with restrictive permissions, but this would only stop users mounting the device (not root); it wouldn't affect the mounted filesystem.

The user mount option doesn't take the name of a user; it simply allows any user to mount that filesystem, and nor does it affect the permissions of the filesystem. The solution to your problem depends on the type of filesystem you are using. If this is a Linux-type filesystem that supports user permissions, setting the ownership and permissions of the mount point should suffice, but you have to do this after the filesystem has been mounted, otherwise you only affect the mount point, not the mounted filesystem.

In Windows filesystems, particularly FAT32, you can add the option umask=002 to /etc/fstab to make all files user and group readable and writeable. Then use the uid and gid options to set ownership of all files in the filesystem. You can use numeric values here or user and group names, eg:
 /dev/sda1 /mnt/somewhere vfat umask=002,uid=guest,gid=users 0 0
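The uid and gid options also accept numeric IDs; the id command will show you the numbers for any account, for example:

```shell
# Numeric uid and gid for the current user; add a username argument
# (such as guest) to look up another account instead
id -u
id -g
```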
NB


Remote printing

I'm a teacher in a school and when I started to take care of the computers in the teachers' room they all ran Windows. Now I'm preparing to install Ubuntu on one of them, but the job is difficult because of one 'minor' detail: the printer!

They have a PC running Windows Server, connected to a switch. The server is running exclusively to serve the printer. I'm running Ubuntu Feisty Fawn and I can't print. Ubuntu detects the printer, a Samsung CLP-500, and I have installed the drivers, but nothing prints. Do I have to use Samba?

Eduardo Ramalhadeiro


According to the OpenPrinting database at www.linux-foundation.org/en/OpenPrinting, this printer needs the SpliX driver, available from http://splix.sourceforge.net. While this driver works well with some Samsung lasers (it's great with my mono laser), it is only reported as working 'partially' with the CLP-500. This appears to be because it is limited to 600dpi printing. Samsung also provides a Linux driver that you can download from http://short.zen.co.uk/?id=792 (the full URL is ridiculously long).

SpliX is included with the current Ubuntu, so it's just a matter of installing it via Synaptic and then picking the right driver in the printer configuration tool.

CUPS can talk to Windows printers: it uses the Samba client libraries, so you need Samba installed, but you do not have to configure it yourself. Ubuntu installs Samba by default, so there's nothing you need to do in this respect. All you need to do is install the SpliX package from Synaptic, then run New Printer in System > Administration > Printers and select the correct printer when asked. NB


gHamachi help

I'm trying to set up gHamachi using the tutorial in your June edition [LXF93]. I followed the instructions on page 86 as far as stage 3, where I clicked on Yes and entered the root password. Then a message appeared: 'TAP/TUN NOT FOUND'. I'm using Ubuntu 6.06. What could be the problem?

Bryan Mitchell


This error is caused by gHamachi being unable to find the tuncfg program, which comes from Hamachi, so you need to install Hamachi too. This is a fairly uninformative error: TUN/TAP isn't found because the program used to look for it isn't there. It seems gHamachi treats any error from running tuncfg the same, even if it fails because tuncfg is not there.

You'll need some extra tools to install Hamachi, so go into Synaptic and install the build-essential package, which provides everything you need for installing packages from outside Synaptic. Now download the Linux version of Hamachi from http://hamachi.cc. Assuming you saved it to your Desktop directory, open a terminal and type:
 cd Desktop
 tar -xf hamachi-0.9.9.9-20-lnx.tar.gz
 cd hamachi-0.9.9.9-20-lnx
 sudo make install
Now you should be able to run gHamachi and proceed with the tutorial. NV


Visiting Vista

After problems with Vista, a friend has asked me to put Linux on their PC. My PC is running Fedora Core 6, and I've set up a shared drive in Vista so I can pull off the files that my friend needs saved. But I need help getting the files off. I can access the shared drive but when I go to open up the folders to get the files, Linux comes up with a message that it can't read the folders on the Vista PC. Can you access a shared drive in Vista and pull files off it with Linux? I have no problems accessing a shared drive on XP, 2000 or 98 from Linux.

JCFreak, From the LXF forums


You can admit to owning a Vista PC yourself: we'll still try to help, so there's no need to blame it on a 'friend'... The best way to do this is to use the shell to mount the drive; then you should see clear errors when it fails. Do this as root:
 mkdir -p /mnt/windows
 mount //PCNAME/C /mnt/windows -o username=USERNAME
replacing PCNAME with the network name of the Windows computer and USERNAME with the name of the admin user on that computer. After giving the user's password, the C drive should be mounted (assuming that's the drive you're trying to share). Do not try turning off password-protected sharing in the Windows control panel; it actually makes things more difficult, not easier as you might expect. You do need to turn on Public Folder Sharing in the Network And Sharing section of the Windows control panel.

Even with these settings, you'll still be unable to enter and copy some directories. Vista has protected directories inside the user directories, such as USERNAME\PrintHood. However, you should have no difficulties copying your friend's documents and other data files now.

Because you've mounted his shared drive, you can use any file manager you like to do the copying. You haven't said whether you're trying to do this with a direct cable link or over the internet. It should work the same, apart from the speed, but bear in mind that the data won't be encrypted in transit. You may also need to open port 139 in his firewall or router to make a connection over the internet. This also allows anyone else to attempt a connection, so use a good password and close the port as soon as the job is done. If possible, take your computer to his house (or his to yours) and use a local ethernet connection.

Alternatively, you could use the Windows backup program to back up the data to a file or DVD and copy that over to your Fedora Core 6 system. Windows backup files are zip archives that can be unpacked with the Linux unzip command, which is installed on Fedora Core 6. NB


Many monitors

I'm attempting to set the proprietary Nvidia driver up for single, dual and twin view, and after much searching, I've finally managed by creating the xorg.conf files directly (as the Nvidia GUI keeps complaining about overlapping metamodes and reporting wrong refresh rates).

But though I now have the three xorg.conf files ready and working ­ one for each view that I need (dual, twin and single) ­ I can't seem to find any information on how to integrate these in a single environment where I can switch between them.

I need to be able to switch between these three types of view on the fly, ideally with a keyboard combination. As it is, I manually stop the X server, swap the xorg.conf file and restart X. I'd guess that I need to merge my three different xorg.conf files into one, but how? And how do I tie restarting the X server with an alternative view to a keyboard press (or any functionality, be it menu, file or whatever, as long as it's one-click or as near to that as possible)?

I'm using KDE on Fedora Core 6 and would appreciate some guidance on this, but please be gentle: so far I've only been on the Linux wagon for a week.

Jyde


You can combine the various portions of the separate xorg.conf files into one, providing you give them different names. The Monitor sections can just be put one after the other, but you'll need to make sure that each of your Screen sections has a different name, with a separate section for each of the layouts. Most of the other entries in xorg.conf are the same for all; things like keyboard, mouse and font settings. Then you create a separate ServerLayout section for each layout, with a different name, so you'd have something like:
 Section "ServerLayout"
  Identifier "SingleScreen"
  Screen         0 "SingleScreen" 0 0
  InputDevice "Mouse0" "CorePointer"
  InputDevice "Keyboard0" "CoreKeyboard"
 EndSection
 Section "ServerLayout"
  Identifier "TwinScreen"
  Screen         0 "TwinScreen" 0 0
  InputDevice "Mouse0" "CorePointer"
  InputDevice "Keyboard0" "CoreKeyboard"
 EndSection
The first ServerLayout is the default, or you can specify it with:
 Section "ServerFlags"
  DefaultServerLayout "SingleScreen"
 EndSection
Now X will start up in single mode by default but can be started in twin mode with:
 startx -- -layout TwinScreen
The '--' means 'end of startx options, pass anything else along to the server'. In order to bind this switch to a hotkey, you need a short shell script. Save this script somewhere in your path, say as /usr/local/bin/restartx:
 #!/bin/bash
 if [[ "$(/sbin/runlevel | cut -c3)" == "5" ]]
 then
   sudo /sbin/telinit 3
 else
   sudo killall X
 fi
 sleep 2
 startx -- -layout $1
and make it executable with chmod +x /usr/local/bin/restartx. As some of the script needs to run as root, you'll also have to edit /etc/sudoers, as root, and add this line:
 yourusername ALL = NOPASSWD: /usr/bin/killall X,/sbin/telinit 3
Now you can switch layouts with:
 nohup /usr/local/bin/restartx newlayoutname
The nohup is necessary or the script will be killed when the desktop closes. As you're using KDE, you can bind any commands you want to hotkeys in the Regional & Accessibility/Input Actions section of the Control Centre, so set up one to switch to each layout in your xorg.conf file. Finally, you'll probably want KDE to remember your open applications after switching. To do this, go to Control Centre > KDE Components > Session Manager and select Restore Manually Saved Session. This adds another option to enable you to save your session and you can get the script to do this automatically by inserting this as the second line:
 dcop ksmserver ksmserver saveCurrentSession
This is the only KDE-specific part of this exercise, and you'll find that the rest will work with any desktop. NB


Samba lockout

I'm using a Fedora Core 1 system and thought of upgrading to Core 6. Before doing this I loaded Core 6 onto a separate machine to see how it was configured off the disk. I found that sendmail was set up to deliver mail but I couldn't deliver mail to the box from outside the box. On Google I found that the distro was shipped with the ability to receive mail from external sources turned off. Why?

I also set up some shares in Samba and still have the following problem: if I set up a directory ­ say, /backup ­ with the same permissions and ownership as /var, I can connect to it from another machine and share the contents, create and update as well as remove. If I change the entry from /backup to /var then I'm not able to connect to the directory. I guess I have another pre-shipped parameter to change but which one?

What I want to do is set up the share to access /var/www/html in order to play with HTML and PHP files. All this works fine on the Core 1 system and didn't require changes. I will get to Core 6 sometime but not until I've solved these and other issues in a standalone system.

Just one other point. When I've performed upgrades from Core 1 to Core 5 or 6 the process takes hours so I thought it would be easier and quicker to do a new install and copy the relevant config files and data, but now I'm not so sure.

Tony


It looks like you've opted for security when installing Fedora Core 6. As such, it's been set up to deliver only local mail, which you were able to switch easily enough, and to prevent sensitive directories being shared. While it is possible to alter this so that /var can be shared, you really should reconsider. Blocking the sharing of /var is for a good reason: a lot of sensitive information is stored in /var and it's easy to render a system unbootable with a modicum of malice, incompetence or plain carelessness.

The question shouldn't be 'how can I share /var?' but 'do I need to share all of /var?', to which the answer is no. If you want to access /var/www/html remotely, then share only /var/www/html. In doing this, you'll avoid the potential risks associated with sharing /var/log or /var/lib but still be able to do what you want. There are also alternatives to using Samba. If both computers run Linux, you could use NFS to mount /var/www/html on the remote computer. If you're using KDE on the editing computer, you could avoid using any form of remote mounting or directory sharing by using KDE's FISH implementation. This uses SSH to communicate with the remote computer, so putting fish://hostname/var/www/html into Konqueror's (or Krusader's) location bar will load the directory's contents into a file manager window, from where you can load files into a KDE-aware editor.
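If you stick with Samba, a share restricted to the web root needs only a few lines in smb.conf. This is a sketch rather than a drop-in recipe; the share name and username here are placeholders to adapt to your own setup:

```
[webdev]
   path = /var/www/html
   valid users = tony
   writable = yes
```

Run testparm afterwards to check the syntax, then restart Samba to pick up the change.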

Going from Fedora Core 1 to Fedora Core 6 is a huge step. Many key components will have changed, so an upgrade is likely to consume far more time than the hours the package manager needs, once you factor in fixing everything that breaks afterwards. A fresh install is the best approach, but making a jump of a few years in major components is still likely to result in differences in the way things work, as you have discovered. MS


Which kernel?

I am trying to set up my system to use my Belkin USB wireless stick with ndiswrapper. The notes tell me I need a certain kernel as a minimum. I'm a new user, so can you tell me how I find this information? Also, can you give me any advice on setting up this item?

Barry Simpson


There are various GUI tools that will tell you which kernel you're running: the KDE Control Centre shows it as 'release' on the startup page, or you can use your distro's package manager to find the version of the kernel package (some distros call it 'linux'). The simplest way is to open a terminal and type one of:
 uname --kernel-release
 uname -r
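To compare the running kernel against the minimum the ndiswrapper notes ask for, a version-aware comparison with sort -V saves squinting at version strings (the 2.6.16 figure below is only an example; substitute the minimum from your driver's notes):

```shell
required=2.6.16            # example minimum from the driver's notes
current=$(uname -r)
# sort -V sorts version strings numerically; if the required version
# sorts first, the running kernel is at least that new
oldest=$(printf '%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$oldest" = "$required" ]; then
    echo "kernel $current meets the $required minimum"
else
    echo "kernel $current is older than $required"
fi
```

Note that sort's -V option needs a reasonably recent version of coreutils.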
You may not need to use ndiswrapper as some Belkin wireless devices have native support. In this case run:
 sudo lsusb
in a terminal to find out more about your device. Then search Google or your distro's forums for information on this device. You may also find details of which driver would be best for you to use at http://qbik.ch/usb/devices. If there's no native driver for your device you'll have to use ndiswrapper. The most important point to remember when doing this is to use the driver that came with the device. Manufacturers have a habit of changing the internals of devices while leaving the model number the same, so a driver for an apparently identical device may be useless. If your distro (you don't mention what you're using) has a tool for configuring wireless devices, use this rather than trying to set it up manually. Some, such as SUSE's YaST, will also set up ndiswrapper for you. MS


Remote confusion

I got into Linux many years ago after installing Red Hat 5.1 on my Amiga 4000. While managing to get to grips with it fairly well, I have never succeeded in getting a remote X session to work. I can log in via SSH and use the shell, but I really want to access my remote machine with X.

My remote machine runs MythTV on Kubuntu, and the one I want to access it from is running Gentoo. I only want to access the desktop for simple administration tasks (not viewing MythTV), so it shouldn't be impossible, but I've got so confused as to which is considered client and server or remote and host that I'm lost! I'm using AMD64 and some don't seem to like it.

Andrew Walker


Andrew, I too started using Linux on an Amiga 4000 (with Red Hat 4.5); things were nowhere near as easy back then as they are now. Remote X access is relatively straightforward, and useful with MythTV because the mythtv-setup program can run on a remote back-end but opens an X window.

The client/server thing can be confusing with X if you are used to the web model of considering the remote machine to be the server and your desktop computer the client. The X server is the program responsible for creating the X display, so it runs on the local machine. The clients are the programs running on that display. So your Gentoo desktop is the server and the programs on the MythTV box are the clients. Running an X program on a remote server over SSH is straightforward and works with the default SSH settings in Gentoo and Kubuntu.

SSH into your Kubuntu machine from your Gentoo box with the -Y option. You can then run X programs and make them open their windows on your Gentoo desktop. For example, doing
[user@gentoo]$ ssh -Y kubuntu
user@kubuntu's password:
[user@kubuntu]$ mythtv-setup
will run the mythtv-setup program from the Kubuntu box on your Gentoo desktop.

You may occasionally find that you cannot log out of the SSH session after running an X program. This can be caused by the program having started other processes that are still running; for example, KMail opens a couple of communication sockets. Run ps in another SSH session to identify these, then kill them and you will get your prompt back.
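A quick way to spot such stragglers is to list your own processes along with their controlling terminals; anything still attached to the SSH session's pseudo-terminal after you've closed your programs is a candidate for killing:

```shell
# List this user's processes: PID, controlling terminal, command name.
# The trailing = signs suppress the column headers.
ps -u "$(id -un)" -o pid=,tty=,comm=
```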

The other applications you refer to are probably desktop-sharing programs, which mirror or open an X desktop on a remote machine. These require X to be at least installed on the remote computer, and in the case of programs that mirror it, the desktop must be running. As you are using KDE, the simplest of these is KDE's own krfb and krdc. The former is a server, run on the remote computer and configured in the KDE Control Centre. The latter is run on the local box to show the other computer's desktop in a window. Both are installed by default in Kubuntu; you will need to emerge kde-base/krdc on your Gentoo system.

VNC works differently by opening a desktop screen specifically for the remote display, separate from any local desktop screen running. NB


Centos conversion

According to the CentOS website at www.centos.org, CentOS "aims to be 100% binary-compatible" with "a prominent North American enterprise Linux vendor." That got me thinking. Can you point Yum on an honest-to-goodness install of Red Hat to the CentOS repositories? I've noticed when upgrading my CentOS box that a lot of the packages still have the Red Hat name (such as patch_for_foo-RHEL-6.3.2). So it would seem that this could be a way to keep a server up to date after your Red Hat service runs out. I know it would not be the ideal way to do things, but would it work?

John K Nall


This would seem to be possible, according to reports from the CentOS forums, provided you are using equivalent versions, such as going from RHEL 5 to CentOS 5. You have the choice of either using the CentOS repositories instead of the Red Hat ones or converting your installation from Red Hat Enterprise Linux to CentOS.

Before you do anything else, you should make sure you are no longer registered with Red Hat Network. Put this in your Yum configuration (a new file under /etc/yum.repos.d/ is the usual place) to add the CentOS repositories:
 [CentOS5-base]
 name=CentOS-5-Base
 mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=os
 gpgcheck=1
 enabled=1
 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
 [CentOS5-updates]
 name=CentOS-5-Updates
 mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=updates
 gpgcheck=1
 enabled=1
 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
 [CentOS5plus]
 name=CentOS-5-Plus
 mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=centosplus
 gpgcheck=1
 enabled=1
 gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
Disable your RHEL repositories by changing the enabled=1 line to enabled=0 for each of them. Those settings have gpgcheck turned on, so each package is verified against the CentOS GPG keys before installing. You can install these keys with
 rpm --import http://isoredirect.centos.org/centos/5/os/i386/RPM-GPG-KEY-CentOS-5
If you want to switch over to CentOS completely, you need to install two small packages from the CentOS repositories, either centos-release-5-0.0.el5.centos.2.x86_64.rpm and centos-release-notes-5.0.0-2.x86_64.rpm or centos-release-5-0.0.el5.centos.2.i386.rpm and centos-release-notes-5.0.0-2.i386.rpm, depending on whether you are running a 64-bit or 32-bit system. You should also make sure that you remove any Red Hat *-release-* packages.

You may get conflict warnings from Yum because you still have the RHEL versions of most packages installed. The best long-term solution to this is to install the CentOS packages, turning your system into a pure CentOS one. As you no longer have a RHEL support subscription, there is no benefit in keeping the Red Hat-branded packages installed, and moving over to a pure CentOS system will make it easier if you need support from the CentOS community. NB


Now serving - PHP

I'm having trouble with browsing .php files on my Linux (Mandriva 2007 Free) machine. It keeps trying to open them with KWrite instead of just running them. As I'm currently trying to teach myself PHP, when I'm running an HTML file that calls a PHP process I really don't want to look at the code - I want the PHP to just, well, run.

cynicalsurprise, from the LXF forums


You don't run PHP files from a file manager. PHP is a server-side scripting language, so you need to load the PHP page from a web server into your browser. Locally, they are just text files, and your file manager will perform whatever action it is configured to do on text files; in your case, to load them into KWrite.

This means you need to run your own web server, which is nowhere near as scary as it sounds. Fire up the Mandriva Control Center, go into the software installation section, type 'mod_php' into the Search box and select apache-mod_php-5 for installation. This will also install various other packages that you need to serve PHP files. When the installation is complete, go into the System section of the Control Center and select the System Services item. Ensure that httpd (the Apache process) is set to start on boot and, if it is not running now, start it.

Point your browser at http://localhost and you should see the Apache test page, or maybe just an 'It works!' page, confirming that you now have a working web server. Now all you need to do is put your PHP files in the web server's DocumentRoot, the directory where it looks for files to serve. Mandriva defaults to using /var/www/html for this, so save the following as /var/www/html/test.php:
 <?php
 phpinfo();
 ?>
Load http://localhost/test.php into your browser and you should see some information about the server and the system running it. If so, Apache is not only installed, it is set up to serve PHP pages and you can continue learning the language. Good luck!

You may run into permissions problems when editing files as your normal user for inclusion in the DocumentRoot directory. This can be solved by adding your user to the Apache group and setting the directory to be writable by members of that group, by typing this in a root terminal:
 gpasswd -a yourusername apache
 chgrp apache /var/www/html
 chmod g+w /var/www/html
You will need to log out and back in again for this to take effect. NB


Synced storage

I have just installed a Buffalo LS-250 LinkStation [a networked storage device] on my home network (me running Kubuntu Dapper and three Windows XP machines). I have no problems at all copying files to and from my Dapper laptop and it was very easy to set up.

But! What I would like to do is to sync my laptop with the LinkStation, and I'm not sure how to do it. I've successfully set up Unison between my laptop and one of the Windows XP machines, but I don't know if this is possible with the LinkStation. I've looked at rsync, but that too seems to need a software installation on both the laptop and the LinkStation. A straightforward command line copy would do me, so that I could write a script to copy only new files each way, but rsync now seems to be the default for that.

Also, on the XP machines I can open and edit files on the LinkStation, but Samba only lets me open a copy on the Dapper laptop. Can this be changed?

Richard Moore


You actually have two Linux computers on your network, because the LinkStations run Linux too. There is an active community at http://linkstationwiki.net with plenty of information on the various LinkStation models, including your LinkStation Pro. Of most interest to you will be the replacement firmware project. FreeLink replaces the standard firmware with a Debian variant. This is more extreme than OpenLink but gives more flexibility, although you currently lose the web interface. OpenLink is based on the stock firmware but adds some software. The most interesting of these are SSH and rsync. However, the LS-LG that you have is a new model, and OpenLink did not support this at the time of writing, although that may have changed by the time you read this.

If you don't wish to mess with your firmware, there is a much simpler solution. If you mount the device using Samba you can use rsync without installing anything on the remote machine as you are effectively syncing two local directories.
rsync -avx ~/myfiles/ /mnt/buffalo/myfiles/
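Before trusting the sync, it's worth previewing it; rsync's -n (dry run) flag lists what would be transferred without copying anything (same paths as above, which are of course specific to your setup):

```shell
# Dry run: show what rsync would transfer without touching anything.
# Drop the n from -avxn to perform the copy for real.
rsync -avxn ~/myfiles/ /mnt/buffalo/myfiles/
```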
You should be able to work with files directly on the device over SMB. As you use KDE, you should try the KIO slave route first, opening a file as smb://name/path/to/file. Try to browse the files in Konqueror and open them in your editor. If this fails, it is probably down to the share permissions and Samba setup. If you run the programs from a shell, you should be able to gain more information from the error message printed there. For example:
kwrite smb://name/path/to/file
NV


Blank SUSE

I hope there is a simple answer to this simple hardware-related question. Every time that I try to load SUSE 10.2 with my new 19-inch flat-screen monitor, I get the message 'not supported'. How do I get over this? The computer works fine with an old 14-inch CRT monitor.

Pete Hollings


A hardware issue that doesn't involve proprietary driver woes? Makes a change! Right, is this a single message right in the middle of your screen with nothing else displayed? If so, it is a message from your monitor telling you that the computer is sending a signal that is out of its normal range. It usually means the computer is trying to display too high a resolution or with too high a frequency. This is caused by the installer incorrectly recognising the monitor, so its idea of what it thinks the monitor can handle is different from the monitor's. There is a simple answer, as this affects only the installer, and that is to force the installer to use a lower resolution. Press the F3 key at the boot menu screen to select a different resolution. Work your way up the menu (lower resolutions are towards the top of the list) until you find a setting that works. As a last resort, you can install in text mode. This is less attractive and takes getting used to, but you end up with an identical installation.

This problem affects only the installation; once it is complete, you will be able to choose suitable video settings to ensure you have a graphical desktop. It may well detect your monitor correctly at this stage. MS


Where is Windows?

I have just installed Mandriva 2005 from your Special edition [Linux Format Special #1]. This is the second time I've done this. The first time I could read my Windows hard drives but this time I can't. I appear to be locked out. How can I get access to these disks as I did last time? The previous installation was on another hard drive, which I don't have any more.

Barry Simpson


The solution to this depends on two things: the type of filesystem you are using on your Windows partition and what you mean by "locked out". If you had full read and write access to the Windows partition before, it is most likely using the FAT32 filesystem. In that case, if you mean you are able to mount the partition but not write to it, or look inside its directories, this is a simple permissions problem.

Fire up the Mandriva Control Center, go into the Mount Points section and select Create, Delete And Resize Hard Disk Partitions. Select your Windows partition, go into Expert mode and press the Options button. The box in the middle of the Options window will probably contain 'defaults'. Tick the box labelled Umask=0, followed by OK and Done. You now need to remount the partition to apply the new settings. You could do this by rebooting, but this is Linux, not Windows, so open a terminal and type
  su -c "mount /mnt/windows -o remount"
replacing /mnt/windows with wherever your Windows partition appears. Give the root password and you can now read and write to your Windows partition. The reason for this is the umask=0 that you added to the partition's mount options. The Windows FAT32 filesystem doesn't have any file permissions of its own. This option tells the system to treat all files and directories as readable and writable by everyone.
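To make the same setting permanent by hand, the umask=0 option goes in the partition's /etc/fstab entry; the result looks something like this (the device name and mount point are examples, use whatever your system shows):

```
/dev/hda1   /mnt/windows   vfat   umask=0   0 0
```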

If your Windows partition uses the NTFS filesystem, the situation is more difficult. While read access for this filesystem has been around for a while, full read/write access has only recently become really usable. Read access can be enabled by following the steps outlined above, but replace the remount command with
 su
 umount /mnt/windows
 chmod 777 /mnt/windows
 mount /mnt/windows
You should now be able to read from, but not write to, your Windows partition. While it is theoretically possible to enable write support with this distribution, it is rather limited and more trouble than it is worth.

Mandriva 2005 is generally considered to be rather old now, and in the intervening time things have moved on a lot in this area. I recommend you upgrade to the latest release: Mandriva 2007 Spring was on last month's DVD. NB


Taking the plunge

I'm new to Linux, and I have decided to completely wipe Windows XP from my laptop and just have Linux. I am dual-booting XP and Ubuntu; could you please tell me how to remove Windows and just have Ubuntu? How would I expand the Linux partitions to take over the space where Windows XP used to be? As I am a bit of a newbie, would it be easier just to totally format the drive and reinstall Linux?

Pub Bloke, from the LXF forums


To answer your last question first, reinstalling Ubuntu from scratch and taking the option to use the whole disk would indeed be an easy way to do this, but you'd lose your existing setup and data. Removing the Windows partition and allocating the space to Linux would leave your existing Ubuntu setup intact, and you'd learn more about how Linux works in the process.

Removing Windows is easy. The first step is to delete the Windows partition (usually hda1) using the Gnome Partition Editor available from the System > Administration menu. If this isn't available, you should install GParted from the Synaptic package manager.

The Windows partition is usually easy to identify, because the filesystem is NTFS (or possibly FAT), and Linux doesn't use these filesystems. Next click on the unallocated space this leaves and press the New button to create a new Linux partition of type ext3 (the default settings should be correct for this).

Now, with the new partition still highlighted, go to the menus and select Partition > Format To > Ext3 (see screenshot, right). Press Apply to make these changes. The next step is to remove the Windows entry from the boot menu. Open a terminal and type
sudo -i
gedit /boot/grub/menu.lst
to load the boot menu into an editor. Towards the end of the file you'll find a line starting 'title Windows'. Delete from this line down to the next blank line and save the file. Your boot menu is now Windows-free.
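For reference, the stanza you're deleting typically looks something like this; the exact title and partition reference depend on where Windows was installed:

```
title Windows XP
rootnoverify (hd0,0)
chainloader +1
```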

Adding the space you've just freed up is somewhat less straightforward. Linux partitions can only be resized by moving the end position, yet the space you've freed up lies before the beginning of the Linux partitions, because the Windows partition was the first on the disk. Fortunately, Linux allows you to use multiple partitions; in this case we can use the space previously taken by Windows as your home directory (an advantage of this approach is that if you reinstall or switch to a different distro, you can keep your personal files because they're on their own partition). You tell the system to use this partition for home by adding a line to the file /etc/fstab (filesystem table). In the terminal you've just used, type
gedit /etc/fstab
Add the following line and save the file:
/dev/hda1 /home ext3 defaults 0 0
Before you reboot, which will activate the new home partition, you need to copy your existing files across. Still in the terminal, type
mkdir /mnt/tmp
mount /dev/hda1 /mnt/tmp
mv /home/* /mnt/tmp/
reboot
This mounts the new partition somewhere temporary, moves your home directory over to it and reboots the computer to make the changes permanent. After this, there will be no sign of Windows at the boot menu, and when Ubuntu comes up, the space previously used by Windows will be available for storing your own files. NB


A pesky kids proxy

I've been running a Squid (and SquidGuard) web proxy on my Fedora Core 6 box ever since I read about it in the very first Hardcore Linux tutorial, in LXF75. I've set up SquidGuard blocking rules to protect my children from undesirable content. What this means is that on their (Windows XP) machine, I set the internet to route through my proxy server (192.168.100.100:8080), and all is well.

What concerns me is that my eldest is becoming quite savvy and it won't take him long to realise that if he unticks the box marked Use Proxy Server and switches to a direct connection to the internet, he'll get unfiltered access. Can I force all traffic to go through my (always-on) FC6 machine, perhaps by setting up port forwarding on the router (to which only I have the password), so all web traffic has to go through the proxy server and if he switches to a 'direct' connection he will get no internet? If so, how? I've tried redirecting port 80 and 8080 to the IP of my PC but that doesn't seem to work.

Mark, from the LXF forums


By "the internet" I take it you mean the world wide web, which is all that Squid normally handles. However, you can force all internet traffic to go through your FC6 box and then through SquidGuard with three steps.

First, and how you do this depends on your router, you have to configure your router so that it only allows your FC6 box to connect to the internet. The port forwarding you set up only affects incoming connections, so remove that.

Secondly, you need to set your FC6 box up as a default gateway, so all internet traffic (not just web traffic) goes through it. Edit the file /etc/sysctl.conf, as root, and change the line
net.ipv4.ip_forward = 0
to end in 1 instead of 0. Now run
service network restart
You should now reconfigure your children's computer to use the IP address of your FC6 box as its network gateway. Because you have disabled their access via the router, this is now the only way they can connect to the net.

That still leaves the problem of your children removing any proxy setting, so now we use a feature of Squid called transparent proxying. This forces all web requests going through the machine (and you've already forced that with the previous steps) to go through Squid's proxy and hence through SquidGuard. Edit the Squid configuration file (usually /etc/squid/squid.conf) and find the line(s) starting 'http_port'. This probably reads http_port 8080 in your file. Change this to
http_port 80 transparent
The 80 sets it to work on the standard HTTP port. The transparent option makes Squid intercept and handle all requests, regardless of whether the browser is configured to use a proxy server or not. You should either remove the old proxy settings from the browsers or add a line to handle requests to the old 8080 port.
http_port 8080 transparent
There is an alternative way of handling this. You can leave http_port set to 8080 and use an Iptables rule to forward all port 80 requests from addresses that you want to proxy to port 8080. This is more complex but it gives more flexibility, such as allowing some machines to bypass the proxy altogether. There are details on this on the Squid website at www.squid-cache.org.
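As a sketch of that Iptables approach (the interface name is an assumption; see the Squid site for the full recipe), a single NAT rule redirects web traffic arriving from the LAN to Squid's port:

```shell
# Redirect HTTP arriving on the internal interface (assumed eth0 here)
# to Squid listening on port 8080. Must be run as root.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```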

You could also use Iptables, or one of the many front-ends such as Firestarter, to block outgoing traffic to all but the common ports (such as HTTP, HTTPS, POP3, SMTP and FTP). This will prevent your children from using a remote proxy that works on another port. You could possibly do this on the router; however, implementing it on the FC6 box would allow you to block them but still have unrestricted internet access for yourself. NB


Bringing order to chaos

I have a photo collection that has got out of hand: several gigabytes' worth. I need to organise them so I can get a good backup. Do you know of a program that will rename a file based on the EXIF date of the image and change the Modified Date of the file to the same EXIF date? My last attempt at a backup before I wiped my PC managed to set all the file dates to when the DVD was burned.

Also, I've managed to get myself several duplicate images spread across my entire collection (yep, I really messed up), each with different filenames. Any idea how I could sort them (maybe with EXIF data again) without having to look at a few thousand photos?

If it helps, I'm using Fedora Core 6 64-bit and I'm not scared of the command line.

NiceBloke, from the LXF forums


There are several programs capable of working with EXIF data. My favourite is ExifTool (www.sno.phy.queensu.ca/~phil/exiftool). ExifTool can read and manipulate just about any EXIF information, including extracting the Date/Time Original or Create Date EXIF tags. You can use this information to rename the files or change their timestamps. For example:
find . -name '*.jpg' | while read PIC; do
DATE=$(exiftool -p '$DateTimeOriginal' "$PIC" | sed 's/[: ]//g')
touch -t $(echo "$DATE" | sed 's/\(..$\)/\.\1/') "$PIC"
mv -i "$PIC" "$(dirname "$PIC")/$DATE.jpg"
done
The first line finds all *.jpg files in the current directory and below. The next extracts the Date/Time Original tag from each file (you may need to use Create Date instead, depending on your camera) and removes the spaces and colons. The next line sets the file's timestamp to this date; the horrible-looking sed regular expression is necessary to insert a dot before the final two characters, because the touch command expects the seconds to be separated from the rest of the time string like this. The final command renames the file, using the -i option to mv in case two files have the same timestamp. This will stop any files being overwritten.
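To see what that sed expression actually does, feed it an EXIF timestamp that has already had its colons and spaces stripped:

```shell
DATE=20070321154233                # YYYYMMDDhhmmss, colons and spaces removed
echo "$DATE" | sed 's/\(..$\)/\.\1/'
# prints 200703211542.33 - the [CC]YYMMDDhhmm.ss form that touch -t expects
```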

It's also possible to do this with most digital photo management software without going anywhere near a command line: DigiKam, KPhotoAlbum, F-Spot and GThumb all have options for manipulating files based on the EXIF data.

The disadvantage of using these programs for this is that they generally only work on a single directory at a time, whereas the above shell commands convert all JPEG files in a directory and all of its sub-directories. If you have several gigabytes of photos in the same directory, your collection is more out of hand than renaming some files will fix!

The solution to your duplicates problem is a program called fdupes (available from http://netdial.caribe.net/~adrian2/fdupes.html or as an RPM for FC6). This compares the contents of files, so it will find duplicates even if they have different names and timestamps.
fdupes --recurse ~/photos
will list all duplicate files in your photos directory. There are also options that you can use to delete the duplicates:
fdupes --recurse --omitfirst --sameline ~/photos | xargs rm
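If fdupes isn't packaged for your distro, a rough equivalent can be improvised from coreutils. Like fdupes, it compares file contents (via checksums) rather than names, though it has none of fdupes' safety options:

```shell
# Checksum every JPEG, sort by checksum, then show only groups of files
# sharing an MD5 sum; each blank-line-separated block is one set of
# duplicates. -w32 compares only the 32-character checksum column.
find ~/photos -name '*.jpg' -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
```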
Be careful of any option that automatically deletes files. Run without deletion first so you can see what's going to happen. PH


ISO frustration

I was reading LXF91, and I wanted to use the Jigdo system to create an ISO for Fedora [Essential Linux Info]. The magazine could be clearer with the instructions for this. For example, how to run the command mkiso and what to do if you have the DVD mounted with noexec.

Do I need to copy the filesystem over to a local filesystem and possibly move jigdo-file? I'm not sure if this has been done before. Anyway, I encountered the following command line errors:

    sudo ./mkiso
    Creating FC-6-i386-livecd-1.iso
    general:      Image file: /home/user//FC-6-i386-livecd-1.iso
    general:      Jigdo: /home/user//FC-6-i386-livecd-1.iso.jigdo
    general:      Template: jigdo/FC-6-i386-livecd-1.template
    Skipping object `../..//.mozilla/firefox/
    i4faho56.default/lock' (No such file or directory)
    Found 0 of the 5 files required by the template
    Will not create image or temporary file -
    try again with different input files
    general:        [exit(1)]
    ISO image written to /home/user//FC-6-i386-livecd-1.iso
    Verifying MD5 checksums...
    md5sum: FC-6-i386-livecd-1.iso: No such file or directory
    FC-6-i386-livecd-1.iso: FAILED open or read
    md5sum: WARNING: 1 of 1 listed file could not be read
    Verification failed, or you do not have the md5sum program installed.
    In the latter case, you probably have nothing to worry about.
Please let me know what to do with the incorrect md5sum error, and if I can get around the verification check to get the ISO created.

Todd


Firstly, you don't need to run mkiso as root, because the only place it needs write access is your home directory. Secondly, you should run it from the place where you want to create the images, or provide the destination directory as an argument to the script.

If you cd to the Fedora directory of the DVD and run mkiso, as you have, it will still work and write the ISO image(s) to your home directory, but it won't be able to generate its cache files. This is less important in this case, because you're creating only one ISO image, but it makes a huge difference to the time taken to write multiple ISO images.

If the DVD is mounted with noexec, you can still run the script with sh, because you're then running sh from your hard disk and the script is simply a data file it loads:
sh /path/to/dvd/Distros/Fedora/mkiso

To get to your specific problem, the clue is in the line: 'Found 0 of the 5 files required by the template'. For some reason, Jigdo couldn't find any of the files it needed. I suspect this is due to your use of sudo to run mkiso, resulting in the script getting confused about where it was run from and not knowing where to look for the files (the search path is relative to the script's location).

The other possibility is that your DVD has a read error, although this is unlikely as all five files failed. Such a damaged disc would probably not work at all.

Skipping the MD5 check is like fixing low oil pressure in your car by disconnecting the warning light! As you can see from the output, the reason the ISO file failed the MD5 check is that there was no file to check. Incidentally, you can also run mkiso -h to see the usage options. MS


Firefox failing

I cannot get Firefox to 'see' the modem connection that I've painstakingly set up. I'm fairly sure that it's working correctly, as running pon from the command line causes the modem to dial out, and poff makes it hang up. However, activating Firefox from the desktop is the problem. The Ethernet connection to broadband works fine, but disabling it and making the modem the default connection brings up the 'server not found' screen. The modem is a Rockwell IQ148 and I'm using Ubuntu Dapper 6.06. I'm trying to set up the computer for my partner, who doesn't have broadband but has been gradually converted from XP by using my machine.

Richard Ayres


This is almost certainly a general problem with your internet connection and not specifically related to Firefox. It sounds like your system is still trying to use the Ethernet connection. Type this in a terminal:
route -n
The line we're interested in is the last one beginning '0.0.0.0' as this is the default route for all non-local connections. I suspect it looks something like this:
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
The gateway address in the second column will probably be different, but if the line ends in 'eth0' (or anything but 'ppp0') this is the cause of your troubles. You need to make sure the eth0 settings are purged from your system, especially if you'll no longer be using Ethernet with your broadband provider, by selecting it in the Network Settings window and pressing the Delete button (the middle of the three obscure-looking buttons at the top-right of the window).
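If you want to check this quickly, you can pull the default route's interface out of the route -n output with awk. This sketch uses captured sample output so it runs anywhere; on your machine, pipe the real route -n straight into the awk command:

```shell
# Sample 'route -n' output (substitute the real command's output)
routes='Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0'

# The default route is the line whose destination is 0.0.0.0;
# its last field is the interface carrying all non-local traffic
echo "$routes" | awk '$1 == "0.0.0.0" { print $NF }'
```

If this prints eth0 when you expect the modem to be in charge, the stale Ethernet route is the culprit.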

Another possibility is that your modem connection hasn't completed. The fact that the modem dials out doesn't guarantee that a connection is made. Try running
sudo plog
after an apparently successful modem connection. This will show you the last few lines of the connection log: it should be obvious if anything has gone awry here. You can also check the status of your network connections with
/sbin/ifconfig -a
If the eth0 interface appears, it shouldn't be marked 'UP' nor have an 'inet addr' entry. Equally, ppp0 should be marked 'UP' and have a valid address. It's also possible that you're connected to your ISP but not able to look up internet domain names. Run these two commands in a terminal:
ping -c 5 www.google.com
ping -c 5 216.239.59.104
The first attempts to contact Google by name, the second bypasses the DNS lookup and goes directly to its IP address. If only the latter works, your DNS information hasn't been correctly set up. You'll need to contact your dial-up ISP and get the addresses of the DNS servers, then put them into the file /etc/resolv.conf. It should look something like:
nameserver 1.2.3.4
nameserver 1.2.4.5
You can either edit the file directly or use the DNS tab of the Network Settings tool. It's possible you still have your broadband ISP's name servers in here. These should be removed. If you're still having problems after all this, post some more information, including the output from the above commands, in the Help section of our forums at www.linuxformat.com. NV


Joomla joy

I'm a very old greybeard who is the owner of the last two years' issues of Linux Format and Ubuntu 6.10. However, I've got one very big problem that I have never been able to understand: I cannot install programs from the enclosed DVD.

I've read 'How to install from source code' in the magazine several times [Essential Linux Info] and also own the following books: The Official Ubuntu Book, Beginning Ubuntu Linux and Ubuntu Unleashed, without understanding what I'm doing wrong. Is it possible that you could help me, and write exactly what I should type in the terminal when I want to install, say, Joomla 1.0.11 from one of your DVDs?

Per Neergaard Mahler


You have picked an unusual example, because Joomla is a web application that needs to be installed to a directory accessible by your web server. Also, it's written in PHP, a scripting language, so it doesn't need compiling. However, here are the steps to install it.

You'll first need a web server installed. The standard one is Apache, so run the Synaptic package manager and make sure that apache2 and libapache2-mod-php5 are installed. Test Apache by typing 'http://localhost' into your browser.

If you don't get an error, you can proceed to installing Joomla itself. Open a terminal and type
sudo mkdir /var/www/joomla
sudo tar -xf joomla-1.0.11.tar.bz2 -C /var/www/joomla
The first command creates a directory into which to install Joomla. The second command unpacks the Joomla archive into that directory. This presumes that joomla-1.0.11.tar.bz2 is in your current directory, otherwise you'll have to give the full path to the archive. Now load 'http://localhost/joomla/' into your browser and you'll be taken through the installation and setup process.

As a general rule, for all packages supplied as a tar archive, the initial steps should always be the same: unpack the archive and inspect the contents for a file containing installation instructions, usually called README, INSTALL or something equally obvious.

The 'How to install from source code' instructions in the magazine apply to packages using the standard source code installation process, which applies to the vast majority of packages but by no means all of them, as Joomla so ably demonstrates. NB


Shell mail

I'd like to use the shell for my email. Can you tell me how this can be set up? I currently use Ubuntu 6.10.

jmullin, from the LXF forums


Do you mean you want to run a mail client within your shell, or do you want to be able to send mails from shell scripts? There are several terminal-based mail programs, the most popular of which is Mutt (www.mutt.org). Mutt is included in Ubuntu's main repository, so you can install it from Synaptic.

If you want to send emails from a Bash script, the mailx command is the simplest solution, and is probably already installed on your system. This program mails whatever it receives on standard input to a specified address. For example:
echo "Hello World" | mail -s "Obvious example" me@example.com
The subject of the mail is given with -s (use quotes if it contains spaces), and everything received on standard input forms the body of the mail, so it's good for mailing program output. MS


Racing uncertainty

I have Ubuntu 7.04 Feisty Fawn installed on a Sony VAIO VGN-FJ250P laptop. I'm satisfied with almost all aspects of this distro, with just one or two niggling problems. The one I've been putting the most effort into recently regards OpenGL. It seems not to work on this Linux system. I know that it's supported by the Intel video chipset, because I dual boot with Windows XP Pro, and OpenGL applications run fine there.

One of the affected applications is Planet Penguin Racer, which ran fine on the Live CD but doesn't run when installed on the hard drive. Attempting to run it from the menu produces nothing, while attempting to run it from the command line in a terminal produces the following error message:

    *** ppracer error: Couldn't initialize
    video: Couldn't find matching GLX
    visual (Success)
    Segmentation fault (core dumped)
Jim Smith


The good news is that OpenGL works on your hardware with the Live CD, so the hardware is supported and the software is present on the CD. This is a configuration problem, almost certainly in xorg.conf, caused by the installer not setting up your graphics card correctly.

Boot from the Live CD, mount one of your hard disk partitions or a USB pen drive, and copy /etc/X11/xorg.conf to it. Now boot from your hard disk and compare its copy of xorg.conf with the one you just saved.

The most likely cause is that your hard disk version of the file is either using the wrong driver (the Driver line in the Device section of the file) or that the GLX module isn't being loaded. Before you make any changes to this file, save a backup copy: you don't want to make things worse.

The correct driver for your hardware should be i810, although using whatever is in the Live CD version of the file will work. The GLX module is loaded by including this line in the module section of xorg.conf:
Load "glx"
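For reference, the two sections involved look something like this. The Identifier string is arbitrary, and this is an illustrative sketch rather than a complete xorg.conf; use the values from your Live CD copy of the file:

```
Section "Module"
    Load "glx"     # the OpenGL extension ppracer needs
    Load "dri"     # direct rendering, usually wanted too
EndSection

Section "Device"
    Identifier "Intel Graphics"
    Driver     "i810"
EndSection
```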
If both of these are set correctly and OpenGL doesn't work, you could work through the two files, looking for differences and trying to identify which one is the cause. Or you could simply replace the installed file with the Live CD version, knowing it will work. NV

Installing from DVD

I am brand new to Linux, and have successfully installed the OpenSUSE 10.2 distro that came on the LXF89 DVD in your magazine. Awesome! So far I am very impressed by what little I've seen in the Linux program. My question, however, will reveal my ignorance: how do I get the other programs on the DVD (like FLPhoto) from the DVD to my computer, and have them become usable? I have found no instruction on moving, installing, incorporating or otherwise getting the programs 'installed' and making them usable. I am sure this is because it is assumed that anyone using the DVD and Linux knows how to do this, but I have no clue.

Michael Mueller


Installation methods vary according to how the software is packaged. In the example you give, there are two files containing FLPhoto: flphoto-1.3-source.tar.gz and flphoto-1.3-linux-intel.rpm. RPM is the package format used by SUSE, so the latter file is the one you want. You can install it using Yast, SUSE's 'do everything' administration program, by double-clicking on the file. You'll be asked for the root password, which is needed because installing software involves writing files to system directories; then the installer will pop up and you need only click Install.

If there is no RPM file, you are left with the option of installing from the source code tarball (these files generally end in .tar.gz or .tar.bz2). The Essential Linux Info section of the magazine gives generic instructions on installing from source code on page 78. Installing from source requires a compiler, which SUSE doesn't install by default. It is on the installation DVD though; all you need to do is fire up Yast, click on Software Management, type gcc into the Search box, select only the gcc package and click on Install. The gcc package will also install any other components needed to be able to compile software from source. NB


Optimum Optus

I have broadband with Optus in Australia. I was unable to get Linux to connect to the modem (a Siemens SpeedStream 4200) until I got Gentoo 2006.1. Once it worked I found the difference was that it used dhcpcd and the others used pump or dhclient. Mandriva 2007 offered all three, but only dhcpcd worked. When I checked, I found that dhcpcd uses the -h hostname option.

I haven't been able to get the other programs to work, but I have been able to get other distros (DSL-N, Knoppix and Ubuntu) connected by mounting the Gentoo partition and running dhcpcd, which brings up the net immediately. The others get what looks like a valid IP address but don't connect or drop the connection before I can use it. I think Optus has a special version of the modem, but it does use other brands of modem. I originally used the Windows setup disk on XP until I found Gentoo worked. What is the difference between the programs?

Peter Sorensen


It seems that this is a fairly common problem with this modem. When used with some DHCP programs, it does exactly what you describe: it gives out an IP address and then drops the connection. There are two possible solutions, the first of which you have already discovered. By using dhcpcd, you can pass a hostname to the modem with the -h option. There is an equivalent option with pump, -u or --hostname, but not with dhclient. It would appear that either the modem or your ISP is very picky about the format of any DHCP requests you send.

Given the apparently flaky nature of the DHCP support in this modem, the second solution would be more reliable: use static addressing. You need to find the IP address of the modem, which you can do after a successful dhcpcd negotiation, by connecting through Windows or by trial and error. The default address varies according to the ISP it is intended to be used with, but the default for OptusNet should be 10.1.1.1. Once you know the modem's IP address, it is easy to configure your computer's Ethernet interface to use a static address. Pick an address on the same subnet as the modem, say 10.1.1.2, and set the gateway and DNS server addresses to that of the modem (10.1.1.1). The netmask needs to be set to 255.255.255.0.
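On your Gentoo system, for example, the static setup amounts to something like this in /etc/conf.d/net, plus a nameserver entry. This is a sketch matching the 2006-era baselayout syntax; other distros keep the same three values (address, gateway, nameserver) in their own network configuration tools:

```
# /etc/conf.d/net
config_eth0=( "10.1.1.2 netmask 255.255.255.0" )
routes_eth0=( "default gw 10.1.1.1" )

# /etc/resolv.conf
nameserver 10.1.1.1
```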

With a setup like this, you should have no more problems. DHCP is a great time saver when working with larger networks, or when moving from one network to another with a laptop. For a small home network, it is usually simpler to just give each device its own static address. NB


Mail from anywhere

I've been using Thunderbird as a mail client and am very happy with it. I've now acquired two laptops and would like to be able to access my mailbox from all three machines, ie for them all to share the same contents. My requirements in a nutshell:

  • I want to download all mail from the ISP once (I don't want to leave it on the ISP).
  • I want to use Thunderbird as the mail client on all machines.
  • All machines should share the same set of mailboxes so that I can, for example, send email from laptop 1 and be able to see the sent emails on laptop 2 and desktop as well.
  • It should be able to run on Mac OS X 10.4 as well as Linux.
  • It should be open source.
I've tried simply sharing out the mailbox directory using Samba, but this doesn't seem to work: it seems to screw up index files.

sfinnie, from the LXF forums



There are two ways you can achieve this. One is to use POP3 to collect mail from the server and synchronise the mail storage directories on the two machines. Unison (www.cis.upenn.edu/~bcpierce/unison) is excellent for performing this task, as well as for synchronising any other part of your home directories that you wish to keep up to date on more than one box. Unison is best suited to keeping two computers in sync: I use it to keep my laptop and desktop up to date with each other. It uses the rsync protocol to save bandwidth and time but, unlike rsync, it can handle situations where each computer has had files updated since the last sync. Using it with three machines would require a little more effort to begin with, but would certainly be workable.

Your other option, which applies only to email, is to run your own IMAP server on your desktop machine. Here you would run Fetchmail to pull messages from your ISP and store them locally, then point the mail programs on all the computers to the IMAP server on the desktop (you do this on the desktop too, setting the server to localhost). Your mail is stored on the server and so is status information, so when you read a mail from one computer it is marked as 'read' on all of them. Unlike with POP3, with IMAP you leave your mail on the server and can read it from anywhere with an internet connection. Most mail clients have an option to synchronise their local store with the server, so you can also keep local copies of mails for reading when offline.
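A minimal ~/.fetchmailrc for the desktop might look like this. The server name, login and delivery agent here are placeholders that you'd replace with your ISP's details and with whatever delivers into your IMAP server's mail store:

```
poll pop.example-isp.net protocol pop3
    user "yourlogin" password "yourpassword"
    mda "/usr/bin/procmail -d %T"
```

Remember to chmod 600 the file, as it contains your password; fetchmail will refuse to use it otherwise.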

I prefer to use Dovecot (http://dovecot.org), but the easiest choice is probably to use whichever IMAP server your distro defaults to, as that will be largely set up on installation and have the most support from your distro's forums or mailing lists. For something as straightforward as your needs, you shouldn't have to move very far, if at all, from the default configuration. The Dovecot wiki, at the above address, has plenty of information on setting it up. One other advantage of this system, if there is more than one person in your household, is that you can get Fetchmail (possibly with the help of Procmail) to sort your mail into separate mailboxes for each user. Then each can access their mail using the same IMAP server (but a different login name of course). NB


Switchdesk or su?

I've finished installing Fedora Core 6, and have downloaded the 200-plus upgrades. I need to get out of root but can't locate the switchdesk command. I don't want to reinstall to get this.

Ron White


You shouldn't be running the desktop as root in the first place! There is never any need to do this, which is why some distros make it difficult to load a root desktop. The system administration programs can all be run from a standard user desktop. When they need root privileges, they will ask you for the root password, then drop root privileges when they no longer need them. If you need to run any other programs as root, open a terminal, type su - to become root, then run whatever programs you need from there. This is far safer than running the entire desktop as root, although it goes without saying that you should quit any programs run as root as soon as you have finished with them.

Switchdesk is still available. Select Add/Remove programs from the Applications menu and type switchdesk into the Search tab: you will probably want switchdesk-gui as well as switchdesk. Once it's installed, you can run it from System > Preferences > More Preferences > Desktop Switching Tool. However, this is not the correct way to run programs as root; switchdesk is intended to allow users to switch desktops, hence the name. Keep the root user where they belong: locked in a box only to be let out when needed.

You should rarely need to reinstall a Linux distro. The computer I am using now is three years old, as is the Linux installation running on it: it has been frequently updated but never reinstalled. Reinstalling doesn't fix problems, it merely removes the whole environment containing the problem... until the next time it occurs. If you fix the problem itself, instead of wiping the whole system, it should go away forever or, even if it doesn't, be easier to fix the next time it occurs. NB


Use the source

I am very new to Linux, although there do seem to be some similarities to the Amiga of years past. After a few attempts I have finally installed Fedora Core 5, dual booting with Win XP. I have tried to install FreeBasic using your instructions 'How to install from source code' [Essential Linux Info], with no success so far! It does not seem to recognise ./configure and some of your other instructions.

Gordon Owttrim


One significant difference between Linux shells and the Amiga shell is that Linux does not include the current directory in the path by default, whereas AmigaDOS did. The Linux way is more secure, but it means you have to specify the path when running a script or program from the current directory. The current directory is denoted by '.', so ./configure means "run the program or script called configure in the current directory". It should now be clear that the command ./configure only works when the file configure exists in the current directory.

Compiling from source usually involves unpacking the tarball, changing to the directory created by the previous step and running ./configure, followed by make and make install, something like this:
tar xfvz foo-1.2.3.tar.gz
cd foo-1.2.3
./configure
make
make install
While this applies to more than 90% of Linux applications, there are many exceptions. After running cd, look for files called README or INSTALL. These contain specific instructions on compiling and installing that particular application. In the case of FreeBasic, if you want to install from source, you have to do the configure-make-make install dance several times, after downloading two archives. Alternatively, you may have the precompiled binary archive, FreeBASIC-v0.16b-linux.tar.gz, which uses a completely different installation method with its own install script. Read the file readme.txt inside this archive for precise details on installation. We ask you to read the file rather than reproduce the instructions here, because there may be subtle changes in the installation process between versions. The readme.txt file should be considered authoritative.

Always look for installation instructions when installing from an archive (as opposed to using a distro's package manager), as you are executing commands as root that could have an adverse effect on your system if done incorrectly. NB


OOo bother!

Regarding OpenOffice.org 2.1 from LXF90's DVD: I am very new to Linux so I've no idea how obvious the answer to my problem is, and any answer probably needs spelling out to me. Following the instructions in the magazine I tried to install it into OpenSUSE 10.2. Everything went well until I entered

su -c "rpm -ivh *"
which returned the message: 'desktop-integration: not an rpm package (or package manifest): Is a directory'. This is where my scant knowledge fails me. I did try what seemed the obvious course of action and moved the desktop integration folder elsewhere, but that didn't seem to work. I did try tinkering around with some other stuff but I was really stumbling around in the dark.

Nicholas Cater


When the shell sees a * on the command line, it replaces it with all matching files: * means "match any string". In this case, it matches all the RPM files and the desktop-integration directory. The solution is to be more specific and use
su -c "rpm -ivh *.rpm"
This now matches anything that ends in .rpm, which is what you need. If you also want to install the RPM files in the desktop-integration directory, extend the command to include these:
su -c "rpm -ivh *.rpm desktop-integration/*.rpm"
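You can preview exactly what the shell will hand to rpm by using echo, which expands the patterns without installing anything. A sketch with a mocked-up directory layout standing in for the OpenOffice.org one:

```shell
# Recreate the layout: top-level RPMs plus a desktop-integration directory
mkdir -p /tmp/ooo-demo/desktop-integration
touch /tmp/ooo-demo/core.rpm /tmp/ooo-demo/writer.rpm
touch /tmp/ooo-demo/desktop-integration/suse.rpm
cd /tmp/ooo-demo

echo *       # the bare glob matches the directory too
echo *.rpm   # only the top-level packages
echo *.rpm desktop-integration/*.rpm   # both sets of packages
```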
Note that adding desktop-integration/* will not work, because not all of the files in that directory are RPM packages and you will end up back at your original error. MS


Modem malaise

I have installed Mandriva 2007 from your coverdisc [LXF87] to a Dell 5150. The Dell has no serial or parallel ports, just USB. I have a Sitecom 56k V.92 USB modem, model DC-009, but I cannot get Mandriva to connect to it. The USB keyboard and mouse work fine. I can get the cdc_acm module to load with modprobe but it does not seem to connect to a tty, although KPPP sees ttyS0 and ttyACM0 and reports `modem busy' when I query the modem on either ttyS0 or ttyACM0.

I have trawled the net to no avail, including www.linux-usb.org. The modem worked out of the box in Windows XP. These issues should surely be a thing of the past by now?

Martin Lawrence


It would appear that this modem is either not fully supported or needs some kind of firmware file. This particular modem failed to show up on a web search, but that is not too surprising. Many of these devices are made by one manufacturer and branded by another. However, all such devices need to be approved by the FCC (Federal Communications Commission) for sale in the USA, so you can find out what it really is from its FCC ID code. Type the code into the box at the bottom of www.hardwaresecrets.com/page/fcc to find out who really made your modem. Once you are in possession of that information, a search of sites like www.linux-usb.org should prove a lot more fruitful.

There is a further complication that has a bearing on your situation: USB modems can be problematic because they are not truly standardised, with only some of them conforming to the CDC-ACM specification. Your USB keyboard and mouse work well because they all conform to the same standards (USB HID). One way to sidestep this problem is to use a serial modem via a USB serial adapter. I have a couple of these devices, using different chipsets and both bought cheaply from eBay, and they both work very well with the majority of the serial devices I have tried (a UPS being the only exception). This way you can use any serial modem, as well as any other serial devices you may wish to use with this computer. It seems that we have managed to get rid of parallel ports and floppy disc drives, but the old serial port just won't go away. NV


Qemu query

The Qemu emulator looks interesting, but it is impossible to get working as you describe it. It says it needs GCC 3 to compile, and OpenSUSE 10.2 has only GCC 4. Is there any way round this? May I ask if another member of your staff tries out your various programs and ideas before committing them to the magazine?

Joseph Lamb


Qemu is one of the very few programs that still fails to compile with GCC 4, but it is unfortunate that distros like OpenSUSE no longer have GCC 3 packages available. It is still possible to install GCC 3 on your computer, either directly from source or by using the RPM packages from Fedora Core 5, which are reported to work with OpenSUSE 10.2.

However, this is a lot of work for a single program, and there are precompiled packages available. One OpenSUSE 10.2 user has compiled Qemu and made it available from www.hasanen.com/files/linux/qemu.tar.gz. Now there is also a package on SUSE's website. Point your browser at http://download.opensuse.org/distribution/SL-OSS-factory/inst-source/suse/i586 and click on the Qemu file (currently qemu-0.9.0-3.i586.rpm but it may have been updated by the time you read this). When the browser asks what to do with the file, select the option to install it and wait for it to be downloaded and installed (you'll need to give the root password when asked).

Alternatively, you can install it from the command line by downloading this file to your home directory, and then entering in a terminal:
su -c "rpm -ihv qemu-0.9.0-3.i586.rpm"
We do take every care to ensure that information given in the magazine is correct, including having another person try out instructions. It is an unfortunate consequence of the large number of distros, and the various permutations of installed packages for each one, that it is impossible to say that anything more complex than echo Hello World will work on every possible configuration without additional steps. The good news is that newer versions of Qemu are likely to be compatible with GCC 4. MS


From me to you

How do I change the 'from' address when using the Linux mail command? It insists on marking mail as from user@user-laptop (user-laptop is my hostname).

hubris, from the LXF forums


This is not possible with the standard mail command without fiddling with the USER and HOSTNAME environment variables, which may have unwelcome side-effects on other programs running in the same shell. However, there are a number of alternative commands that will do what you want. Mutt is able to read the 'from' address from the EMAIL environment variable. This is worth knowing if you already have Mutt installed, but it is a lot more than you need if you only want to send out messages. A small alternative is SMTPClient (www.engelschall.com/sw/smtpclient), which is similar to mail in operation but accepts the --from argument to set the 'from' address. SMTPClient only passes your mail to a suitable SMTP server and defaults to localhost. If you want to use a different server, you will need to specify it with the --smtp-host command line option, or set the SMTPSERVER environment variable. Enter this all on one line:
echo "Hello world - what else?" | smtpclient --smtp-host=my.mail.server --from=hubris@wherever --subject "Hello World" someone@someplace
PH


Mepis mounting

Having successfully converted my computer from running from an old Mandrake Linux system to the SimplyMepis 3.4 distribution issued with LXF79, I found that all ran so smoothly that I soon gave up using F2 during the boot process, as there never seemed to be any warnings. However, a few weeks ago I looked at the booting details and found a message that fsck could not run because the root directory was not mounted read-only. I found no satisfactory resolution on the web, and so was delighted to see the Grub tutorial in LXF90. However, this succeeded only in deepening the mystery.

On my machine, /boot/grub/menu.lst.example contains:

color cyan/blue white/blue
foreground ffffff
background 2f5178
gfxmenu /boot/grub/message
title MEPIS at hda2, kernel 2.6
kernel (hd0,2)/boot/vmlinuz-2.6.12-586tsc root=/dev/hda2 nomce psmouse.proto=imps splash=verbose vga=791
initrd (hd0,2)/boot/initrd.img-2.6.12-586tsc
However, /boot/grub/menu.lst contains:
color cyan/blue white/blue
foreground ffffff
background 0639a1
gfxmenu /boot/grub/message
title MEPIS at hda6, kernel 2.6.15-1-586tsc
kernel /boot/vmlinuz-2.6.15-1-586tsc root=/dev/hda6 nomce quiet vga=791
where the format of the code booting the 2.6.15 kernel corresponds to neither the example file nor the code in the LXF90 tutorial.

Although my OS works despite the warning message, I would obviously prefer to have fsck working where it is designed to, and would be grateful for any suggestions you may have as to the likely cause of this problem and the best way of fixing it.

Peter Nancarrow


The ext2/3 filesystem runs fsck after a set time or number of mounts, which can be changed by tune2fs. It is likely that this problem has been there from day one but that you didn't hit the time limit until after you stopped looking at the boot messages.

There are two main differences between your Grub configuration and the example setting. The first is that yours doesn't use an initrd to provide a splash, which has no bearing on your problem. The second is that no root path is provided for the Mepis boot. The kernel line should start with kernel (hd0,5)/boot/vmlinuz, otherwise you are relying on some indeterminate default. Alternatively, put root (hd0,5) at the top of the file. This is unlikely to affect the boot process's filesystem checks, but may cause more subtle problems.

You can configure Grub to mount your root partition read-only by adding ro to the list of options on the kernel line. The filesystem will be remounted, using the settings from /etc/fstab, early in the boot process but after fsck has been run. Your kernel line should look like this:
kernel (hd0,5)/boot/vmlinuz-2.6.15-1-586tsc root=/dev/hda6 ro nomce quiet vga=791
You can also run fsck yourself by booting in a minimal maintenance mode. When the Grub menu appears, select your Mepis entry, press E to edit it, select the kernel line and press E again. Remove quiet from the options and replace it with ro init=/bin/sh. Press Enter to accept the change and B to boot. This will give you a command prompt and, as the root filesystem is mounted read-only, you can run
fsck -f /dev/hda6 && shutdown -r -n now
This will check the disk and then reboot the computer only if the fsck was successful. NB


Moving tablet

I have a strange problem with my Wacom Graphire3 tablet: I have to change events in my xorg.conf every time I turn on my machine. I'm currently on Fedora 6 with Gnome and my xorg.conf is exactly the same as it was when I was using Fedora 5 with Gnome and the same hardware setup. The tablet worked perfectly at that time. I've tried searching Google with no success, so I don't have a clue what is happening. Any pointers even to help me understand the problem would be great.

Fred Kupferroth


I take it you mean you have to change the number of the event device. Your enclosed xorg.conf contains
Option "Device" "/dev/input/event2"
and you have to change the number. This is because input devices are numbered in the order they are detected, and something is changing the order each time you boot, perhaps another device that is only sometimes connected, such as a memory stick or scanner. The solution is to get udev, the device manager, to assign a persistent name to your tablet, one that it will always have irrespective of detection order.

This is done by writing a udev rule. You first have to see how the system identifies the device with
udevinfo -a -p /sys/class/input/event2 | less
When run on my Aiptek tablet, the third block of output contains:
    SUBSYSTEMS=="usb"
    DRIVERS=="aiptek"
    ATTRS{vendor}=="AIPTEK"
This is plenty to uniquely identify the device; you should see something similar for your Wacom tablet.

To turn this into a udev rule, open a terminal, use su to become root and use your favourite editor to edit /etc/udev/rules.d/10-local.rules (create the file if it does not exist). Now, please do not be tempted to add the rule to an existing rules file, as it may be overwritten when udev is updated: 10-local.rules is the correct place for your own rules.

Now add a line like this, but using the values from when you ran the udevinfo command:
 SUBSYSTEMS=="usb", DRIVERS=="aiptek", ATTRS{vendor}=="AIPTEK", SYMLINK:="input/tablet"
You'll see it's just the attributes that identify the device, separated by commas, followed by a SYMLINK setting.

Note that the attributes are followed by ==, indicating a comparison, whereas the final item uses := because it is assigning a value. Your device will still be created as /dev/input/eventN but it will be linked from /dev/input/tablet, whatever the value of N; running ls -l /dev/input will confirm this. Now you can use /dev/input/tablet in xorg.conf and your tablet should always work.

For more advice, you could look at the tutorial on creating udev rules in LXF66 (although some of the details have changed since then as udev is under constant development) and the useful online tutorial at www.reactivated.net/udevrules.php. NB


Scrambled screen

Last week, I purchased a Dell Latitude (Pentium III) laptop that came from a local university surplus outlet. It formerly had Windows on it, but was sold without any software so we thought, let's try Linux! We got the magazine Getting Started with Fedora Core 6 Linux [a Linux Format Special], tried to load Fedora, and after the initial loading statements, got the attached screen, which was supposed to be the screen in figure 3, page 13.

John Schneider


It looks from the screenshot that you sent us that the Fedora Core installer is incorrectly reading your display details, resulting in a corrupted framebuffer display. There are a number of options that you can pass to the installer to try to correct this. When the splash screen appears, try typing this at the boot prompt and pressing Enter:
linux skipddc
This tells the installer to skip probing your monitor for details and use a (hopefully sane) default instead. If this fails, you can try specifying the display resolution with one of the following:
linux resolution=1024x768
linux resolution=800x600
linux resolution=640x480
If all else fails, you can run the installer in text mode with
linux text
This provides a basic-looking but fully-functional text installer that you can navigate with the cursor, Tab and Enter keys. I should stress that the display problem you have relates only to the installer; it will not prevent you from installing and setting up a graphical desktop during installation. NB


Ripping audio

Can you recommend a format to rip audio CDs to so that they'll play out of the box on patent-free distros? Can you then tell me how to batch convert the 30 or so albums I ripped to MP3 a few years back when I was a Microsoft user?

pootman, from the LXF forums


Ogg Vorbis provides slightly better compression than MP3 and somewhat better quality, and it is completely free. The other free audio compression format is FLAC (Free Lossless Audio Codec). Because this is lossless, there is nowhere near as much space to be saved, but you do preserve every bit of the original track. If you have the hard disk space, FLAC is a good format for storing ripped CD tracks; you can convert them to Ogg Vorbis or MP3 later if you want to fit them on to a smaller device such as a flash-based MP3 player.

Transcoding your existing MP3s to Ogg Vorbis will result in some loss of quality. Starting again for the CDs is a much better option. There are various GUI tools for this, the easiest of which is Konqueror if you run KDE. Pop a CD in the drive and type `media:/' into the location bar. Pick the CD from the list and you see the contents of the disc represented as MP3, Ogg Vorbis, FLAC and Wave files. None of these files is real, of course, but copying them to your hard disk causes them to be encoded on the fly. You can set the compression parameters in the Sounds & Multimedia section of the KDE Control Centre.

If you don't use KDE, I'd recommend Grip from www.nostatic.org/grip. This is a GTK audio player and ripper. For console use, there is abcde (from www.hispalinux.es/~data/abcde.php), which is a shell script that rips, encodes and tags. This is ideal for encoding a batch of CDs; just keep feeding it more CDs as it spits each one out after encoding it. All of these methods use the online CDDB database to add tag information to the files they create.

If you do have to convert your MP3s, there's a script called mp32ogg to do this; you can get it from http://faceprint.com/code. Using it can be as simple as running
mp32ogg musicdir
to get it to convert all MP3 files in musicdir to Ogg Vorbis. There are various options to control quality levels, file naming and whether the originals are deleted; run mp32ogg --help to see them. This script not only converts the music in the files, it also transfers the tags from the MP3 files. NB
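If mp32ogg isn't available, a plain shell loop can do much the same job, though without transferring the tags. This is only a sketch, assuming mpg123 and oggenc are installed; the musicdir path is an example:

```shell
#!/bin/sh
# Decode each MP3 to WAV with mpg123, then encode the WAV to
# Ogg Vorbis with oggenc. Tags are NOT copied across.
for f in musicdir/*.mp3; do
    [ -e "$f" ] || continue        # no matches: skip the loop
    wav="${f%.mp3}.wav"            # song.mp3 -> song.wav
    mpg123 -q -w "$wav" "$f"       # -w writes the decoded audio as WAV
    oggenc -q 5 "$wav"             # produces song.ogg alongside
    rm -f "$wav"                   # drop the intermediate WAV
done
```

The `${f%.mp3}` expansion strips the extension, so each output file sits next to its source.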


Scanning for all

I have recently changed from SUSE 10.0 to OpenSUSE 10.2 via a re-install. My CanoScan N640P scanner worked well with XSane/SANE under SUSE 10.0, but will now only work with root privileges. SANE support for this scanner is good. The XSane/SANE releases are as supplied by the distro, viz 0.991-32 i586 and 1.0.18-34 i586. I performed an online update soon after the install. The SUSE 10.0 SANE version was 1.0.15-20.2 i586.

Yast2 configures the scanner correctly (manually added to give device `canon_pp:parport0') and tests OK with scanimage -d canon_pp:parport0 -T. Also, I can run this command successfully from the command line with root privileges but not as a normal user.

When I invoke XSane as a normal user I get the message `no devices available' with six possible reasons given. Of these the third, `the permissions of the device file do not allow you to use it, try root' seems most likely. However, I can't find a device file for the scanner. Also, as far as I can tell, all the configuration files are set up correctly.

I have tried all four settings of the parallel port ­ normal, ECP (DMA3), ECP/EPP and EPP ­ in the BIOS, all with the same result. Dave Coulstock


You are right in thinking this is a permissions problem. I had exactly the same with my Canon USB scanner. The scanner device should be /dev/parport0 (although it is possible that on some distro setups it is /dev/lp0).

Running 'ls -l /dev/{par,lp}*' will show you all relevant devices and their permissions. You would normally see something like
crw-rw---- 1 root lp 99, 0 Jan 27 11:37 /dev/parport0
This shows that the device is only readable by root and members of the lp group. In this case, the simplest solution is to add yourself to the lp group with
gpasswd -a yourusername lp
Some distros use a `scanner' group instead of lp, in which case you should make the obvious change to the above command. This only affects new logins, so log out of your desktop and back in again. Then try 'scanimage --list-devices' as root and as your normal user. You should see the scanner listed both times.
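A quick way to confirm the group change has taken effect after logging back in is to list your groups; this sketch assumes the lp group, as above:

```shell
#!/bin/sh
# id -nG prints the names of all groups the current user is in;
# after gpasswd -a and a fresh login, lp should be among them.
if id -nG | tr ' ' '\n' | grep -qx lp; then
    echo "member of lp"
else
    echo "not in lp yet - log out and back in"
fi
```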

If the device is owned by root:root or the permissions are not rw-rw----, you need to change these, which you can do with a suitable udev rule. Add this line to /etc/udev/rules.d/10-local.rules:
KERNEL=="parport0", GROUP:="scanner",
MODE:="660", SYMLINK:="scanner"
This sets /dev/parport0 to have rw-rw---- permissions and to belong to the scanner group. It also creates a /dev/scanner symlink, which some software looks for. For more details on udev rules, see the answer to Moving Tablet.
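Once the rule is in place and the device has been re-created, you can verify the result directly (stat -c assumes GNU coreutils):

```shell
# Print the octal mode and group of the parallel-port device;
# after the rule above you should see something like "660 scanner".
stat -c '%a %G' /dev/parport0
# And confirm the symlink was created:
ls -l /dev/scanner
```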

If your device is not parport0 or lp0, you should be able to find it with
dmesg | grep -i -C 3 -e parport -e canon -e sane
NB


Disco inferno!

I've been a casual reader of your magazine for the past year and a half or so. Great job! I found the triple-booting DVD that came with LXF79 about as cool as anything since air conditioning! Now for my question: how do you do it? I want to make something similar for relatives and friends on dial-up to experience the Linux difference. I've got a DVD burner and media, so I think that should take care of hardware. I just need to know how to burn multiple distros and make each one bootable. Thanks a heap in advance. I'll keep enjoying your mag as long as you keep up the good work!

Mark


Making a multi-boot DVD is tricky: you need to be familiar with the Grub and Isolinux bootloaders, and the structure of the distros that you want to combine. First, create a new directory and copy the contents of the first distro CD into it. If you haven't burned the CD, you can loopback mount the ISO image as follows (as root):
mkdir /loop/
mount -o loop discimage.iso /loop/
Once you've copied the contents of the disc into your new directory, you should access the second distro disc in the same way. Look at the files and see if there's any clash with the contents of the first disc. If there is, you'll have to manually hack the distro (possibly even rebuilding it), so in effect you're out of luck. If nothing clashes, or if the clashes are limited to directories called grub or isolinux, that's OK.

Copy the second distro disc's files over to your new directory. Now you have a directory containing two distros. You then need to configure the bootloader for multibooting. Select the bootloader of one of the distros (in the boot, grub or isolinux directory ­ whatever the distro uses), and edit its configuration files (ie menu.lst, isolinux.conf ­ see the documentation for Grub and Isolinux to find out typical filenames).

Edit the configuration file and add boot entries for the second distro; you can get these from the second distro's bootloader directory. The bootloader config file for the first distro should now contain boot entries from the second distro's bootloader configuration file. Still following? It's tough, but make sure you keep track of the bootloader config files of both distros, and you can merge them together.

Then burn the directory to a disc, using the first distro's bootloader directory contents as the boot block; this will be named something like isolinux.bin (for Isolinux) or stage2_eltorito (for Grub). If you've merged the bootloader config files correctly, and no directories from the distros have overlapped, you should be able to boot the new disc and choose your distro from the boot menu. MS
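Assuming both discs use Isolinux and their contents have already been copied into two directories as described, the merge-and-build stage might look like this sketch. The distro1/distro2/merged directory names and the isolinux paths are examples only, not fixed names:

```shell
#!/bin/sh
# Merge the two copied disc trees, then build a bootable ISO
# using the first distro's Isolinux boot block.
mkdir -p merged
cp -a distro1/. merged/
cp -a distro2/. merged/     # assumes no clashing files
# (now edit merged/isolinux/isolinux.cfg to add the second
#  distro's boot entries before building the image)
mkisofs -o multiboot.iso -R -J \
    -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table merged/
```

The -b/-c/-no-emul-boot options are the standard El Torito settings for an Isolinux boot block.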


Burning question

I use the K3b burning software to burn data discs for my job. When you burn a data disc in K3b the time stamps on all the files are made the same. I need to keep the original time stamp of the files I burn but can't find any way to do that. I was wondering if I could get a little help?

fleafly, from the LXF forums


K3b should do this by default, but it is an option you can change. When the K3b Project window opens, after you click Burn, go to the Filesystem tab and ensure that the box for Preserve File Permissions (Backup) is ticked. You should also tick the Generate Rock Ridge Extensions and Generate Joliet Extensions boxes for full compatibility. After setting the boxes as you want, click the Save User Defaults button to have them applied every time.

If this is a regular task, using the same parameters each time, you may find it quicker to use a short shell script to do the job; something like
#!/bin/bash
SOURCE_DIR="$HOME/work/data"
DVD_WRITER="/dev/dvd"
ISO_FILE="$HOME/tmp/image.iso"
mkisofs -rdJ -o "$ISO_FILE" "$SOURCE_DIR" &&
cdrecord dev=$DVD_WRITER "$ISO_FILE"
You need a recent release of CDRecord (or wodim, which includes a CDRecord version that writes DVDs) to use this with a DVD writer, otherwise change the last line to
growisofs -dvd-compat -Z $DVD_WRITER -rdJ $SOURCE_DIR
but note that growisofs will only work with DVDs, not CDs. MS


Installation woes

I decided to have another go with Linux and see if I can fiddle with Wine to get my finite element engineering packages to run. I tried installing Ubuntu 6.10 from LXF88 but the display flickered during boot-up even though I used Start With Low Resolution support and used F4 to change the resolution to what my monitor and video card supported. I think the refresh frequency of my monitor is 50Hz but Ubuntu 6.10 and Fedora Core 6 both set it as 60Hz regardless of what resolution you choose.

Then I tried to install Ubuntu 6.06, which did support the display and installed it without a hassle. I then tried to install Wine from source and ./configure suggested installing 'flex' which suggested installing 'm4' and then 'Bison' needed to be installed. Following all of these Wine returned with an error message during make.

As I was not fully successful with Ubuntu, I tried installing Open SUSE 10.2 from the DVD of LXF89 but got the following error message halfway through the installation:

"error occurred while creating the catalog
Cd///?devices=/dev/hdc source rejected by the
user
Retry (yes) (no)"
Pressing Yes gave the message `error occurred dvd/// source rejected by the user'. I had been trying a dual boot installation with Windows XP already on the hard disk and am not sure if this is why the above error occurred.

Anvar Alizadeh


You are really going through a baptism of fire, but I'll try to address the various problems you have met. Most monitors handle a minimum 60Hz refresh rate, but you can edit the /etc/X11/xorg.conf file after installation to set it to suit your monitor. Look for the part that begins with Section "Monitor" and you'll see settings for HorizSync and VertRefresh. Change these to suit the specification of your monitor.

Most distros provide a large selection of software in their repositories and do not expect typical users to have to install software from source. As a result, the necessary tools are not installed by default. Ubuntu offers two approaches to your Wine problem. You could install the build-essential package, which installs all you need to build from source, including flex and m4.

The simpler alternative is to add WineHQ's own repository to your list of package sources, then you can install the latest version with the package manager. Run these commands to add the repository:
wget -q http://wine.budgetdedicated.com/apt/387EE263.gpg -O- | sudo apt-key add -
sudo wget http://wine.budgetdedicated.com/apt/sources.list.d/edgy.list -O /etc/apt/sources.list.d/winehq.list
The first command adds the repository's key to your list of trusted keys; the second adds the source list itself. This is for Ubuntu 6.10; for 6.06 change edgy to dapper in the second command.

Distro installers can occasionally get confused and fail to find the drive from which you are installing. This is usually when you have two optical drives; you boot the disc from one but it detects the other one and tries to load its data from that. In this case, the simplest solution is usually to boot from the other drive. If this is not possible, such as when the first device is a CD drive and you are using a DVD, temporarily disconnecting the first device will avoid this error. You don't need to physically remove the cable; most BIOSes provide an option to disable individual devices. This particular problem only affects installation; you can reconnect the drive once everything else is working. A similar problem sometimes occurs when trying to install from a USB-connected DVD drive.

It is also possible that you have a damaged disc. The easiest way to test it is to try booting it in another computer. You don't need to install to that computer, just boot up and see if the installer runs without the error. NB


Memory matters

I'm very aware that the amount of memory on my server could be a bottleneck. For a start, the server seems to be using swap space all the time. But I find it hard to work out just how much memory I actually need on the system to make it run efficiently. I could just buy all the RAM I can afford, but it seems there ought to be some way to better determine where the sweet spot for memory is.

Phandro


Your question, while apparently simple, really requires a lot of Linux understanding to answer. In the first place, don't be too concerned about the swap space usage. A default Linux system will practically always use swap space. Excessive use can be a problem, though.

For example, you may have the latest in multi- core processors on your box, but it is the utilisation that matters. For very data-heavy processes, unless your physical RAM is fast and copious enough, the server will spend most of its time thrashing the data around in and out of swap space, and not very much time actually processing any of it.

There are a number of Linux tools that can help you determine what is actually going on with your system: top and uptime are quite useful. As well as other information, uptime displays a triplet of numbers that shows the load average on your box for the last one, five and 15 minutes. What does the `load' number mean? Well, it is a magic number that shows the amount of work the box is doing. Higher numbers mean a lot of work; lower numbers, not so much. Actually, it represents an exponentially damped moving average of the total queue length for the CPU. But it is easier to think of it as a magic number.

But you can't take this number on its own and turn it into something useful. Your box may be very busy, but coping very well with the load. It's only if the number crawls up and grows that your box may be experiencing trouble. Running top will show the running processes and their CPU and memory utilisation. But as I have said before, high CPU utilisation isn't necessarily bad, and low utilisation isn't always good. The latter may indicate that the data is spending too long getting to and from the process, so low utilisation with a corresponding high load value is a bad sign that your I/O isn't fast enough (buy some WD Raptors and a good controller), or you don't have enough physical RAM.

By looking at a combination of top, uptime and free (which displays memory usage) you should be able to determine which is the case for you. If you can't wait for a busy time on the box to test it, you can always create some activity of your own. For example, Apache comes with a benchmarking tool, ApacheBench (the actual binary name is ab), which can simulate high demand on the server for you.
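The numbers that uptime and free report can also be read directly; the kernel exposes the load averages in /proc/loadavg, so a tiny snapshot script might look like this:

```shell
#!/bin/sh
# The first three fields of /proc/loadavg are the 1-, 5- and
# 15-minute load averages that uptime reports.
read one five fifteen rest < /proc/loadavg
echo "load: 1min=$one 5min=$five 15min=$fifteen"
free    # memory and swap usage summary, as from the free tool
```

Run it before and during a benchmark (such as an ApacheBench run) to see how load and free memory move together.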

Also, as a final tip, it is useful to check through the running processes on the box. Although Linux is quite good about running services only when they are needed, there is still some mileage to be had from killing off errant processes: print daemons, sound servers, X... even the HAL daemon isn't likely to be needed, but often runs by default.

If you really want to make the best use of memory, you should try recompiling binaries and taking a good look at the features you need and want. You can make surprising reductions in the memory usage of things like MySQL, Apache, PHP et al.

There is a lot more useful information online, and I would recommend the article on memory analysis by Lubos Lunak at http://ktown.kde.org/~seli/memory. NV


SSH hardening

For some tasks I want to be able to run a remote shell (SSH I guess) on my server. I'm nervous of running extra services on the box though, and wonder if it is really safe to leave an SSH server running. Also, as I know nothing about it, I wonder if you have any tips for making it more secure.

Greg


SSH is actually pretty secure by default, but of course, there are always ways to make it more secure. Most of these revolve around restricting the ways in which you can log in, the accounts you can log in to and the places you can log in from.

By default, SSH enables a simple password login. With this method, when you connect to the SSH server as a user, you are prompted for the password. But of course, passwords can be guessed, so there are other methods available. SSH also allows login through a trusted key pair. This involves generating a key on the client, and copying the public part of the key to the SSH server's authorized_keys store. This is a useful way to quickly connect without needing to remember a password, but you can also turn off the password option on the SSH server. First make a key and copy it to the server:
ssh-keygen -t dsa
scp ~/.ssh/id_dsa.pub servername:.ssh/authorized_keys2
This assumes that you are logging in with the same username on both boxes.

You'll need to edit the /etc/ssh/sshd_config file and change the line 'PasswordAuthentication yes' to 'PasswordAuthentication no'. Make sure you can log in with your key before you try this, especially on a remote server!
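Before you disable passwords, it's worth proving that key authentication works on its own; you can force it for a single connection like this:

```shell
# Refuse to fall back to password authentication for this one
# connection; if the command succeeds, the key login is working
# and it is safe to set PasswordAuthentication no on the server.
ssh -o PasswordAuthentication=no servername true
```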

While you have the file open, there are a couple of other tweaks to try. Find these two lines (they aren't together in the original):
 PermitRootLogin yes
 ...
 Protocol 2,1
and change them to:
 PermitRootLogin no
 Protocol 2
This prevents anyone from logging in directly as root. For root access, you will have to log in as a normal user and use su to become root. The simple reason for this is that instead of having just one password to crack (or key, in our super-hard example), any potential cracker will need to know two passwords and the name of a user account on the system; just a little bit harder to do.

The second line there forces the server and client to use the more secure version 2 protocol for SSH communications. I can't think of a client that doesn't support it, so set this option now!

In addition to forcing a user login, you may wish to restrict the individual users who can log in, since it is easy to guess some of the account names on any Linux box.
 AllowUsers eric jeff mike degville
A simple space-separated list will restrict the accounts that can be accessed. If you want to be really harsh, you can link the accounts to particular sources, by appending a domain name of the originating server (be careful with this, as access from some sources may not always appear to come from the same IP address):
 AllowUsers mike@linuxformat.com eric@*.ac.uk
That should keep the evildoers out of SSH at least. NV LXF


Devil of a job

Please help me! My name is Jack. I have a problem. My motherboard is an ASUS P4S800D with an SIS655FX chipset. I have two hard disks: the first, an IDE disk, with OpenSUSE 10.2, the second, a SATA disk with Windows. The installer of SUSE 10.2 detects only the IDE disk. How can I set up and mount the SATA disk in OpenSUSE 10.2? On the official site of SIS I find a driver, but I get a make error because it can't find scsi_request.h. Is this being caused by a problem in the kernel?
Jack


Hey, Jack! SATA continues to be a problem for many people. In our experience, the easiest way to fix the problem is to switch the drives into compatibility mode using your BIOS, then complete the installation and try switching it back. Many distros struggle to get installed on normal SATA drives, but then work just fine once they're installed, particularly after you've installed all the latest patches. You should also check to make sure you're not using software RAID, because that can also cause problems. As a last resort, try adding insmod=ide-generic to the installation boot options box. Good luck! PH


Video slide shows

I would like to make video CDs of some of my photos. At the moment I just want to get the photos on to video CD so that they can be played on a simple DVD player and TV. Later, I will want to add a soundtrack.

It appears that there are various tools out there, but I haven't found a clear description of how to perform this simple task. Using FFmpeg, for example, I can create a movie from my JPEG files that takes about 0.4 of a second to run. I'd like to be able to show each frame for three seconds (for example) but can't find a way to add a delay between frames. Convert looks promising, but I just get errors about mpeg2encode with it.

I'm using Ubuntu Dapper. Thanks in advance for any pointers.
Daudi


DVD would be better than video CD. Not only can you fit a lot more photos on one disc, but the quality is much higher. The main part of the process is much the same whether you're making DVDs or video CDs, although most of the tools are set up for DVD creation, so will need some tweaking to create video CDs.

The most straightforward way to put a slide show on to a disc is to use the slide show plugin in DigiKam or KPhotoAlbum (both programs use the same plugin) to create a DVD slide show from an album or selected photos. These are quite limited, as you can only adjust the length of time that an image appears for and the length of its fade, and these have to be the same for all images.

If you want more control, DVD-Slideshow (its homepage is at http://dvd-slideshow.sourceforge.net) is a better choice. This is a set of scripts to generate DVDs from images and sound. The main script, dvd-slideshow, uses a text file listing all images and effects to create a DVD VOB file. Use dir2slideshow to generate a DVD-Slideshow input file, which you can pass directly to dvd-slideshow or edit to change the timings or effects. Then use dvd-slideshow to create the slideshow and add music. You can use MPlayer to view the resulting VOB file to check it before putting it on to a disc.

Finally, dvd-menu will create (no surprises here) a DVD menu for one or more slide shows, and you have the option of calling dvdauthor to write everything to an ISO image ready for writing to a DVD. Assuming you have a directory called pics that you want to make into a slide show, the commands are as follows:
mkdir slideshow
dir2slideshow -o slideshow -t 5 -c 1 -n myslideshow pics
# edit myslideshow.txt if you want to change timings or effects
dvd-slideshow -a somemusic.ogg myslideshow.txt
dvd-menu -t "My slide show" -f myslideshow.xml -iso
This creates a slide show with each image shown for five seconds with a one-second fade, and writes it to an ISO file ready for burning to a DVD. It is also possible to generate a DVD with a single slide show that plays immediately, without going through a menu. The programs default to NTSC output; for a PAL DVD you should add the -p option to each command or put 'pal=1' in ~/.dvd-slideshowrc.

If you want to create a video CD-compatible MPEG, you can use FFmpeg to transcode the VOB file that you created with dvd-slideshow, like this:
ffmpeg -target pal-vcd -i dvdslide.vob vcdslide.mpg
NB


The missing link

I need to link one directory to another so that if a program asks for directory x, it is shown directory y instead. I've tried ln with various options, but it just keeps creating the link inside the target directory.

The reason for doing this is that I've just updated from OpenOffice.org 2.0 to OOo 2.1, which has created a new directory called /opt/openoffice.org2.1. When I click on a text document or spreadsheet in KDE it tries to look inside /opt/openoffice.org2.0, which no longer exists. If I cd into /opt and do

ln -s openoffice.org2.0 openoffice.org2.1
it creates the OpenOffice.org 2.0 symlink to within the 2.1 directory. I've tried everything but just cannot get it to work!
OnlyTheTony


There are two problems with the way you are using ln. The first is that the syntax is `ln -s source destination'. This one got me too: for some time I had to think twice, having first used links on an OS that used the opposite order. The arguments should go in the same order as they do when you're using cp and mv: I find that helps me remember.

The other problem is that if the destination given is an existing directory, ln thinks that you want to create the link inside that directory. This is also consistent with cp and mv, which copy or move into a directory if it is given as the destination. Remove the destination directory and ln will create the link as you need.

ln -s openoffice.org2.1 /opt/openoffice.org2.0
Note that with symlinks, the target path is stored relative to the directory containing the link, so even though this command is not executed in the /opt directory (and no file or directory called openoffice.org2.1 exists relative to where you run it) the ln command will still work.
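That relative-target behaviour is easy to see somewhere harmless, such as a scratch directory:

```shell
#!/bin/sh
# The link target "real" is resolved relative to the directory
# containing the link, not the directory ln was run from.
mkdir -p /tmp/linkdemo/real
echo hello > /tmp/linkdemo/real/file
ln -s real /tmp/linkdemo/alias    # works from any directory
cat /tmp/linkdemo/alias/file      # prints: hello
rm -rf /tmp/linkdemo
```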

Alternatively, you could go into the file associations section of the KDE Control Centre and fix it to call the ooffice2 programs with the correct path. NB


How many is too many?

I'm running Ubuntu 6.10 64-bit on AMD64 and I do a ton of audio encoding. I set up a small test to see what was more effective: encoding four directories worth of FLAC files (four files to each directory, all the same size) to OGG in serial or in parallel. I wrote two Bash scripts to attempt to measure the performance. The first script takes around nine minutes to execute (just over two minutes per directory) while the second script also takes roughly nine minutes, even though each folder contains nine minutes' worth of encoding.

I'm sure that there's a point at which running all of the tasks in parallel runs slower than running them one at a time. Watching the output from top shows four instances of flac running, each taking approximately 20% of the CPU's capacity when running in parallel. While running in serial, a single flac process uses much more CPU power.

Are there any benchmarks or guidelines to follow? Without further testing I'm left wondering whether I could be saving a lot of my time one way or the other when I need to encode tons of files.
Paul Hoch


There is some overhead in running tasks in parallel, because of the extra task switching and memory management involved, but this is insignificant for small numbers of tasks. Had you tried to run 20 or 30 encoding processes in parallel you would have noticed a reduction in speed, especially if you started to use swap space.

Encoding files from hard disk to hard disk places a heavy load on the CPU and memory while demanding little of your disks; this is what the techies call a `compute-bound' or `CPU-bound' task. On the other hand, ripping data from a CD or DVD is largely dependent on the speed of the transfer while asking little of the CPU; this is called `IO-bound'. So running two CPU-bound, or two IO-bound, processes in parallel is likely to have little benefit over running them in serial, but running one of each in parallel will give a large speed benefit.

If the audio that you're encoding is coming from optical discs, or any other source that gives relatively slow transfer speeds, you will see a great improvement in parallelling the processes, as in the following:
Rip track 1
Encode track 1 in the background
Rip track 2
There are a number of CD ripper/encoders that do just this, including my favourites: Grip (www.nostatic.org/grip) for GUI operation; and Abcde (www.hispalinux.es/~data/abcde.php) for console use. If your audio files are already on your hard disk, you may as well keep the number of encoding processes low, but be sure to use at least two; a single process will always be subject to interruption.

The only really useful benchmark is one that closely mirrors your own usage, which normally means running your own tasks and timing them, as you have already done. Bear in mind that your encoding will take place in the background, so unless you do a huge amount, or each job is urgent, you could easily spend more time on benchmarking than you would save by improving your machine's performance. You have already established that there is little discernible difference for a small number of processes. Higher numbers will not improve things, unless you're running multiple multi-core processors. NB
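The rip-then-encode-in-the-background pattern described above is simple to express in the shell. This is only a sketch; cdparanoia and oggenc here stand in for whichever ripper and encoder you actually use:

```shell
#!/bin/sh
# While track N encodes (CPU-bound) in the background, track
# N+1 is already being ripped (IO-bound) in the foreground.
for n in 1 2 3 4; do
    cdparanoia "$n" "track$n.wav"   # rip one track from the CD
    oggenc -Q "track$n.wav" &       # encode it in the background
done
wait                                # let the final encodes finish
```

The trailing & launches each encode as a background job, and wait at the end stops the script exiting before the last encodes complete.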


Missing GUI

I have a stable Linux system that runs my desktop and small home/office LAN. I keep a few spare partitions on my hard disk to try out new distros, and from curiosity I installed Fedora Core 6 [coverdisc, LXF88]. The main challenge I always have to overcome in such experiments is getting my PCI wireless card to work. It uses the rather infamous Broadcom BCM4318 chipset and is not at all Linux-friendly.

Following tips and advice I used the following three steps to activate the card. First, I installed the drivers using NdisWrapper. Second, I disabled the BCM43xx Fedora driver. Third, following instructions on SourceForge, I tweaked two network files [modprobe.conf and ifcfg-eth0]. All of that enabled my eth0 interface to work like wlan0 in other distros. The card starts from the command line like so:

/etc/init.d/network restart
To finish the job I activated network manager from the system menu on the KDE desktop. I brought up the network configuration box to do a few last tweaks, but it was empty. It shows no NIC interface of any kind and yet the whole system is running perfectly. I can surf in technicolour and multimedia splendour on broadband. How do I get the GUI controls to reflect what has already been done in the murky depths of the system using the command line?
Jim Macfarlane


Although you have taken a somewhat unorthodox route to enable your wireless networking, it works ­ well done! Did you set up the NdisWrapper alias by using these commands as root?
ndiswrapper -ma
echo "alias wlan0 ndiswrapper" >> /etc/modprobe.conf
Most importantly, after doing all that, did you use the Fedora system-config-network tool to create a new network interface for the device? If you've done all that and Network Manager still isn't working, you could try starting it at boot time like this (again, as root user):
chkconfig NetworkManager on
chkconfig NetworkManagerDispatcher on
Network Manager is actually quite a new tool, and is under constant development. You may find your problems just disappear in Fedora 7, which should be out in April. PH


Make it bigger!

I've just installed Fedora Core 6. How do I make the font size larger on the desktop/system?
Said Farah


Ah! An easy question. I like easy questions. The font size in Fedora is set in the System > Administration menu, under the Fonts menu item. When the Fonts Preferences box appears, click on Details in the bottom-right corner, then look for the resolution in the top-left corner of the new window. Increasing that number makes fonts bigger, and will also make buttons, windows, menus and other things larger so that the fonts fit properly. Be sure to write down the original resolution, though, just in case you want to get back to it in the future. PH


How full is your MAC?

I have recently installed Fedora Core 6 in dual-boot mode on my HP Pavilion t3065 (Intel Pentium 4 3.4GHz with 1GB of RAM) from the Fedora Linux Format Special. All went well until I tried to connect to my Belkin wireless G modem (802.11g - model F5D7632uk ver 1000), and it was only after much head scratching and searching on the internet that I discovered I needed a wireless driver. Having interrogated my network controller, I found that my chipset is as follows:

Intersil Corporation ISL3890 [Prism GT / Prism Duette] / ISL3886 [Prism Javelin / Prism Xbow] (rev 01)
Subsystem: Accton Technology Corporation WN4201B
Flags: bus master, medium devsel, latency 64, IRQ 169
Memory at cfffc000 (32-bit, non-prefetchable) Size 8K
Capabilities: (dc) Power management version 1

Having had a look at various sources on the web on the subject of connectivity with this chipset (including www.prism54.org), I am now confused. Do I need a FullMAC driver or an Islsm driver? The listed drivers cover one or the other ISL variant but not both together! This begs the question: does it matter which one I choose?

Assuming that I can get Linux to talk to my wireless modem, does Fedora or any other distro support WPA-PSK security, or is 128-bit encryption the best that is on offer for now? How would I go about implementing WPA-PSK on my PC?
Jonathan Peace


A few years ago, Prism released a new version of its chipset that offloaded some of the work to the host computer (in other words, it was a cheaper, half-complete design rather like a Winmodem). This became known as the SoftMAC design, and it broke compatibility with the Prism54 drivers until the Islsm drivers were developed. The Islsm driver works with both SoftMAC and the original FullMAC chips. The FullMAC driver works better with FullMAC devices, but not at all with SoftMAC ones. Unfortunately, it is difficult to tell which you have - the ISL3890 works with the FullMAC driver but the ISL3886 needs Islsm.

The FullMAC driver is built into the standard kernel for Fedora Core 6; you only need to install the firmware file, which can be downloaded from http://prism54.org/fullmac.html. You can test it by opening a terminal and typing 'su' (give root password when asked), then:
modprobe prism54
lsmod | grep prism54
If the final command gives an output, the driver is present and loaded - so try to connect to your modem. You should disable all encryption (WEP and WPA) while testing at first - get the connection working then sort out the encryption (until it works you have nothing to encrypt anyway). If the Prism54 driver fails to connect, try the Islsm driver. This also needs a firmware file, but a different one, which you can get from http://prism54.org/newdrivers.html. Comprehensive installation instructions are included in the package.

WPA-PSK encryption is available for Linux, in the form of wpa_supplicant (http://hostap.epitest.fi/wpa_supplicant). Fedora Core includes packages for this - you need to install wpa_supplicant and wpa_supplicant-gui. Only the first is essential, but the second provides a GUI for configuration, which saves reading and editing configuration files. NV
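If you do end up configuring it by hand, a minimal WPA-PSK entry in wpa_supplicant.conf is only a few lines (the SSID and passphrase below are placeholders for your own):

```
# Minimal wpa_supplicant.conf entry for a WPA-PSK network (illustrative values)
network={
    ssid="myhomenet"
    psk="my secret passphrase"
}
```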


Nvidia strain

I have an AMD64 3000+ CPU, with 1GB of RAM, an Nvidia GeForce PCI-express graphics card and a 320GB SATA HDD. When I install Fedora Core 6, everything seems to go well until it gets to `starting udev [OK]', then the screen goes black. After that, the hard drive seems to continue working, but I can't see anything on the screen, which then delivers this message: `Mode not supported'.

I thought at first it might be the graphics card, but I then installed Elive 0.5, and everything there works. I tried removing the card and using the VIA in-built card, nothing; used a different screen (on the off-chance), nothing. I tried to boot in all of the options given at the Fedora screen by pressing `e' but nothing worked. I tried to force the screen resolution (linux resolution=1024x768) and I tried using linux noprobe.

The only other error messages that might have some bearing that I can see are: 'PCI: BIOS Bug: MCFG area at e0000000 is not E820-reserved' and 'PCI: Not using MMCONFIG'. I don't know if that has anything to do with it, as they don't hinder Elive from working. Can you help me get Fedora to work?
Daryl


It sounds to me as though Fedora is trying to use its internal Nvidia driver, and it's struggling to cope with your screen resolution. The quick fix for this problem is to switch over to the VESA driver, which ought to work on pretty much any graphics configuration. If you open up the file /etc/X11/xorg.conf as root, look for this line:
Driver "nv"
Change that to read vesa rather than nv, then reboot. That should at least give you a working Fedora system.
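If you would rather script the edit, sed can make the swap. The sketch below rehearses it on a scratch file so you can see the effect; point it at /etc/X11/xorg.conf (as root, and after taking a backup copy) once you are happy with it.

```shell
# Rehearse the driver swap on a scratch copy before touching the real file.
printf '    Driver "nv"\n' > /tmp/xorg.conf.test
sed --in-place 's/Driver "nv"/Driver "vesa"/' /tmp/xorg.conf.test
grep Driver /tmp/xorg.conf.test    # now reads: Driver "vesa"
```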

Now, if you find that VESA isn't good enough for your day-to-day work, or if you want to try AIGLX or any 3D games, your best bet is to install the official Nvidia driver from www.nvidia.com. This is a great deal more stable than the driver that comes with Fedora, and should solve your problem. NB


Missing tools

I need help with post-installation configuration of Evolution mail on my Ubuntu system. My home computer runs a dual-boot system with Windows XP on one partition and Ubuntu Linux on the other. I have an Ethernet connection to a broadband hub. Mozilla works beautifully under the Ubuntu OS - indeed, I have successfully used it for upgrades, Google searches etc.

So Ubuntu has excellent internet access. Evolution mail, however, needs to be configured properly. An old issue of Linux Format describes a configuration wizard but I have been unable to invoke it. A Google search about Evolution gave a hit that revealed a document about setting up Evolution. At first that seemed to be very helpful but it required the use of the Tools menu on the menu bar of Evolution - which my version of Evolution does not have. How can I configure my Evolution without a Tools menu?
John Cleave


As you have not stated which versions of Evolution and Ubuntu you are using, I am going to assume you are using the latest Evolution, version 2.6. Now, Evolution 2.0 has the Tools link in the menu bar, but Evolution 2.4 and 2.6 don't have it.

The first time you start Evolution, you should see the First-Run Assistant. In your case, it seems that didn't work and Evolution has hidden the option from you now. To invoke First-Run Assistant, you should delete the folder .evolution (note the dot) in your home directory then type evolution from the command line. This will make Evolution think it is running for the first time and it will show the First-Run Assistant, which you can then use to configure the software.

If you don't want to take the command line option, just click on Edit > Preferences, then Mail Accounts > Add to create a new mail account. NV


DeLi Linux install problem

It was good to see DeLi Linux included on the coverdisc [LXF86], as I have a 486 PC that I thought would be good to learn on. Burning the CD and the boot floppy went well, as did the install until a pane appeared asking `Where is delibase.tgz?' Part of the pane states `I can scan for CD-Rom drives. Should I try to do so?'.

Clicking on Yes gives another pane asking me to `Enter the device which contains the DeLi Linux Base Package delibase.tgz'. No matter what I put in there I get the message `ERROR! Failed to mount the source device. Exiting ...' and that's it. I had copied that file to C:\, because that was one of the locations stated in the original pane `Where is... etc' - however, when I try to enter C:\ I can only get C:#.
Mary Perrin


It appears that your installer was unable to detect your CD drive. This is possible if it is not a standard ATAPI IDE device. The fact that you needed to create a boot disk indicates that this may be the case. Your inability to type `C:\' is almost certainly caused by an incorrect keymap, which maps symbols differently to your keyboard. The \ character is there, but you'd have to keep pressing keys until you found the right one. This is similar to the problems I have finding # and @ when I boot a Live CD that insists on using a US keymap with my UK keyboard. You can usually avoid playing hunt the key by choosing the correct keyboard earlier in the installation, or you'll experience the same problem when you try to use DeLi Linux.

Good luck with DeLi Linux: running anything on a 486 is going to seem like hard work. You may also like to try out Damn Small Linux - this is another distro designed to be lightweight, and you'll find it on the coverdisc of LXF89. MS


Disappearing buttons

I am using Ubuntu 6.06 from LXF83 on a Compaq Presario SR1720NX and am very new to this. When I try to add an Epson Stylus CX4800 printer using Gnome CUPS, the bottom of the screen, which should show Cancel, Back and Forward or Apply buttons, is missing. I can use the Enter key instead of Forward, but on screen 3 I can't find a way to activate the Apply button.
Jim Laprad


This would appear to be a problem with your screen resolution. If Ubuntu's installer was unable to get accurate information about your graphics card and monitor, it would have defaulted to a safe 640x480 resolution. This is too small to display the full Add Printer window. A quick fix is to hold down the Alt key, click in the middle of the window and drag it upwards to expose the buttons. Alt+clicking means you can drag from any part of the window, so you can move it upwards even if that means moving the titlebar off the screen. This will allow you to add your printer, but does not fix the cause.

To change the screen resolution to something more suitable, use Preferences > Screen Resolution from the System menu. This should offer all the resolutions that are suitable for your combination of graphics hardware and display. If only 640x480 is offered, your hardware was not identified during installation. The Device Manager, from the System > Administration menu, will show you if your graphics card was identified correctly - it should be an ATI Radeon XPress 200 IGP on your computer. To change the settings for graphics card or monitor, you should run dexconf to probe the hardware and write a configuration file. It is wise to back up the existing configuration file first, so run this in a terminal:
cp /etc/X11/xorg.conf ~
sudo dexconf
This will generate a new configuration file in /etc/X11/xorg.conf after making a copy of the original in your home directory. Once you have done this, you will need to restart the X server. This can be done from the command line, but as you're a new user, restarting the computer is probably the easiest way to do it.

If 640x480 is still the only resolution available to you, you will need to edit the xorg.conf file. Without seeing your existing configuration, it is impossible to say what needs changing. If you get this far and still cannot get past 640x480, I recommend you ask on our Help forum at www.linuxformat.com, including the contents of /etc/X11/xorg.conf, the output from running lspci -v and details of your monitor. NB


Timing tasks

I am having some minor trouble with Cron jobs on my SUSE 9.3 system. Placing scripts in the cron.hourly, cron.daily and cron.weekly folders works fine, but how do I control when the files in those directories are executed? Can I set whether weekly jobs are done every Sunday or Friday and whether daily jobs are done at noon or midnight? I have tried to track down the way this works but it isn't clear.

As best I can tell, there is only one Cron job scheduled in the crontabs file that runs every few minutes for all of the folder-based Cron jobs. That Cron job seems to call a script that looks in all of the cron.* directories, keeps track of successes and failures, and somehow keeps track of which jobs need to be done when.


SUSE 9.3 does this slightly differently from some other distros. Instead of running the contents of these directories at a specific time, it runs them according to when they were last run. You've got most of the way to discovering this yourself: the single line in /etc/crontab calls the run-crons script every 15 minutes. This looks for marker files associated with each of the /etc/cron.* directories in /var/spool/cron/lastrun. If the marker file is more than an hour/day/week old, it runs the scripts in the directory and updates the timestamp on the marker. If no marker file is present, it runs the scripts and then creates one.
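The heart of that check can be sketched in a few lines of shell. This is an illustration of the logic, not SUSE's actual run-crons script, though the marker path is the one mentioned above:

```shell
# Illustration: a directory's scripts are due when its marker file is
# missing or older than the period (in minutes). run-crons applies a
# test like this to each /etc/cron.* directory every 15 minutes.
marker_is_stale() {   # $1 = marker file, $2 = maximum age in minutes
    [ ! -f "$1" ] && return 0                      # no marker yet: run now
    [ -n "$(find "$1" -mmin "+$2" 2>/dev/null)" ]  # prints the file if older
}

if marker_is_stale /var/spool/cron/lastrun/cron.daily 1440; then
    echo "cron.daily is due"
fi
```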

This all means that instead of, say, running the daily scripts at 4:30 every morning when system load is low, it runs them a day after they were last run. You can force a particular time by altering the timestamp on the files in /var/spool/cron/lastrun. This script will change the timestamp of each of the monthly, weekly and daily files to 4:30 am while leaving the date unchanged (otherwise you'd never run the weekly or monthly scripts).
#!/bin/sh
cd /var/spool/cron/lastrun
for i in daily weekly monthly
do
	if [ -f cron.$i ]
	then
		touch -t $(date -r cron.$i +%Y%m%d)0430 cron.$i
	fi
done
This uses the date command to extract the file's current date in YYYYMMDD format, adds the time you want (0430) and passes this to the touch command to update the file's timestamp. You can change the day for weekly scripts in a similar way.
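You can watch the date/touch combination work on a scratch file (this relies on GNU date's -r option, as the script above does):

```shell
# Give a scratch file a known timestamp, then move only its time to 04:30.
f=$(mktemp)
touch -t 202401150930 "$f"                    # mtime: 15 Jan 2024, 09:30
touch -t "$(date -r "$f" +%Y%m%d)0430" "$f"   # same date, time now 04:30
newtime=$(date -r "$f" +%H%M)
echo "$newtime"                               # prints 0430
rm -f "$f"
```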

While this will switch the runtimes to a time of day more suited to you, bear in mind that if the scripts cannot run at that time - because the computer is switched off at 4:30, say - the marker timestamps will drift to whenever the scripts actually ran. You could correct for this automatically by setting up a separate task in /etc/crontab to run this script at 0400 every day. NV


Gentoo printing

I'm getting on surprisingly well with the installation I made of Gentoo, but I can't get my USB printer to print. I've gone through the Gentoo Printing Guide and other USB documentation most carefully. I've checked (and triple-checked) my kernel config options and I'm sure I've included everything I need. I've tried compiling with the USB parameters compiled into the kernel or as loadable modules and neither works. Neither does genkernel, so I don't believe it's a kernel issue.

The printer is a Samsung ML-1210. It's a discontinued host-based printer, but it serves my needs adequately and has always worked fine with Linux. And it prints fine from Ubuntu Edgy from another partition on the same machine using the same USB port, so neither CUPS per se nor the hardware is the problem.

If I open the Gnome Print Manager app, the printer is autodetected and the wizard offers me the same CUPS driver as other distros, but when I go to print a test page, nothing comes out the other end. The same happens when I use OpenOffice.org. OOo seems to think it has printed a document, but nothing appears.

Doing lsusb shows:

'Bus 002 Device 003: ID 04e8:300c Samsung Electronics Co., Ltd ML-1210 Printer'.

I checked /var/log/cups/error_log, and it showed nothing untoward that I can see.
Spotted cat, from the LXF forums


The first thing to do when encountering CUPS problems is to turn up the logging level: edit /etc/cups/cupsd.conf, changing LogLevel from 'info' to 'debug', then restart CUPS.

In this case, there is a clue in the logs you supplied. You are using GPL Ghostscript, which doesn't properly support the binary drivers needed by a GDI printer (aka WinPrinter) like your Samsung. So unmerge ghostscript-gpl and emerge ghostscript-esp, which has better printer support, like this:
emerge --unmerge ghostscript-gpl
emerge --oneshot ghostscript-esp
It is also probable you need the openslp package, even though this is supposed to be an optional dependency of CUPS. SLP (Service Locator Protocol) is useful for other programs too, so add it to your USE flags in /etc/make.conf. It is also worth adding foomaticdb, which doesn't affect CUPS directly but increases the level of printer support for some programs.
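The make.conf change is a one-line edit. The flag names below are the ones used in the Portage tree of the time (slp for OpenSLP support, foomaticdb for the Foomatic database) - check /usr/portage/profiles/use.desc if emerge doesn't recognise them:

```
# /etc/make.conf - append to your existing USE line rather than replacing it
USE="... your existing flags ... slp foomaticdb"
```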

Now rebuild any packages that make use of your changed flags, including CUPS, with
emerge --newuse --deep --verbose --ask world
This will display a list of packages that will be updated or installed thanks to your changed USE flags, which should include CUPS and OpenSLP. Press Enter to install them and restart CUPS when it has finished.

USE flags are an important part of Gentoo and they are all described in /usr/portage/profiles/use.desc and /usr/portage/profiles/use.local.desc. Or you may find it easier to emerge Profuse and search, browse or set them in a GUI. NB


Wi-Fi woes

I was wondering if you could help with a Wi-Fi/NdisWrapper problem. I'm trying to get a Belkin card to work under NdisWrapper using the rt2500 Windows XP driver. The online instructions are great and I detected the card, worked out what driver I needed and so on. I've installed NdisWrapper, installed the XP driver and when I type ndiswrapper -l it shows the driver installed and hardware present.

I then did modprobe to load NdisWrapper into the kernel, configured the wireless LAN settings and it all worked fine. When I rebooted, of course, it forgot everything and I now can't get it to work. NdisWrapper still shows the driver installed and hardware present, but the lights are off on the card and when I try to configure it and get an IP using DHCP it says 'no link present check cable'.

I've rerun modprobe ndiswrapper, and had the card out and back in again, but the card still doesn't light up.
Andrew Wood


This sort of problem is not uncommon with NdisWrapper, but it should not affect you. There is no need to use NdisWrapper with an rt2500 wireless card, because it should only be used when there is no Linux driver for the card (running Windows code as root is not something you should do if you can avoid it).

Linux kernel drivers for the rt2500 chipset are available from http://rt2x00.serialmonkey.com and http://sourceforge.net/projects/rt2400. Don't worry about the 2400 in the name - the same project produces drivers for the rt2400 (802.11b) and rt2500 (802.11g) chipsets. These are semi-official drivers in that they are based on the original closed source drivers from Ralink, which the company was subsequently encouraged to release under the GPL. As well as the drivers themselves, the project includes a GUI for wireless scanning and configuration.

Some distros, such as Debian, include the drivers in their repositories, while with others you need to build from source. Without knowing your distro it is hard to give specific installation advice, but if you want to install from source, you will need the kernel sources installed. These are usually in a package called something like kernel-sources, linux-sources or kernel-devel. Make sure you install the package with the same version as your running kernel. As with all external kernel modules, if you ever upgrade your kernel you will need to reinstall the module. Because you may not have internet access until you do, I'd advise you to keep a copy of the source tarball or installation package somewhere safe.

If you insist on using NdisWrapper, it looks like you need to run ndiswrapper -m to set up an alias for wlan0 in the NdisWrapper configuration. This forces NdisWrapper to load the module and driver. MS


House clearance

After installing various distros I have a cluttered /home/username folder and a menu full of unusable program entries. How can I delete these dead entries and sort the files into folders by extension, for example putting all GIF, PNG and JPEG files into one directory? I would also need to deal with duplicate files. This is in preparation for reinstalling Ubuntu.
AJB2K3, from the LXF forums


I would rename the /home/username folder before installation, then only copy over the files you need. This is probably easier than trying to clean out the detritus from a live home directory. The best program I have found for identifying and removing redundant files is Kleansweep, from http://linux.bydg.org/~yogin.

Finding duplicate files is best done with Fdupes from http://netdial.caribe.net/~adrian2/fdupes.html. Use it like this:
fdupes --recurse ~
fdupes --recurse --omitfirst ~ | xargs rm
The first line will show all duplicate files; the second will remove all but the first occurrence of each file - use this with care, and be aware that the xargs step will mishandle filenames containing spaces (fdupes' interactive --delete option is a safer alternative).

Sorting files by name is best done with the find command. You can move the files you mention with
mkdir pics
find ~ \( -iname '*.jpg' -o -iname '*.png' -o -iname '*.gif' \) -exec mv "{}" pics ';'
NB


Linux on a stick

I have a 2GB USB stick on which I have installed Slax Popcorn Edition. I can easily boot my computer from the stick and save all my changes to the system. Once in a while I run into a system where I would really need to boot it from the USB stick but rebooting is not possible, and because the host system is configured 'tight' I can't use Qemu.

I have been trying to find the solution to this problem. I tried VMplayer, Qemu and Moka to no avail: there is always something missing. My ultimate solution would be VMplayer installed in the USB stick with the OS image, but I haven't found a way to do this.

Is there a solution on the market that would allow me to run my own OS from the USB stick regardless of the host machine?
Virtaava


There are a number of reasons why you may not be able to boot a USB Flash device on some hardware. Some computers are incapable of booting from USB devices, although these are thankfully few now. Another scenario, which you seem to be experiencing, is that the owner of the computer has configured the BIOS to not boot from USB. If this is the case, trying to circumvent such restrictions is usually wrong and often illegal, unless you have the owner's permission. If you do have the go-ahead, you can often use a bootable CD to start the boot before passing control to the USB device; the Slax website (www.slax.org) contains just such a CD image.

Another obstacle to booting from USB devices is that there are at least three ways of doing this. The device can be set up to boot as if it were a floppy disc, Zip disc or hard disk; the Slax USB installer appears to use the first option.

Not all BIOSes can boot all three types, so you may need more than one USB stick. Damn Small Linux (DSL, reviewed last month) has a USB installer capable of creating either a USB-ZIP or USB-HDD-style bootable device, so it may be worth investigating. My laptop will not boot the Slax image but will boot a DSL installation on the same USB key.

Some computers will not boot a USB Flash device from a partition larger than 256MB, so you should partition your drive with a 256MB partition for the OS and the rest for your data.

Your VMware solution is ingenious, as it removes the need to reboot, but VMplayer needs files to be installed on the host operating system. Moka would appear to avoid that need, but it works by temporarily installing files to the host Windows system, so needs to be run as an administrator.

If the configuration of the computer is stopping you from booting a USB device, you should accept that or ask the owner to change it. If it is the way the computer boots from USB devices causing your problem, try a different distro. Mandriva has just announced Mandriva Flash, a complete desktop on a 2GB USB key. I haven't tried it yet - but you can find more information at www.mandriva.com/linux/2007/flash. NB


OnTheGo - stalled

I am trying to create an 'OnTheGo' disk from the Live distro version of SimplyMepis 6.0, but the disk selection box remains blank with no options offered. I have tried:

* Booting with the USB Flash drive in place, then mounting it.

* Inserting it after the computer has booted, then mounting it.

* Logging on as both 'demo' and 'root'.

* Both an Advent 2GB and a Huke 512MB USB2 drive.

I know that the drive has been successfully mounted because I am able to save files to it - I have dragged and dropped the selection of background pictures supplied, and they are still there after a hard reboot. My computer is about six years old; it's a Pentium 3 with Windows 98SE installed and a USB2 PCI card as an upgrade. My only experience with Linux is with the Live distros on magazine coverdiscs over the past few months. As a Linux newbie I am at a loss as to what else to try.
D Thompson


You have to be logged in as root to set up OnTheGo, and the USB device must not be mounted. After logging in as root, plug in the device. If the KDE dialog pops up asking you what you want to do, select Do Nothing. If the disc automounts, use KwikDisk from the Kicker panel to unmount it or type umount /dev/sda1 in a terminal. Do not use the Safely Remove option from the disc's icon as this also removes the device's node in /dev, rendering it unavailable to the installer.

Now run Mepis Utilities - select the option to create an OnTheGo disc and your drive should be available, most likely as sda. Once the process is complete, remove the USB disc (there's no need to unmount it) and select Log Out from the K menu, followed by End Current Session. When the login screen appears, plug in the USB disc, wait ten seconds for it to be detected and log in with a username and password of 'onthego'. If you created OnTheGo with encryption, you will be asked for the encryption password later.

The OnTheGo disc only contains your personal data, which can be encrypted; you still need to boot from the Mepis CD. On the other hand, you won't run into any of the problems booting from a USB device mentioned in Linux On A Stick, and you can copy the .onthego.iso file to a different USB disc if you wish. NB


Linux plans

I am looking at implementing Moodle as a course management system (ultimately with a web hosting service that already has Linux, Apache and MySQL). But is there a version of Linux that is best to start with? Moodle permits a Windows install but I think it is best to go all the way and do it right.
Lee


Is there a version of Linux best to start with? I guess it depends on your preferences. As a Debian user, I would say try Debian, as it's very stable and easy to install. If you are a beginner I would opt for Ubuntu (which is based on Debian). The latest version is Ubuntu 6.06 LTS Server. I have experience with Debian and I can say it would take you 30 minutes at most to install a Debian server from the moment you insert the CD and boot from it. Debian's package administration is also extremely simple, using the dselect command. SL


Batch editing

I need to grep for a particular 'string' in a file and remove the entire line where the occurrence of the string is found. I want it to work across a collection of files. Can you help?
Goutham Vutharkar


It is possible to use grep for this: grep -v string file will output all lines that do not contain the string. But sed is a more suitable tool for batch editing.
sed --in-place '/some string/d' myfile
will delete all lines containing 'some string'. To process a collection of files, use a for loop or find - these give you control over which files are edited and, with find, let you recurse into subdirectories. One of these commands will do it:
for f in *.txt; do sed --in-place '/some string/d' "$f"; done
find -name '*.txt' -exec sed --in-place=.bak '/some string/d' "{}" ';'
Adding =.bak in the latter example makes sed save a backup of the original file before modifying it. NB


VNC please!

I connect to my home server using VNC (not over SSH yet!). However, it doesn't bring up my 'start' bar on KDE and I automatically log in as the person who started the VNC server (not tested with root!).

I would like my system (Slackware 10.2) to start VNC on boot so I can vnc to the XDM/KDE login screen. My init is currently set to level 4. Any ideas, hints or advice on better software? My server doesn't have a monitor.
psykx, from the LXF forums


Here's what you need to do to configure a VNC server. Note: the VNC server must be running, and it must be configured to run your preferred window manager. You can do this by editing the file $HOME/.vnc/xstartup to call your preferred window manager. Use startkde & for KDE, gnome-session & for Gnome or fvwm2 & for Fvwm2. Also, make sure you have run vncpasswd to create the password file $HOME/.vnc/passwd.

Red Hat provides an easy way to start up the VNC desktop at boot time. Use linuxconf to set the vncserver boot script (in /etc/init.d/vncserver) to come up at boot. The default bootscript, however, doesn't quite give the flexibility that I'd prefer. Edit /etc/init.d/vncserver, looking for the line that says
su - ${display##*:} -c \"cd && [ -f .vnc/passwd ] && vncserver :${display%%:*}\"
Change it to look like this:
su - ${display##*:} -c \"cd && [ -f .vnc/passwd ] && vncserver ${ARGS} :${display%%:*}\"
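The ${display...} expansions in that line are standard shell parameter stripping; given one 'display:user' pair from VNCSERVERS, they separate the two halves:

```shell
# How the bootscript splits a "display:user" entry from VNCSERVERS.
display="1:jdimpson"
num=${display%%:*}    # strip from the first ':' to the end -> "1"
user=${display##*:}   # strip from the start to the last ':' -> "jdimpson"
echo "$num $user"     # prints: 1 jdimpson
```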
Then edit /etc/sysconfig/vncservers to this:
# The VNCSERVERS variable is a list of
# display:user pairs.
# Uncomment the line below to start a VNC
# server on display :1 as user 'myusername'
# (adjust this to your own).
# You will also need to set a VNC password;
# run 'man vncpasswd' to see how to do that.
# DO NOT RUN THIS SERVICE if your local
# area network is untrusted! For a secure
# way of using VNC, see .
VNCSERVERS="1:jdimpson"
ARGS="-geometry 1024x768 -alwaysshared"
Change the value 1024x768 in ARGS to represent the size of your actual X desktop. Add any other VNC server arguments that you wish to this ARGS variable. Also change jdimpson in VNCSERVERS to whatever user you wish to run the VNC desktop.

The value 1 in VNCSERVERS makes the VNC server run as display 1. You can have additional desktops come up like this:
VNCSERVERS="1:jdimpson 2:phred 3:sysadmin"
On a Red Hat system, make sure the VNC server is running by executing this:
/etc/init.d/vncserver start
At this point, you can connect to the VNC desktop using any VNC client. SL


Rather one-dimensional

I have successfully installed Mandriva Linux 2007 and am trying to enable 3D desktop effects. When I click on the 3D icon under Configure Your Computer/Hardware, everything is greyed out, with a message at the top saying 'Your System does not support 3D desktop effects'.

I have an Nvidia GeForce 6800GT, which ran perfectly with Mandriva 2006. What can I do to get the 3D desktop working?
Zachary, from the LXF forums


The most likely cause of this is that you are using the free nv driver for your graphics card. This driver does not support any sort of 3D acceleration - you need Nvidia's own drivers for that. These can be downloaded from www.nvidia.com as a single file that you run to install them. However, you will need several other packages installed before you can do this. At the very least you will need the kernel sources to match your running kernel. Mandriva no longer includes these on its DVDs, so you will need to add Mandriva's online repository to the Mandriva Control Center before you can install it.

You may also need a compiler installed. The Nvidia installer comes with precompiled modules for a few kernel variants, but compiles them on the fly for others. Once the drivers are installed, you will have to edit your X configuration to use the new drivers. The Nvidia installer requires that you do all of this without X running, working entirely from a virtual console.

Fortunately, there is a much easier way. The Penguin Liberation Front (PLF) is the "official unofficial" repository for Mandriva, containing a number of non-free (as in speech) packages and others that cannot be included in the main distro because of legal complications, such as libdvdcss, needed to watch encrypted DVDs. The first step to easy Mandriva software installation is to add this and the official Mandriva repositories to your system. Go to http://easyurpmi.zarb.org and select suitable mirrors for the Mandriva and PLF sources; those closest to you are usually best. Click on Proceed and it will display a screed of text to type into a terminal, but even that is easy. Open a terminal from the Mandriva menu with System > Terminals > Konsole and type su to become root, then drag your mouse over the text in the browser so that all of the text in the box, and nothing else, is highlighted.

Now place the mouse over the terminal window, press its middle button to paste in the highlighted text and press Enter. You'll need to be online to do this and it will take a few minutes as it downloads lists of available packages.

Now fire up the Mandriva Control Center (System > Configuration > Configure Your Computer), go to the software section and type 'nvidia' in the search box. Select the package (it is currently nvidia-8774-4plf but the numbers may change as the 9000 drivers could be out by the time you read this), and click Apply. If any other packages are needed, they will be installed automatically - you only need to select the one package.

Finally, go into the Hardware > Graphical Server section of the Control Center and select the Nvidia option for your graphics card.

When you reboot you will be using Nvidia's drivers in all their 3D glory and you will be able to set up the 3D desktop effects. Have fun! NB


From @ to "

I have had a hopefully minor glitch in installing SUSE 10.1. When I key in the at sign I get ". Similarly, double quotes gives @. I definitely selected English and UK in the setup as this was shown in the confirmation before the installation panel. Hopefully there is a way to correct this mishmash without reinstalling. Can you help?
Anonymous, from the LXF forums


You will have to change the keyboard mapping. To do this, you need to edit the file /etc/X11/xorg.conf. Make sure you take a backup of that file, in case you delete or edit the wrong line. This needs to be done logged in as root (su -) - just open up a terminal and navigate to the folder /etc/X11.

To back up the file it's as easy as doing this:
cp xorg.conf xorg.conf-back
Edit the file using your favourite editor. Look for the line Option "XkbLayout" "whatever", change whatever to gb, save the file and restart your workstation (shutdown -r now). SL
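If you would rather script the edit, here is a minimal sketch using sed, tried out on a scratch copy first (the sample file is heavily cut down; your real xorg.conf contains much more):

```shell
# Sample file only - your real xorg.conf contains many more sections
cat > /tmp/xorg.conf.test <<'EOF'
Section "InputDevice"
Option "XkbLayout" "us"
EndSection
EOF
# Swap whatever layout is currently set for the UK one
sed -i 's/\(Option "XkbLayout"\) ".*"/\1 "gb"/' /tmp/xorg.conf.test
grep XkbLayout /tmp/xorg.conf.test
```

Once you are happy the substitution does what you expect, run the same sed command against /etc/X11/xorg.conf (after taking the backup).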


Seeking Zen calm

This may sound a little strange, but, I want to stop the little Zen-updater icon from appearing when a user logs in. Can you tell me how I should go about this?

The reason why I want to disable it is that when a domain user logs in, Zen crashes, giving a lovely exception message. I am assuming this is because the domain users do not have a 'local' user ID and cannot be looked up by the system. Also, it is not required for users to be able to perform system updates.
Mike, from the LXF forums


Mike, the fix is very simple. There is a file called zen-updater-auto.desktop in the folder /etc/xdg/autostart. You would need to edit that file with your favourite editor (Vi, Pico or whatever) and comment out the line 'Icon=zen-icon'. You might then need to restart the Zen-updater application. SL


Recovering partitions

In my frustration at trying to get a new SATA drive to format, I accidentally formatted the wrong drive, which had three partitions (/, /home and swap). I must admit I was using the Windows XP install disk (last resort, honest!). I managed to press the reset button a few seconds in after failing to stop it with Esc or Ctrl+Alt+Del. The hard drive is of course unbootable now, but when I load up Knoppix and QtParted it still seems as if the /home partition is there (the desktop icon is present), although the other partitions have bitten the dust (unformatted space).

If I try to get the partition (hda3) to mount by double-clicking on its icon on the Knoppix desktop, an error code says something like 'filesystem not defined', which I suppose has something to do with the first chunk of hard drive having been formatted (is that where the 'TOC' info is held?). Can you help?
Mdgreaney, from the LXF forums


If you know the sizes of the partitions, you can create them again in Cfdisk. As long as you have not created any new filesystems in their place, the filesystems should still be on the disk - you have probably only deleted the partition table. It may take some trial and error to find the correct sizes for each partition, but as long as you mount each one read-only (add -o ro to the mount command) you can't make things worse. It is not surprising that you can no longer boot from the disk, as you have removed the root partition from the partition table, so Grub cannot find its files.

There are a couple of utilities for automating the process: Gpart (not to be confused with GParted) and TestDisk. They are both on the Knoppix 5.0.1 CD and DVD. You should be aware that these programs are trying to guess your partition layout from leftover data; the Gpart man page sums it up nicely with, "It should be stressed that Gpart does a very heuristic job, never believe its output without any plausibility checks. It can be easily right in its guesswork but it can also be terribly wrong. You have been warned."

Whichever program you try, read the man page thoroughly before you touch a byte of your disk, and be patient. Both programs take a long time to run, as they are scanning every sector of your hard disk, so an extra few minutes spent reading won't make much difference to the overall time taken, but could have a huge effect on the result.

Incidentally, a TOC (table of contents) is used on CD and DVD filesystems. Hard disks have a partition table at the start of the disk, with the directory information contained in the filesystem itself. NB


Locked out

When I try to log into my newly-installed Ubuntu I get a message saying: 'Session only lasted 10 seconds'.

When I check the log I see: 'Failed to set permission 700 to .gnome2_private'.

I can only log in to a terminal, so how do I fix this?
AJB2K3, from the LXF forums


Are you reusing a home directory from another distro? What you describe is a classic symptom of that. Even though you may have used the same username as on your old system when installing Ubuntu, the username and group could have been allocated different numerical values (usually referred to as UID and GID). The filesystem only stores the numerical values, so these files are no longer owned by your user, hence the error when trying to change the attributes of one of them.

Fortunately, the solution is simple and fairly quick. At the terminal prompt, type
sudo chown -R fred: ~fred
This resets everything in Fred's home directory to be owned by Fred. The command needs to be run as root, hence the use of sudo. The trailing colon after the username, which you will obviously replace with your own, is important - it sets the GID to whichever group Fred belongs to, his primary group. Not only is this quicker than running chgrp, it even saves you from having to look up the correct group.
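To confirm the diagnosis before changing anything, you can compare the numeric IDs yourself; the filesystem records only numbers, so a mismatch shows up immediately:

```shell
# The filesystem stores only numeric IDs; compare yours with what ls reports
id -u                       # your numeric user ID (UID)
id -g                       # your numeric group ID (GID)
ls -ln "$HOME" | head -n 5  # numeric owner/group on your files
```

If the UID shown by ls -ln differs from the one reported by id -u, the chown above is the fix.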


Double vision

I am using SimplyMepis 6.0. I have sound and internet, I am pleased with apt-get, and Beagle and SuperKaramba are working well - the only thing missing is 3D acceleration. I installed the ATI drivers and activated them in the Mepis control centre. I also ran aticonfig, but the following is what I get from glxinfo:

root@1[philippe]# glxinfo | grep direct
Xlib: extension "XFree86-DRI" missing on display ":0.0".
direct rendering: No
OpenGL renderer string: Mesa GLX Indirect

I have attached my xorg.conf file.
Philippe, from the LXF forums


Large configuration files, like the one you attached, can make it difficult to spot problems; a case of not being able to see the wood for the trees. Removing all the commented lines and sections made it easier to check and showed that you have two Device entries for your graphics card; one using the ATI drivers and one using VESA. This is normal, as are the duplicated Screen entries to go with them - it makes switching between the two setups as easy as changing one line in the ServerLayout section. However, this part of your configuration is definitely broken:
Section "ServerLayout"
Screen 0 "ATIScreen" 0 0
Screen 0 "aticonfig-Screen[0]" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "PS/2 Mouse" "CorePointer"
EndSection
You have included two definitions for Screen 0 in ServerLayout. It would appear that X.org is using the first one, as this uses the VESA definition, but a quick check of the log file with
grep Screen /var/log/Xorg.0.log
will show you which screen is in use. All you need to do is comment out the incorrect Screen entry in ServerLayout and restart X to get your 3D acceleration working.
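For reference, assuming the log confirms that the first (VESA) entry is the one in use, the corrected section would read:

```
Section "ServerLayout"
# Screen 0 "ATIScreen" 0 0
Screen 0 "aticonfig-Screen[0]" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "PS/2 Mouse" "CorePointer"
EndSection
```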

You could remove the line, but by commenting it out, you can switch back to the VESA driver at any time by moving the comment to the other Screen line. NB


Distro moving

I have a two-disk box with the hda holding Windows 98 SE, and SUSE 10.1 and Mepis 6.0 on hdb. I have decided to switch my main distro to Mepis. Everything is now installed on it and I want to ditch SUSE. However, I would really like to transfer Mepis 6.0 on to the primary drive.

I am currently booting through SUSE's Grub, installed in the boot section of hda MBR, so as soon as I delete SUSE I will lose the ability to boot (but could solve this with the Live CD option). Is there an easy way, such as a total disk transfer, to copy Mepis as configured over to hda? I need to keep Win98, unfortunately.
Baron, from the LXF forums


Grub is installed in the MBR, so it won't be deleted when you delete SUSE. You will lose the /boot/grub directory, which contains files needed by Grub, but you can replace that with the same from Mepis. The whole process is best done from a Live CD - you don't want to be messing with filesystems that the OS could be changing while you are copying them.

Boot from the Mepis Live CD and log in as root, with password 'root'. Assuming that SUSE is on /dev/hda2 and Mepis on /dev/hdb1, do the following:
mount /dev/hda2
mount /dev/hdb1
cp -a /mnt/hda2/boot /mnt/hdb1/boot.suse
This mounts the filesystems and creates a backup copy of the SUSE boot directory, just in case you need it later.

Now you can reformat the SUSE partition. You have a choice of filesystems to use here but if you are unsure use
mke2fs -j /dev/hda2
to create an ext3 filesystem. Now copy everything across with
rsync -ax /mnt/hdb1/ /mnt/hda2/
The trailing slashes are important! This process may take a while. When it has finished you need to edit /mnt/hda2/etc/fstab and change all the hdb references to suit the new locations on hda. You can reuse the SUSE swap partition on /dev/hda.
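The fstab edit can be scripted with sed; a sketch on a cut-down sample file (the hdb1-to-hda2 mapping is the one from this answer - adjust it to your own layout, and work on a copy before touching the real file):

```shell
# Cut-down sample fstab; rewrite the old hdb device name for its new home
cat > /tmp/fstab.new <<'EOF'
/dev/hdb1  /      ext3  defaults  1 1
/dev/hda3  swap   swap  defaults  0 0
EOF
sed -i 's|/dev/hdb1|/dev/hda2|g' /tmp/fstab.new
cat /tmp/fstab.new
```

Inspect the result with cat before copying it over /mnt/hda2/etc/fstab.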

The last step is to ensure your system boots correctly. If there was a /boot/grub directory on your Mepis disk, you need to edit the configuration file in /mnt/hda2/boot/grub/menu.lst to suit the new locations. Grub numbers disks and partitions from zero, so /dev/hda2 is (hd0,1) in Grubspeak. You may also need to add a menu entry to boot Windows, which you can copy from /mnt/hda2/boot.suse/grub/menu.lst. If there was no grub directory on your Mepis installation and you were handling everything from SUSE's bootloader, copy the grub directory from boot.suse into boot and edit menu.lst to add a suitable entry such as
title MEPIS at hda2, kernel 2.6.15-26-386
root (hd0,1)
kernel /boot/vmlinuz-2.6.15-26-386 root=/dev/hda2 nomce quiet vga=791
The name of the kernel file should match whatever is in your boot directory. Finally, run Grub to make sure you are using the correct configuration:
grub
root (hd0,1)
setup (hd0)
quit
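Grub's zero-based naming trips up many people; a hypothetical helper function (not part of Grub itself) makes the translation explicit:

```shell
# Translate /dev/hdXN into Grub's (hdD,P) notation.
# Grub counts disks and partitions from zero, so /dev/hda2 becomes (hd0,1).
dev_to_grub() {
    disk=${1#/dev/hd}                         # strip prefix, e.g. a2
    letter=${disk%%[0-9]*}                    # drive letter, e.g. a
    part=${disk#"$letter"}                    # partition number, e.g. 2
    d=$(( $(printf '%d' "'$letter") - 97 ))   # a->0, b->1, ...
    echo "(hd$d,$(( part - 1 )))"
}
dev_to_grub /dev/hda2   # prints (hd0,1)
dev_to_grub /dev/hdb3   # prints (hd1,2)
```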
NB


Windows killed my Linux

When I installed SUSE Linux Enterprise Desktop [SLED] 10, I made these partitions on my 28GB hard drive:

 FAT32, 10GB, /windows/C.
 Linux, 10GB, /.
 FAT32, 7GB, /windows/E.
 swap, 9GB, swap.

Then I installed Windows on C:. But now I have a problem: Linux is not booting. I do not get the menu that asks me to choose between the OSes; Windows starts directly. I should mention that when I was installing Windows it said something like, 'there is an unknown partition, it will be inactive, if you want to activate it do...', but when I went to follow its instructions it couldn't recognise that partition and now I have Windows with only two partitions, C: and D: (note that D: is the 7GB, not the 10GB). Is there an answer?
Penguin, from the LXF forums


This is a common problem, caused by the Windows installer's assumption that there are no non-Microsoft operating systems. When you install Windows it overwrites the bootloader with its own, without considering that you may wish to keep it. The good news is that your Linux installation is untouched, including the original bootloader menu and other settings. All you have to do is reset the hard disk's Master Boot Record to use the Grub bootloader that SUSE set up for you.

Boot from the SLED CD/DVD and select the Rescue System option from the menu; this will boot to a login prompt. Type root at the prompt (there's no password needed) and you are in a basic rescue shell. The first step is to determine which is your Linux partition. Run fdisk -l to display a list of partitions. One of them will be marked as Linux - probably /dev/hda2 based on your list of partitions above. You can mount this partition with
mount /dev/hda2 /mnt
Then type the following commands to enter the Grub shell and find the correct partition for the bootloader:
grub
find /boot/vmlinuz
This returns the boot disk in Grub's terminology, probably (hd0,1). Now type the following commands to set up the bootloader again:
root (hd0,1) #the disk label returned above
setup (hd0)
quit
That's it, you can now reboot with the cryptically-named reboot command. Eject the CD/DVD and you should get your Grub menu back with the same choices as before. Note that if you ever need to reinstall Windows, the same will happen again - with the same solution. NB


Wireless Knoppix

Back in December 2005, you featured an excellent article on how to create your own distro [Build Your Own Distro, LXF74]. I have a problem with my laptop's wireless network card, an Intel ipw2200-based card (the PCMCIA version thereof). So what I was thinking of doing (using Knoppix 5.01) was to download the firmware for the Intel card, place it in /lib/firmware and remaster Knoppix with the firmware so that the card is recognised and configured, or at least configurable with a Live distro on booting up.

I think that the drivers are built into the kernel for this card, but they cannot work without the firmware. I would really like to be able to use Knoppix as a Live distro and get it to configure my wireless - is this possible?
Grumpy@home, from the LXF forums


This is certainly possible; when you are in the chroot environment described in Step 3 of the LXF74 article you can remove and install any files you wish, just as if you were running Knoppix natively. These changes will be committed to your new Live CD distro in Step 6.

Knoppix already has the driver for this card in its kernel, but if you decide to rebuild the kernel make sure you include the CONFIG_IPW2200 option. Knoppix's hardware detection should pick up on the card, load the module, configure it for you and attempt to set up the wireless network interface using DHCP. If you want to run any specific commands at bootup, put them in a file called /etc/init.d/rc.local (this name is used by convention, but you can use any name you like) then activate it with
chmod 774 /etc/init.d/rc.local
ln -s ../init.d/rc.local /etc/rc5.d/S99local
Commands in the rc directories are executed in order, so the 99 in the name makes it start last. This is normally what you want, but if you need it to start before something else, lower the number (or raise the number of the other service). Don't be tempted to make it start too early, though. Bear in mind that you are working from a base system contained on a read-only disc, so it is almost impossible to mess things up.
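The ordering is simply lexical, which you can see for yourself with scratch files (the service names here are just examples):

```shell
# Scripts in an rc directory run in lexical order, so S99local comes last
mkdir -p /tmp/rc5.d
touch /tmp/rc5.d/S10network /tmp/rc5.d/S40firewall /tmp/rc5.d/S99local
ls /tmp/rc5.d | sort
```

This is why S99local is a safe default name: anything the network scripts set up will already be in place when it runs.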

If your new Knoppix doesn't boot, step back and try again - you have nothing to lose.


Nothing on TV

I have a home LAN of five PCs, one of which is set up without mouse, keyboard or monitor but has the TV-Out port connected via S-Video cable to an analogue TV. This PC has a digital HDTV card, and I have configured xorg.conf to clone the video card output to the TV. Everything works fine if I attach a mouse and keyboard because I can use the TV output instead of a monitor.

Currently I can remotely access this PC using NX (preferable) or TightVNC, but I cannot figure out how to remotely access Desktop:0, which is the desktop being output to the TV. I have tried Google, but nothing that I have found seems to resolve this problem. I guess that an alternative solution may be to somehow set up xorg.conf to output Desktop:1 to the TV. I am currently running Kubuntu 6.10.
Bill


VNC is designed to work like this, running as a separate X session, although it is possible to change that. By running X11vnc, available from www.karlrunge.com/x11vnc, you can make your existing X display available via VNC.

But because you are using Kubuntu, and therefore the KDE desktop, the answer is simpler: use the desktop sharing server (Krfb) and client (Krdc) built into KDE. On the headless PC, start the KDE Control Centre and go to the Internet & Network > Desktop Sharing section.

Next, turn Allow Uninvited Connections on and Confirm Uninvited Connections off. You must set a password to stop unauthorised connections. Blocking port 5900 at your router is also a good idea, unless you want to be able to connect from the internet. Now you can connect with K-Menu > Internet > Krdc Remote Desktop Connection on another computer.

The alternative is to connect to the box with SSH and run individual X programs on your local desktop, thus:
ssh -X hostname someprogram
This solution has the possible advantage of the computer program not appearing on the TV display, which is useful if you want to carry out some administrative task while viewing video output to the TV. Although some films may arguably be improved by the presence of a Konsole window opened on top of them, it is unlikely that family members watching the TV will agree with that.

Setting X.org to put display 1 on the card won't help, as the standard VNC server will then start up on display 0. NB


BIOS reset

I have four totally different computers. Six months ago I entered the same password on each of them to protect the CMOS. The password has six letters and a * and an &. Now that I need to install and change some hardware I'm finding out that none of the four computers is letting me into the CMOS with that password. There is absolutely no doubt the password is the right one. How do I reset the CMOS?
Peter


It is most likely that the password you have given contains unacceptable characters. Resetting the CMOS varies somewhat from one motherboard to the next, but there is generally a motherboard jumper marked something like CLEAR CMOS or RESET RTC. You need to turn off the computer, move this jumper to the reset position - this is the opposite of its current setting - wait a few seconds then return it to the standard setting.

Under no circumstances should you do this with the power on. In fact you should disconnect the power lead because the PSU will supply a small amount of power to the motherboard even when the computer is turned off. When you reconnect the power, you should find your BIOS has reverted to the default settings.

The procedure can vary between manufacturers; you should only take the above as a guideline, except for the part about disconnecting the power lead - not doing so can wreck your motherboard. Check your motherboard's manual or search the manufacturer's website for a PDF version if you have no printed manual.

It is important to stress that while the basic procedure is the same for all motherboards I've used, the details vary (one I checked required you to remove the motherboard battery too) so you must read the documentation before doing anything. NB


Serial Capture

We have a small 5ESS switch which sends log files to a ROP (read-only printer) at 1200-O-7-1. The cable is the normal three-wire Unix serial cable. I have connected a PC to this using a Black Box bridge. The PC is running Windows 2000 with Procomm [a terminal emulator].

I can make a direct connection to the serial port and the logs are displayed on the screen. I am also capturing these displayed logs from the switch to a file (saves a lot of paper).

I would like to do the same thing with Linux (so I can discard Windows) but am unable to make this work using tee or any redirections. Ideally, not only would this log display always be available in a window, it would continue to be captured in a file, even if the user were logged off. In addition, I would like a new file to be created every night at midnight, closing the old file. Can you help with the syntax or a workable method of doing this?
Michael Ives


The first program I would try for this is Minicom. This is a terminal emulator, much like Procomm, and should be in your distribution's software repository. If it isn't, you can get it from the website http://alioth.debian.org/projects/minicom. Minicom has an option to log data to a file as well as display it - so, providing it will communicate with your switch, it should do all you need.

Alternatives involve more low-level communication with the serial port. There is Logserial from www.gtlib.cc.gatech.edu/pub/Linux/system/serial, which dumps the data from the specified serial port to stdout or a file; you could use tee to do both. There is also a Perl script available from http://aplawrence.com/Unix/logger.html, although this may require some modification to fit your requirements.

To keep the program running all the time, run it in screen. This keeps the program ticking over when the user is logged out, plus you can reconnect to it at any time, even over an SSH connection from elsewhere.

Provided you have set the logging and other options in Minicom's configuration, you can start it with
screen minicom
Press Ctrl+A D to exit screen while leaving Minicom running, and type screen -r to reconnect. One thing to watch out for when running Minicom in screen is that both use Ctrl+A as a command key; to get screen to pass Ctrl+A to a program running in it, press Ctrl+A A. So you would use Ctrl+A A Z to show Minicom's help screen.

I would use Logrotate to split the log files. This is probably already installed and running as most distros use it to rotate system log files.
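Assuming Minicom is logging to /var/log/minicom.log (the path is an assumption - use whatever you set in Minicom's capture options), a rule dropped into /etc/logrotate.d/ along these lines starts a fresh file each day:

```
/var/log/minicom.log {
    daily
    rotate 30
    compress
    missingok
    notifempty
    copytruncate
}
```

copytruncate is the important directive: it copies the log and then empties the original in place, so Minicom can keep writing to its open file handle without a restart. Note that "daily" rotation happens whenever cron runs logrotate, which on most distros is in the early hours rather than exactly midnight.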


Give us the tools

I need help with post-installation configuration of Evolution mail on my Ubuntu system. My home computer runs a dual-boot system with Windows XP on one partition and Ubuntu Linux on the other. I have an Ethernet connection to a broadband hub. Mozilla works beautifully under the Ubuntu OS - indeed, I have successfully used it for upgrades, Google searches etc. So Ubuntu has excellent internet access.

Evolution mail, however, needs to be configured properly. An old issue of Linux Format describes a configuration wizard but I have been unable to invoke it. A Google search about Evolution gave a hit that revealed a document about setting up Evolution. At first that seemed to be very helpful but it required the use of the Tools menu on the menu bar of Evolution - which my version of Evolution does not have. How can I configure my Evolution without a Tools menu?
John Cleave


As you have not stated which versions of Evolution and Ubuntu you are using, I am going to assume you are using the latest Evolution, version 2.6. Evolution 2.0 had a Tools menu in the menu bar, but Evolution 2.4 and 2.6 don't have it.

The first time you start Evolution, you should see the First-Run Assistant. In your case, it seems that didn't work and Evolution has hidden the option from you now. To invoke First-Run Assistant, you should delete the folder .evolution (note the dot) in your home directory then type evolution from the command line. This will make Evolution think it is running for the first time and it will show the First-Run Assistant, which you can then use to configure the software.

If you don't want to take the command line option, just create a new mail account by clicking on Edit > Preferences, then Mail Accounts > Add. SL


Kino konundrum

A few days ago I installed Kino using Synaptic on my Ubuntu 6.06, I think the Kino version was 0.8 and it worked very well in all respects, including capturing from my camcorder.

Today I accepted the offer of an Ubuntu automatic update to Kino 0.9.2 but when I run Kino now it won't capture. Instead it gives the message:

'Warning: dv1394 kernel module not loaded or failure to read/write /dev/ieee1394/dv/host0/PAL/in'.

My camera shows up correctly as the capture device and Dvgrab from Kino captures without problem (horrible to use) so surely the 1394 must be working. Any ideas on how I can fix this?
bof, from the LXF forums


Hmmm. Did you update anything else at the same time? Ubuntu uses udev to manage devices, and at the moment the raw1394 device doesn't play nicely, so it doesn't get created (there is the same problem on Fedora and other distros that now embrace udev more completely).

So, that is the likely 'why'. As for what you can do about it, an inelegant hack is to merely create the device node yourself:
mknod /dev/raw1394 c 171 0
That should take care of it. If it doesn't, check that the relevant modules are actually being loaded! Running lsmod should tell you if raw1394 and video1394 are loaded properly. You may need to change the permissions on the device to get Kino to read it properly.
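A less hackish alternative, if the node keeps disappearing, is to have udev create it for you. Rule syntax varies between udev versions, so treat this as a sketch: a file such as /etc/udev/rules.d/50-raw1394.rules (the filename, mode and group here are assumptions for your system) containing:

```
KERNEL=="raw1394", NAME="%k", MODE="0660", GROUP="video"
```

Putting the device in a group such as video and adding trusted users to that group is safer than opening the node up to everyone.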

As a side point, there is much consternation among the distro packagers regarding Kino's insistence on using the raw1394 device. Apart from everything else, relaxing permissions on this device does raise security issues. At some time in the future, Kino may end up using a more regular device to access DV devices. NV


Rights and wrongs

If I was to use images from the net in a slide show, what would the rules concerning these be? I can't find any associated copyrights to the images I want to use and nothing is marked down, but I was wondering what the rules are governing the use of images from the internet.
Anonymous, from the LXF forums


In the UK, copyright applies at the moment of creation, and does not need to be explicitly stated. So, for unknown images on the internet, you should assume they are copyright unless stated otherwise.

However, there are exceptions granted in the UK to copyright, mainly in the case of research or fair usage (eg if you were reviewing a website, it would be considered fair usage to include an image of it). The UK patents site has some useful information on this: www.patent.gov.uk/copy/c-manage/c-useenforce/c-useenforce-use/c-useenforce-use-exception.htm.

So under some circumstances you can still use them legally, but ultimately, it is better to ask permission. If you just need general images to use for your presentation, can I suggest you search for Creative Commons-licensed work? Check out www.creativecommons.org. NV


Dodgy DMZ

I have set up a web server (SUSE 10.0) running inside a virtual machine that is hosted on my SUSE 10.0 box. I have configured it to be in the DMZ of my router (also SUSE 10.0). Web traffic is correctly routed to the box; however, I cannot seem to access it from the internal network on any port. I would like to be able to ssh directly into the box from within the internal network.

The firewall on the router (192.168.0.9) was configured using Yast and maps its external port 80 to the web server (192.168.1.2). I tried mapping the internal (192.168.0) port 80 to the web server but this doesn't seem to work.

Is it possible to do this with the Yast tool? If not, is there any easy way to convert the existing Yast setup into an Iptables script, where it should be easy to achieve? Hope you can help.
Lee


I would highly recommend that you install IPCop, a specialist Linux firewall distribution, instead of using SUSE 10.0 for the router. I used IPCop for a long time before I switched to Cisco PIX firewalls, and it only takes a few minutes to install. IPCop uses the concept of a 'Green network' for an internal protected interface such as your web server's, and this makes it relatively easy to join two networks like yours together.

There is an excellent HOWTO about IPCop at http://howtoforge.net/perfect_linux_firewall_ipcop_p2, and the project homepage is located at www.ipcop.org. SL


Secure CDs

With Windows, there are commands in Nero and other software that enable you to put copy protection on to the CD-ROMs that you make. Is there such a command with Linux?

Also, I could not get Rute to work from your latest magazine [Coverdisc, LXF86].
Keith Tan


If by "copy protection" you mean the sort of thing that commercial CDs have, the answer appears to be no. The idea of restricting copying is anathema to free software. However, if you want to encrypt your data to protect it from prying eyes, such as when backing up personal files, the answer is yes. You might have come across an application called Cdrecord, which is the CD-writing back-end used by most CD programs. There is a patch for this available to add encryption of the data as it is written to disc.

Most distros do not include the patched version of Cdrecord (which is contained in a package called cdrtools), but you can tell if your copy includes encryption with
cdrecord --version
If this does not state that encryption is included you will have to patch and build it yourself. This is a fairly simple process: download the Cdrtools source from ftp://ftp.berlios.de/pub/cdrecord and the matching patch from http://burbon04.gmxhome.de/linux/CDREncryption.html, then execute the following commands as root:
tar xjf cdrtools-VERSION.tar.bz2
zcat cdrtools-VERSION-encrypt-1.0.diff.gz | patch -p0
cd cdrtools-VERSION
make
make install
You will need the GCC compiler and associated tools to do this; installing the gcc package should pull in everything you need. Now you can create an encrypted CD by adding -encrypt -encpass=ahardtoguesspassword to the cdrecord command. If you are using a GUI CD-burning program, such as K3b, you can add arguments to Cdrecord in the program's preferences. Store the password in a file and use -encpassfile instead of -encpass if you prefer. Keeping the password file on a USB key would improve security.

Reading the encrypted CD requires that you have dm-crypt support in your kernel (you almost certainly will have) and the cryptsetup package installed. Mounting the disc is a two-stage process:
cryptsetup -r -c aes -s 256 -h sha256 create ecdrom /dev/cdrom
mount /dev/mapper/ecdrom /mnt/cdrom
You could put these commands in a script to save typing them every time. If you have your password in a file, add --key-file /path.to/key to the cryptsetup command to save typing in the password. Unmounting follows a similar process:
umount /mnt/cdrom
cryptsetup remove ecdrom
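To save typing, both stages can be saved as small scripts; this sketch uses the exact options given above, but the file locations are just examples (and you'd normally put them somewhere like /usr/local/bin):

```shell
# Save the mount and unmount stages as scripts and check their syntax
cat > /tmp/ecdrom-mount.sh <<'EOF'
#!/bin/sh
# Map the encrypted disc, then mount the decrypted view
cryptsetup -r -c aes -s 256 -h sha256 create ecdrom /dev/cdrom
mount /dev/mapper/ecdrom /mnt/cdrom
EOF
cat > /tmp/ecdrom-umount.sh <<'EOF'
#!/bin/sh
umount /mnt/cdrom
cryptsetup remove ecdrom
EOF
chmod +x /tmp/ecdrom-mount.sh /tmp/ecdrom-umount.sh
sh -n /tmp/ecdrom-mount.sh && sh -n /tmp/ecdrom-umount.sh && echo "scripts OK"
```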
As for your second question, Rute is standard HTML, so you should be able to read it by loading Help/RUTE/rute/index.html into any web browser. If this fails, let us know the exact error message it gives. NB


Stealth Linux

I belong to a computer club that is 98% Windows-oriented and I'd like to install Mepis 6.0 on the club's laptop to demonstrate Linux and perhaps persuade some members to try it. Installing Grub on the MBR [master boot record] is not a good idea as the laptop is also used by our members to take home and they wouldn't like the idea of choosing Linux or Windows XP at boot (some are not interested in Linux). How do you create a boot CD to boot Grub and then choose Windows or Linux?

Creating a boot floppy is not an option as the laptop has no floppy drive, and using a USB floppy is a problem.
jozien, from the LXF forums


Jozien, I commend you on your mission to show your fellow club members the joys of Linux through SimplyMepis. Now to your problem. There are two possible solutions to this. The first is to use Smart Boot Manager. This is a bootloader disk that also works from a CD. You'll find an ISO image in the Essentials/SBM directory of the cover DVD. To use this, you must install the bootloader for Mepis into the root partition rather than the MBR; this option is offered during the installation process. When you boot normally, the original Windows bootloader for the MBR will be used and the computer will boot straight into Windows. When you boot from the CD, a menu will appear, from which you can choose the partition to boot - select the Linux root partition here and it should boot. If the Linux partitions do not appear in the menu, press Ctrl+H to rescan the hard disk - I've needed this with some hardware.

The Smart Boot Manager CD is only used to run the bootloader. You can remove it as soon as the Smart Boot Manager menu appears, which means you can also use SBM to boot recalcitrant DVDs and CDs.

Another option is to stick with a bootloader on the MBR but hide its menu. To do this with Grub, install Mepis as normal, with the bootloader on the MBR; then boot into it and edit /boot/grub/menu.lst as root. Change the timeout to something short, say 5 (seconds), then add these lines after the timeout:
hiddenmenu
default 1
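Putting it together, the top of your menu.lst would then read something like this (the rest of the file, including the menu entries themselves, stays as the installer wrote it):

```shell
# /boot/grub/menu.lst (excerpt)
timeout 5
hiddenmenu
default 1
```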
Grub counts from zero so default 1 makes the second menu entry the default. Now when you boot, users will see a message like 'Press Esc to enter the menu' and a countdown from 5 before Windows boots. Unless they press the Esc key, they will not see any reference to Linux. Let us know how you get on! NB


Proxy pest

I'm getting entries like the following in my Apache server log: 'GET http://cn.yahoo.com/ HTTP/1.1" 200 291'. Note the request for a completely different domain to mine and the protocol prepended to it, which would normally be stripped off. What concerns me is that the server is returning a code of 200. Should I be concerned?
Andrew Wood


Yes, you should be concerned. It appears that someone is attempting to use your server as a web proxy. If you have the mod_proxy module loaded and a ProxyRequests On directive in one of your configuration files, Apache's proxy server will be activated. Even if proxying is not activated, you could still see a log entry like this: if you are using virtual hosting, Apache will normally return the homepage for your default virtual host.

You should be able to tell from the IP addresses and frequency of these log entries whether this is a single, misconfigured computer or scripted attempts to find suitable servers to exploit. If the size of the returned page is always the same, irrespective of the URL requested, Apache is returning a local page - probably an error message, judging by the small size. In this case, you are not acting as a proxy for nefarious activities and the only harm done is the extra load on your server and the bandwidth used to service these requests.

You can disable proxying altogether by using the --disable-proxy option when building Apache, or by ensuring that the -D PROXY option is not used when starting Apache. If you are receiving a large number of these requests from robot scripts, you could look at blocking or dropping these addresses with iptables, which would save the server having to reply to them, even with an error. SL


Dual box

I currently have Windows XP on my computer but am looking to change over to Linux. Can I load Ubuntu without disrupting my XP? The reason being I have broadband and my ISP doesn't support Linux. If I do load Ubuntu will the partitions it puts disrupt my XP? I have an 80GB drive with at least 40GB available for Ubuntu.
Terry Brock


What you are asking for is called dual booting - almost all Linux installers support this. This used to be regarded as a somewhat hazardous process (although I have never had a problem in many, many installations) but the current Linux installers are much better and safer.

The Ubuntu installer will offer to resize your Windows partition to create space on your hard disk for Ubuntu. All you need to do is tell it how much space to give to each OS. Fragmentation of the Windows partition affects how well the installer can resize it, so you should defragment the disk from Windows before installing Ubuntu. Simply right-click on the drive in My Computer and select Properties, go to the Tools tab and hit Defragment Now. The installer will also add a new bootloader with a menu that offers you the choice of Linux or Windows each time you boot.

I should warn you that resizing a filesystem is potentially dangerous; for example, a power failure during the process could trash your data. The chances of a problem are minimal, but the consequences could be serious. If you value your data, back it up first.

As far as your broadband connection is concerned, actually it will most likely work on Linux, depending on the type of broadband (cable or ADSL) and the hardware you use to connect. Lack of Linux support from most ISPs is just that: they don't provide support. This does not mean that you cannot use their service with Linux. Provided you have a modem with an Ethernet connection, either for ADSL or cable, you should have no problem getting online with Linux. In most cases you'll find that it is simply a case of configuring your Ethernet connection to set up its address automatically, which is generally the default anyway. NB


Pretty printing

I have a very useful utility on Windows called FinePrint, which buffers print requests and enables me to preview them, reorder them, delete pages from them, save them, print them 2-up, 4-up, double-sided, booklet- and so on. I would be lost without it. Is there anything remotely similar available for Linux, which batches print requests and allows them to be manipulated before they are sent to the printer?
Roger Hurley


Not only is there something like this available for Linux, but you may already have it installed! KDE's print program KPrinter offers much of what you describe. When you print from a KDE program, click on the Properties button in the printer dialog and you'll see options to do things like printing two or four pages per sheet. KPrinter can be used with non-KDE applications - most programs have an option to set the print command, which usually defaults to lp or lpr. Change this to kprinter and all print requests will go through the KDE print system.

If you want some of the other features you mention, you will have to use the command line - the possibilities you mention are all there, and then some, but without a controlling GUI. The best program I have found for this is a2ps, the Any To PostScript filter. This is provided with most distros and may already be installed on your system.

As the name implies, a2ps takes data in (almost) any format and outputs it as PostScript, ready for sending to your printer. The filter part of the description is the interesting part, because a2ps does more than translate one file format to another, it also lays it out according to your specification. Running
a2ps -4 myfile -d
will print myfile four pages to a sheet and send the results to the default printer. As a filter, a2ps is ideal for inclusion in a pipeline, taking its input from one program and sending it to another. If you use this as the print command for a program
a2ps -=booklet | kghostview -
it will process the program's output according to the user option booklet and send it to KGhostView. You can then preview the layout before pressing the Print button in KGhostView. User options are a powerful feature of a2ps. Set in the user's config file at ~/.a2ps/a2psrc, they enable you to group a number of settings as a single option, a sort of option macro. You will find full details of this in the a2ps info page - run info a2ps in a terminal or type info:/a2ps into Konqueror's location bar. NV


Clearing the queue

My server is running Qmail and I have a lot of failure notice emails in the mail queue. How do I clear the mail queue on my mail server?
Peter Fieldhouse


To solve this problem you'll need a tool called QmHandle. It can easily be downloaded from http://hurricane.hinasu.net/scripts/qmHandle. This is a modified version of the tool with some extra functionality added. Using QmHandle you can then delete messages based on sender and also on recipient. Run ./qmHandle to get more information; here's a run-down of the parameters available (taken from the man page):
 -a: try to send queued messages now (Qmail must be running).
 -l: list message queues.
 -L: list local message queue.
 -R: list remote message queue.
 -s: show some statistics.
 -mN: display message number N.
 -dN: delete message number N.
 -Stext: delete all messages that have/contain text as Subject.
 -Ftext: delete all messages that have/contain text as Sender.
 -Ttext: delete all messages that have/contain text as Recipient.
 -D: delete all messages in the queue (local and remote).
 -V: print program version.
Additional (optional) parameters:
 -c: display colored output
 -N: list message numbers only (to be used either with -l, -L or -R)
You can view or delete multiple messages, eg -d123 -d456 -d567. So to answer your question, you would need to run the QmHandle command like this:
./qmHandle -S'failure'
SL


Which printer, which SUSE?

I print photographs as an amateur and for that purpose purchased a Canon i865 printer. I've had it a year and until now it was more than satisfactory for my needs (as a Windows user). Three months ago I moved my home computer to Kubuntu, which is very user friendly. Now I feel I can try other distros.

However, I cannot get my printer to work from Linux. There seem to be no drivers available for this (and many other Canon printers). All my printing is done through my dual-booted Windows. To make the move to Linux complete I need to be able to print. I have tried various methods to print, setting up a generic printer, various Canon drivers that are available etc. Nothing works. I did get two pieces of 'advice' from a forum I tried:

* Changing my printer.

* Buying a driver from a firm called TurboPrint.

Are these the only solutions? Secondly, in LXF82 you included a desktop version 10.1 of SUSE. Now in LXF84 you include the SLED 10 version which, if I understand correctly, is mainly for server use. In the accompanying article it is also highly recommended for desktop users. Could you explain to a novice like me how to make a decision as to which of these distros to use if I'm thinking of trying to use SUSE?
Errol


Canon printers are notoriously poorly supported in Linux (unlike Canon scanners and cameras - I wouldn't part with either of mine) but there is a driver that is reported to give excellent results with this printer and the good news is that you probably have it installed already. When configuring your printer, select Canon BJC-8200 as the printer type - the BJC 8200 driver works with the Canon i865 up to the printer's maximum resolution.

There are actually two drivers for the BJC-8200: one included with CUPS (the standard print system) and the other in the gimp-print package. If you have Gimp-Print (or Gutenprint, as the latest versions are called) installed you will be given a choice of the two drivers; you should try each of them to see which works best for your needs.

Installing the printer can be done from your distro's configuration programs, such as Yast in SUSE, or through a web browser. Point your browser at http://localhost:631, click on the Add Printer button and answer the questions. Once you have set up the printer with one of the drivers, you can go to the Printers tab and click on 'Modify Printer' when you wish to try the other driver.

TurboPrint is a commercial set of printer drivers that supports more printers than CUPS or Gutenprint (the company behind it is Zedonet GmbH). Being commercial means it can buy developer kits from the printer manufacturers. The quality is excellent and you can download a demo version from www.turboprint.de/english.html. The demo adds a small TurboPrint logo to prints made at the highest quality, but this doesn't stop you from gauging the quality yourself to decide whether the full version is worth buying.

On to your question about the different SUSE flavours. SLED is the SUSE Linux Enterprise Desktop - it is a desktop system aimed at business use. There is a server version, called SLES. Then SUSE 10.1 is the latest release of OpenSUSE, the open source, community-supported version of SUSE. Either of these will work for you, although there have been some problems with the updates system in SUSE 10.1. Despite the 'Enterprise' designation, SLED contains a lot of software that is equally useful to home users.

It is impossible to say which of the two will suit you the best, or whether you'd prefer to stay with Kubuntu (but note that SLED uses Gnome whereas OpenSUSE has the KDE desktop as used by Kubuntu), so the only sensible advice is to try them yourself. NB


Mounting Mepis

I bought your magazine today [Get Started With Mepis Linux Special] in order to try out Mepis Linux on a Dell Dimension PC, which already has Windows XP and Red Hat Enterprise Server (version 3) installed on it. The Microsoft product is what one would (sadly) expect but Red Hat proves very difficult when it comes to installing anything. The instructions I have often don't work so I thought I would try Mepis.

The system has a CD drive but not a DVD drive so I bought the CD version, but I just can't get it to install. Every option on the menu (as given on image one on page 12 of your magazine) eventually leads to a line on screen which reads:

Mounting MEPIS filesystem... mount: Mounting dev/loop0/ on /linux
failed: Invalid argument
done.
Can't start up filesystem
Halting...

I've tried the Ctrl+Alt+F7 and Ctrl+Alt+F8 but these don't do anything.
Brian


It is possible you have a faulty or damaged disc, but before you return it for a replacement, there are a couple of things you can try - it may be an incompatibility with your hardware. There are a number of options you can pass to Mepis when booting; press F1 at the menu screen and select Boot Options from the bottom of the list to see the choices. You type these options at the main menu screen (as shown in the first picture). The two most likely to have success here are
acpi=off
failsafe
Try these in turn, then investigate some others. If the same error comes up every time, it is most likely that your drive is having trouble reading the disc. If possible, try the disc in another computer; booting from the disc only runs it as a Live CD; it does not start the installation, so you don't have to worry about affecting any computer you try it on. If it still fails, you should contact us at the address or phone number given on the back of the disc sleeves. If the disc works in other computers, it is likely that your CD drive is either dirty or failing. As drives get older the laser loses some power and the laser lens gets dirty (especially if people smoke near the computer). A lens cleaner may help. MS


Dirty mail

I run a mail server. Can you tell me how I can monitor mailboxes for corruption?
Neil Darwin


Have a look for this error in the mail log (/var/log/maillog): 'File isn't in mbox format - Couldn't open INBOX'. If you find it, the mailbox is definitely corrupted. To avoid checking mailboxes manually, here's a script you can use:
#!/usr/bin/env python
# Check that every mbox file in /var/mail starts with a valid 'From ' line.
import os, re

mailpath = '/var/mail'
re_valid = re.compile(r'From\s+\S', re.I)
mailboxes = os.listdir(mailpath)
mailboxes.sort()
for m in mailboxes:
    fn = os.path.join(mailpath, m)
    if os.path.isdir(fn):
        continue
    f = open(fn, 'r')
    l = f.readline()
    f.close()
    # An empty file is just an empty mailbox; anything else must match
    if l and not re_valid.match(l):
        print "Invalid: %s" % m
Name the script verifymailboxes.bin and run it with
python verifymailboxes.bin
SL


Video on SUSE

When I try to install w32codec on SUSE 10.1, I get messages like 'Transaction failed: Package transaction failed: Can not find resolvable w32codec-all 20060611-0.pm.0' or '2006-06-03 08:55:00 w32codec-all-20060501-0.pm.0.i586.rpm install failed rpm output: error: unpacking of archive failed on file /usr/lib/codecs: cpio: rename failed - Is a directory'.

What should I do so I can see video files?
Joao Pires


You don't say where you obtained the w32codec package - it could be that the file you downloaded was corrupt, or that it is not compatible with SUSE 10.1. The safest way to install the Win32 codecs, and any other software, is through Yast. The default Yast setup only includes the installation discs and maybe an update repository, so the first step is to add extra software sources. Run Yast and select Installation Source in the Software section; click on Add and pick HTTP from the menu that pops up. Now type packman.unixheads.com/suse/10.1 in the Server Name box, press OK and click on Finish.

This adds the Packman repository, which contains such goodies as the Win32 codec files. You can also add the main SUSE repositories for both free and non-free packages (the latter are excluded from OpenSUSE discs) with mirrors.kernel.org/opensuse/distribution/SL-10.1/inst-source and mirrors.kernel.org/opensuse/distribution/SL-10.1/non-oss-inst-source.

Now you can go into the installation section of Yast and install w32codecs-all. This will enable you to play various video file formats, but you will still be unable to watch copy-protected DVDs - the Xine libraries provided with SUSE OSS (OpenSUSE) do not have support for libdvdcss, needed to decrypt protected DVDs. As you have added the Packman repository to Yast, an update should take care of this, but you also need to install libdvdcss, so open a terminal as root and type
yast --install http://download.videolan.org/pub/libdvdcss/1.2.9/rpm/libdvdcss2-1.2.9-1.i386.rpm
For more information on extending SUSE OSS 10.1 by adding the missing, but useful, non-free parts, see the Jem Report at www.thejemreport.com/mambo/content/view/254. NB


Unwritable cards

My Evesham Voyager (running XP Pro and Ubuntu 64) will read but not write SD cards whether the slider is locked or unlocked. The card works fine in my brother's Toshiba. I have contacted Evesham and reloaded USB drivers but to no avail. I thought it might be helpful if I told them I'm dual boot and that the same happens in Linux. They now refuse to help, saying they can't support a dual-boot PC, and that I must reformat the hard drive! I only recently reinstalled everything so don't want to do that, and I want to continue with Ubuntu.

XP lists generic USB drives CFC, MMC, MSC, but there's no SDC even though the slot is supposed to be 4-in-1. In Ubuntu the Read, Write and Execute permission buttons are all ticked. I thought I'd try HardInfo after reading the article in your October issue [HotPicks, LXF84], but loading fails because glibc is too old. I'm told 'you need at least the following symbols in glibc: GLIBC_2.0', yet I've installed all auto updates. It tells me that upgrading glibc is highly dangerous, that whoever built the package did not build correctly, and that I should report this to the provider and ask them to rebuild using apbuild. Can you help?
Richard


Right. If this happens in both Windows and Linux, your card reader is almost certainly at fault and you will need to get Evesham to fix it, something that the company should do whichever operating system is installed because this is a hardware fault. If Evesham insists on your removing Linux, you could use Partition Image (www.partimage.org) to back up your Linux partition(s).

But if this error only happens in Linux, it is most likely a permissions problem. Even though the directory at which the device is mounted is writable by you, the underlying device may not be. Can you write to the card as root? You don't need to log into the desktop as root to do this; assuming the card is mounted at /media/sd, open a terminal and type
sudo touch /media/sd/tmp
If you can write as root, it would appear that the device node for the card is not writable by your normal user. Run mount to see the device name - you'll see something like
/dev/sda1 on /media/sd type vfat (rw,noexec,nosuid,nodev,noatime,uid=1000,utf8,shortname=lower)
at the end of mount's output, showing that the device, in this example, is /dev/sda1. Inspect the permissions on the device node with
ls -l /dev/sda1
You will see something like
brw-rw---- 1 root plugdev 8, 1 Oct 23 17:29 /dev/sda1
This shows that the device is owned by the root user and the plugdev group. The rw-rw---- shows that the user and group can read and write and that others cannot, so you need to ensure that you are a member of the plugdev group. Run id from the terminal to see which groups you belong to and use the following commands to add yourself to plugdev:
sudo gpasswd -a $USER plugdev
newgrp plugdev
The first command adds you to the plugdev group; the second makes that your current group, otherwise you would have to log out and back in again for the change to take effect.
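If you want to script the membership check, the output of id can be tested directly. In this sketch the in_group helper is my own name, not a standard command:

```shell
# Succeed (exit 0) if the current user is a member of the named group.
in_group() {
    id -nG | tr ' ' '\n' | grep -qx "$1"
}

# Every user belongs to their own primary group, so this prints 'yes'.
if in_group "$(id -ng)"; then
    echo yes
fi
```

You could use it as, say, `in_group plugdev || echo "add yourself to plugdev"` in a login script.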

The HardInfo error is odd, because Ubuntu Dapper comes with version 2.3.6 of glibc. This could be an error in the Autopackage build. An older version of HardInfo is in the Ubuntu Universe repository - the latest version, 0.4.1, is in the Ubuntu Edgy repository. Add
deb http://archive.ubuntu.com/ubuntu edgy main universe
to /etc/apt/sources.list and you will be able to install it from Synaptic. We have also included a Deb package of HardInfo on the DVD. NB


Certified users

I am building a website (LAMP-based) that will provide sensitive information and store sensitive customer data in the database. The site will be restricted to specific IP addresses but I would like to add certificate-based authentication so that every user that is allowed to use the site should have a personal certificate in their browser that would be used in conjunction with their username and password.

That way, if someone tried to enter the site from an accepted IP address but did not have the correct username-password-browser certificate combination, they would be rejected. Can you tell if it is possible to do that?
Christos Ioannou


This is certainly possible. Apache can use SSL to authenticate clients with certificates, as well as to authenticate the server to the client. You will want the latter too, as it is important for your users to know they have connected to the correct server before sending sensitive information.

The first step is to put your certificate and its keyfile in Apache's configuration directory, preferably in an ssl subdirectory, and then to add these lines to httpd.conf to activate SSL and give their location:
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile conf/ssl/myserver.crt
SSLCertificateKeyFile conf/ssl/myserver.key
Configure Apache to listen on port 443 (or create a virtual host for this and add the above lines to the virtual host's definition), and Apache will now authenticate the server to clients using your certificate. To authenticate each client with the server, add these lines to httpd.conf (or within a container in your virtual host's definition):
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile conf/ssl/myserver.crt
This will block access to any client that does not have a certificate signed by the server, so you need to create one for each user by running these commands on the server:
openssl genrsa -des3 -out username.key 1024
openssl req -new -key username.key -out username.csr
openssl x509 -req -in username.csr -out username.crt -sha1 -CA myserver.crt -CAkey myserver.key -CAcreateserial -days 365
openssl pkcs12 -export -in username.crt -inkey username.key -name "$USER Cert" -out username.p12
openssl pkcs12 -in username.p12 -clcerts -nokeys -info
The export stage will prompt for an 'export password'. This is needed, along with the username.p12 file, to install the certificate in the user's browser. The last line simply displays the certificate so you can check that all is well. For maximum security, install the certificate yourself, then the user will not be able to copy it to another machine as they will not know the password. SL


SUSE DVD to CDs

I thought that SUSE 10.1 from your August issue [LXF82] would give me an excellent opportunity to set up a Linux computer and begin to learn the system. I have a computer with a DVD reader but it's not the one I want to use for Linux and, being severely Linux-challenged, I don't know how to create installation CDs for my other computer from this distribution.
Alan Honeyman


That disc included a mkiso script to create CD images from the DVD, but only under Linux. We have included a script that will do the same thing under Windows on this month's DVD. The procedure takes a little longer than normal, because the script is not on the same disc, but it isn't too complicated. Start by copying the winmkiso.bat file from this month's DVD to somewhere on your hard disk. It doesn't matter where, and you can delete it when done. For the sake of this example, I'll assume that you copied it to C: and that your DVD drive is D:. Then open an MS-DOS prompt or command box and type
D:
cd distros\suse
c:\winmkiso
It will take a while to complete - a long while, because it has to scan the entire contents of the DVD for each CD image it creates. This appears to be a limitation of Jigdo, the program that does the hard work. The Linux version caches the data from the first scan to speed up the subsequent ones. The Windows version should do the same but in reality it takes a long time for each disc. Eventually, you will have five ISO images in C:. You can burn these files to CD using Nero or any other CD-burning program, but make sure you use the option to burn an image file.

If you want to put the created ISO images somewhere else, give the path at the end of the winmkiso command. For example, if your DVD drive is E: and you want to save the ISO images to D:\suse, type
E:
cd distros\suse
c:\winmkiso D:\suse
Make sure the path you choose exists, otherwise winmkiso will spend a significant proportion of your lifetime scanning the disk and then abort with an error when it cannot write the ISO file. NB


Trust me

I have just been given a new digital camera (manufactured by Trust), and I have been unable to get DigiKam to recognise it. However, /var/log/messages seems to detect that a device has been added to the USB port, and the camera is powered up.

I don't know anything about USB, but I do know that an entry does not exist in my /etc/fstab. Could this be the problem? Is KDE's DigiKam the best software to use?
chris_debian, from the LXF forums


You don't need an entry in fstab for KDE's automounting to work. In fact, it generally works best without such an entry. However, not all digital cameras work as USB mass storage devices; some use camera-specific protocols. Does /var/log/messages show partitions when you connect the camera, in much the same way you would see with a USB memory stick? Something along these lines would indicate that it is a mass storage device:
usb-storage: waiting for device to settle before scanning
Vendor: NIKON     Model: NIKON DSC E3200   Rev: 1.00
Type:   Direct-Access                      ANSI SCSI revision: 02
SCSI device sda: 2012160 512-byte hdwr sectors (1030 MB)
...
SCSI device sda: 2012160 512-byte hdwr sectors (1030 MB)
sda: sda1
If not, this camera doesn't show itself as a storage device. But DigiKam should still recognise it if it is supported by Gphoto2, and you have Gphoto2 installed. Gphoto2 is the command-line client for libgphoto2, which is used by DigiKam. If you don't have it installed, you should find it on your distro's discs. You should run it as a normal user and as root - any difference in the output will indicate a problem with permissions.

To find out if your camera is supported, run
gphoto2 --auto-detect
and note what it shows. If your camera is not recognised, check the archives of the mailing lists at www.gphoto.org and send the developers details about your camera.
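If the camera is listed, you can also test the connection and copy the pictures from the command line before involving DigiKam; both of these are standard Gphoto2 options:

```shell
# Show a summary of the connected camera, then download all of its
# photos into the current directory.
gphoto2 --summary
gphoto2 --get-all-files
```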

DigiKam is a fine program for managing digital photos, but if all you want to do is copy pictures from the camera, you can do that with Konqueror, by typing camera:/ in the location bar. This will show a list of any connected cameras that libgphoto2 recognises. NV


E-mail anywhere

I have a Linux mail server (SquirrelMail). How can I check my mail from any Windows workstation without installing any extra software?
Andy Wyatt


You can read your mail using any standard mail software, or even Outlook Express. If you want to be able to read it without reconfiguring the mailer on the workstation, as would be the case if you were only using it temporarily, your best option is to install a webmail program on the server - that way you can read your mail from anywhere with nothing more than a web browser.

One of the most popular webmail servers is Squirrelmail (www.squirrelmail.org). You will need an IMAP server running on the Linux box, because most webmail programs use IMAP. SquirrelMail is a PHP program, running through a web server, so you will also need Apache (or another web server) installed and running. Once installed and configured, which is well documented and a simple process, you can access your mailbox from most web browsers.

While SquirrelMail is one of the more popular and longstanding webmail projects, there are several other choices; I rather like RoundCube, from www.roundcube.net. This is an Ajax project, and although it's only at release 0.1beta2, it seems stable with a reasonable feature set. Which of these you choose to use, or whether you go for another alternative, such as NeoMail (http://neocodesolutions.com/software/neomail) depends on your needs. If you need only occasional access to check your email when away from your own computers, I would recommend you try RoundCube, although any of these is fine for this task. If you anticipate a heavier use requiring more of the features of a full email client, you should try them all and see which suits your needs best.

You can have more than one of these installed at a time by putting each one in a different directory on the server. That way you can jump between them before you decide which one suits your needs most closely. DK


Hidden partition

My father in-law recently expressed some interest in GNU/Linux, so I told him to download the brand-new-to-Linux distro du jour, Ubuntu. I really, really, expected the install to go smoothly, but he ran into a problem that has me stumped. The answer is probably simple, but beyond me.

He is using a new-ish P4 Dell that came with Windows XP pre-installed. He had about 10GB of free space to use for the installation. The normal installation procedure ran smoothly and instructed him to reboot. Upon reboot, Grub returned an Error 21. After he did a little digging, he found out that Dell places a small, invisible partition on its disks that contains Dell tools and utilities. Apparently, the MBR [Master Boot Record] has been moved somewhere unusual on these machines. After consulting the marvellous Ubuntu forums, I discovered that, a/ yes, this is the problem, and b/ nobody seems to have a good solution.

So, how do I configure Grub so that it boots properly? I don't have physical access to the machine, but I know it only has a single hard drive. After install, the partitions should lay out something like:

* hda1 Dell super-secret files.

* hda2 Windows.

* hda3 onwards /boot, / and swap.
Michael Marks


You don't say which model of Dell this is, but the usual layout is to put the Dell utilities partition on hda1 and a bootable Windows partition on hda2. The MBR should be in the usual place, otherwise the BIOS wouldn't be able to find the partitions. Grub Error 21 is a stage 2 error, which tells us that Grub has already loaded from the MBR and found its stage 2 files in /boot to be able to get as far as this error.

Error 21 means 'Selected disk does not exist', so this would appear to be an error in the Grub configuration: trying to load a kernel from the wrong place, such as a non-existent partition. This is definitely the case if Grub is able to load Windows (which would at least prove that Grub itself is working).

Press Esc to get to the Grub menu, highlight the Linux entry and press 'e' to see the details. You should see something like this:
root (hd0,0)
kernel /boot/vmlinuz-2.6.15-23-386 root=...
There is a good chance the root setting is wrong. Press 'c' to get the grub command prompt and type
find /boot/vmlinuz-2.6.15-23-386
using the full filename specified in the kernel line above. This will return the location of the partition that contains the kernel, which will probably be (hd0,2) or (hd0,4), depending on whether /boot is on a primary or logical partition. Press Esc to get back to the menu entry, highlight the 'root' line and press 'e' to change it to match the output of find. Press Enter to accept the change then 'b' to boot.
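
Once corrected, the Linux entry should look something like this - the (hd0,2)/dev/hda3 locations and the initrd filename here are only illustrations; use whatever find reported and whatever files are actually in /boot:

```
title  Ubuntu, kernel 2.6.15-23-386
root   (hd0,2)
kernel /boot/vmlinuz-2.6.15-23-386 root=/dev/hda3 ro
initrd /boot/initrd.img-2.6.15-23-386
```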

Once you know it works, you can edit the configuration file to make the change permanent by running this in a terminal:
sudo nano /boot/grub/menu.lst
You'll find the menu details below the line that reads '## ## End Default Options ##'. The bad news is that you really need physical access to the computer to do this, or to be able to talk your father-in-law through it. NB


Automating layout

I would like to print an image, JPEG or BMP, and a text file in one report. Is there any freeware that would allow me to do this easily via the command line? I want to pre-generate reports via a script automatically.
et_phonehome_2, from the LXF forums


Yes, there are a number of ways to do this. Which one you choose depends on the quality of output you need and how much time you are prepared to spend implementing it. The easiest way is to write the report to HTML, which can be viewed or printed in a browser. The following shell script will take the names of an image and a text file and write the HTML to standard output. It's a very basic example, but you'll get the idea.
#!/bin/sh
# Usage: report.sh image.jpg notes.txt > report.html
echo "<html><head><title>My Report</title></head><body>"
echo "<img src=\"$1\" align=\"right\">"
cat "$2"
echo "</body></html>"
At the other end of the spectrum would be one of the Tex-based packages, such as Tetex or Lyx. These are typesetting programs that give you a great deal of control over the finished document's layout. The learning curve is steep, but the results may justify it. Tex source files are plain text, so it would be easy to generate them from the command line using a template file and a short shell script.

Lyx provides a somewhat simpler means of producing Tex files. It is a graphical application, but once you have created your template you could manipulate it with a shell script. You have a choice of splitting the template into three and doing something like this:
cat template1.lyx >report.lyx
echo /path/to/my/image >>report.lyx
cat template2.lyx report.txt template3.lyx >>report.lyx
Or you could do something more complex by using sed to replace parts of the template with your text and image files. Once you have the report.lyx file, you can output it in a number of formats, all at the highest quality. For example,
lyx --export pdf report.lyx
will produce a PDF report. Lyx is a powerful program with detailed online help. Give it a try.
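
The sed approach mentioned above might look like this. The @IMAGE@ and @TEXT@ placeholder markers are our own convention, put into the template by hand - they are not LyX syntax - and the stand-in template here is far shorter than a real LyX file:

```shell
# A tiny stand-in template; a real LyX file would be much longer.
# @IMAGE@ and @TEXT@ are our own placeholder markers, not LyX syntax.
printf 'header\n@IMAGE@\n@TEXT@\nfooter\n' > template.lyx
printf 'The body text of the report.\n' > report.txt

# Substitute the image path, then splice the report text in place of @TEXT@
sed -e "s|@IMAGE@|/path/to/my/image|" \
    -e "/@TEXT@/r report.txt" \
    -e "/@TEXT@/d" \
    template.lyx > report.lyx
cat report.lyx
```

The 'r'-then-'d' pair is the standard sed idiom for replacing a marker line with the contents of a file.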

An alternative is to use the scripting capabilities of a word processor or page layout program such as OpenOffice.org (see our recent OOo Basic tutorials, LXF80-LXF84), AbiWord or Scribus. NB


Canon can't

I have a dual boot system (Windows XP and SUSE 10.1) and a Canon LBP-1120 laser printer. As stated, the printer works fine on the Windows system. The problem is that try as I might, I cannot make it do anything on the Linux setup. I have downloaded and installed the CAPT drivers (numerous times) and then gone through the printer configuration and generally, nothing happens. The most that has happened is that occasionally the printer will send a piece of paper through, with nothing printed on it. Apart from that, anything that I try to print just stays in the print queue, unless I email it to myself and use Windows to do the printing.
Nigel Norfolk


This is a 'Winprinter' - one that uses the driver to do part of the work of the firmware. As with their cousins, Winmodems, getting Winprinters to work with anything but Windows can be a frustrating process that is not always successful. You have a choice of drivers for this printer: there is the official Canon driver, which is the one I guess you have tried, and one recommended on www.linuxprinting.org that is available from www.boichat.ch/nicolas/capt. I suggest you try both of these drivers and also follow the advice given for this printer at http://linuxprinting.org/show_printer.cgi?recnum=Canon-LBP-1120.

When diagnosing printer problems, your first step should be to check the CUPS log files. Type
tail -f /var/log/cups/error_log
in a terminal, then try to print a page. You should see messages written to the error log in the terminal. This often gives a clue as to the cause. By default, the logged messages are quite limited. If you need more information, edit /etc/cups/cupsd.conf (as root), find the line
LogLevel info
and change 'info' to 'debug'. Restart CUPS, either from Yast or the terminal with
/etc/init.d/cups restart
Now the error_log will contain much more detail. As a general point of advice, any Linux user should check the printer database at www.linuxprinting.org before investing in a printer. NB


Service Monitoring

I am running a number of services on my server - is there a way that I can monitor these services, and restart them if they die? I wondered about using some sort of Cron task.
Henry Roberts


There are a number of programs written specifically for this task - the most popular of which is probably Mon, which you can get from www.kernel.org/software/mon. There is quite a long list of dependencies, mainly Perl modules, so it would be most convenient to install it with your distro's package manager. Mon can be installed on the computer that you wish to monitor or on any other computer that can reach it over the network; the latter is a better choice, as it will be able to let you know if the server dies altogether.

Mon is controlled by a config file located in /etc/mon. Here's an example section that monitors a web server:
hostgroup servers www.example.com
watch servers
  service http
    interval 5m
    monitor http.monitor
    period wd {Sun-Sat}
      alertevery 1h
      alert mail.alert webmaster@example.com
This will attempt to connect to the web server every five minutes and email an alert if it fails. The alertevery parameter means that although Mon will continue to check every five minutes, it will not send a mail on every consecutive failure, only nag you every hour. Mon is able to monitor more than services: it can also keep track of things like disk space and processes, which could help you prevent a rogue program or denial of service attack stopping the server completely. There are other alert options supplied with Mon, including pager alerts (after all, there's no point in an email alert if the mail server has just died). Monitors and alerts are Perl scripts, so you can customise them or build your own - the Mon website has a collection of user-contributed monitors and alerts; you can even be nagged by AIM or text message if you really want.

Another program worth considering is Monit - www.tildeslash.com/monit. This works in a similar way to Mon, but is designed to run on the server itself and be able to take corrective action rather than disturb the sysadmin. Monit is able to restart a service that has died - it also has a built-in web server that enables you to log in from a remote computer to check on the status of monitored services. The safest approach is to run Mon remotely and Monit locally. DK


Traffic Warden

I need to keep track of how much bandwidth my servers are using. How can I log network traffic for all or selected interfaces?
Tom Rice


There are a number of programs that will monitor and display the traffic through each network interface; most of these use information culled from the /proc filesystem. The main difference between them is the way in which they display the statistics.

For a simple overview, Vnstat is a good choice. Available from http://humdi.net/vnstat, and probably in your distro's package repositories too, Vnstat is normally run as an hourly Cron job, collecting statistics from /proc and adding them to its database. You can query this database at any time by running Vnstat from the command line. There are options to display the statistics by day, week or month as well as various other ways of tweaking the output.
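
The hourly cron job might be a crontab line like the following - eth0 is only an example interface name, and -u is Vnstat's database-update option:

```
# Update the Vnstat database at the top of every hour
0 * * * *  vnstat -u -i eth0
```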

If you need more than a simple ASCII report, you should try Traffic-vis, from www.mindrot.org/traffic-vis.html. This consists of a number of tools; the one that does most of the work is Traffic-collector, which should be running all the time. Traffic-collector collates information on the traffic passing through the specified network interfaces and saves this data to a file. This file is not meant to be read directly but passed to one of the other programs in the suite, which process the traffic data and produce reports in HTML, PostScript, plain text and GIF formats. The HTML option is particularly interesting if you want to monitor a web server, as you could have a CGI script run Traffic-tohtml and give you on-demand traffic reports from a web browser. There are other utility programs included that can process the data in other ways; for example, Traffic-exclude is a useful option if you have bandwidth limits or charges and want to know only how much traffic the interface passed over your more expensive connection while ignoring any traffic between, say, the web server and database server on the same network.


KFind finds too much

I am using Kubuntu 6.06 with the Ichthux packages but my question is common to any distribution using KDE. The 'Find Files' utility searches every directory on the root (/), including the directories in the /mnt directory. This means KFind is searching my files in other distributions, and I usually have several, so KFind loses a lot of time here. Is there some way to configure find to skip /mnt?

Otherwise the only solution I can think of would be to unmount /mnt every time before using KFind, but that might create problems.
Dave, from the LXF forums


KFind is a front-end to two standard shell commands: find and locate. Unfortunately it doesn't give access to all of the options of find, such as specifying which directories or filesystems to search or skip. All you can do is give a starting point. This isn't an issue when searching your home directory, the default, but it can cause the problems you describe when trying to search the whole filesystem.

Happily, by ticking the 'Use files index' checkbox you can elect to use locate instead of find. Locate uses a database built with the updatedb command for much faster searching, although it only finds files that were present the last time the database was updated. Updatedb is usually run as a daily or weekly cron task. The search path of locate is configurable, so you should add /mnt to the PRUNEPATHS list in /etc/updatedb.conf.
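
The relevant line in /etc/updatedb.conf would then look something like this - the other paths shown are common defaults, so keep whatever your file already lists and just append /mnt:

```
PRUNEPATHS="/tmp /var/tmp /usr/tmp /mnt /media"
```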

For maximum flexibility, it is worth learning the find and locate commands themselves, eg
find / /home -xdev -iname '*.pdf'
will look for all files ending in .pdf or .PDF in / or /home, but ignore other filesystems (thanks to the use of -xdev) such as /proc, /dev and those mounted under /mnt or /media. The find and locate man pages will give you a lot more information, but the main thing to remember is that locate is for a fast, name-based search while find allows far more control over the search parameters, including filename, file type and file age as well as the directories and filesystems searched. Unmounting filesystems mounted under /mnt may work, but is just as likely to fail if you have an open file or directory in any of them. Either way, it shouldn't be necessary. NB


Enlarging GTK fonts

I am looking for information on how to enlarge the fonts in Xara Xtreme, Gimp and similar programs that do not change when System Config > Appearance and Themes > Fonts are manipulated.

I am using SimplyMepis, taken from your DVD [LXF84] - and like it!
Bill Applebee


Xara Xtreme and Gimp use the GTK2 toolkit, whereas the System Config settings only affect KDE programs. I would normally recommend that you install the gtk2-engines-gtk-qt package, which makes Gnome and other GTK programs use your KDE settings and adds options to control the appearance of GTK programs to the Settings menu and the KDE Control Centre. However, although the package is installed by default with Mepis 6.0, it doesn't work: the files are there, but it cannot be loaded (others have reported the same on the SimplyMepis forums).

Worry not: there is another way to do what you want, by installing gnome-control-center and using gnome-font-properties. Run the program by typing gnome-font-properties in a terminal or the Run command dialog that appears when you press Alt+F2. This allows you to set Gnome fonts in a similar manner to the way you set KDE fonts.

This technique will modify the fonts for the current session, but they will revert to the defaults on the next restart. To make the changes permanent, type this in a terminal as your normal user (not root):
ln -s /usr/lib/control-center/gnome-settings-daemon ~/.kde/Autostart/
This ensures that gnome-settings-daemon is started whenever you load your KDE desktop. The daemon causes your Gnome settings to be applied to all GTK programs. MS


Configure me this

I have my usual problem in trying to install a VNC program to my laptop running Xubuntu. I downloaded all the VNC programs on to a recent disc and copied them to my laptop. I then used the correct tar instruction to unpack the archives of both TightVNC and VNC, ran cd into the resulting directory, typed ./configure - and was told the command did not exist. I'm afraid this is the usual state of affairs when I try to load your programs, either on Ubuntu or SUSE. Am I doing something wrong?
jfl


In my ever so humble opinion, it is the distro makers that are doing something wrong. They assume that you will find every program you need in their repositories and that only developers will need compiler tools. In reality, most Linux users will need a compiler at some time, even if it is only to install a driver for their network card or the latest Nvidia graphics card drivers. Even installing VMware, a closed source binary package, requires a compiler for the kernel modules. Many of these require the kernel source too, another package considered non-essential.

That's enough ranting. On Ubuntu, you need to install the build-essential package, which includes everything you need to compile from source. With SUSE, you need the gcc package. If you want to install software from source tarballs, which will happen sooner or later, these packages will be essential, so install them now and avoid any grief later. However, in this case compiling from source is not necessary. Both distros include recent versions of TightVNC in their standard repositories or on the install discs, with Ubuntu also including the standard VNC. Unless you need the latest, most bleeding-edge versions, it makes sense to use the packages that came with your distro, as they have been tested, and you will be informed of any updates. NB


I want my modem back

I have at last got a good program that I have been after for a long time. With LXF85 along came SLED 10. Brilliant! However, I already have SUSE 9.3/10.1 and my modem works fine in both of these, but try as I might I cannot get my modem working in SLED 10. It keeps asking me for an Ethernet card, which I don't have, and saying that it is not connected. I know the card is not damn well connected because I don't have one! How do I configure the modem to work? I am on dial-up, I do not have broadband.
Eric Jordan


When you installed SLED, you should have seen a screen listing your networking hardware (Ethernet/DSL/modem) with options to configure each. If you have an Ethernet connection in your computer - and most motherboards have one built in nowadays - this will be used as the default if you didn't select anything else. (Remember that SLED is an 'Enterprise Desktop' distro, so its native habitat is a PC connected to a LAN.) I suspect this is what has happened here. The default setting for a network card is to use DHCP to ask the network for an address and other connection details. Because your network card is not connected, this request fails, giving the error message you see.

You need to disable the Ethernet connection and enable your modem. Both these tasks are done in the Control Centre. Select Network Cards, highlight your network card and press Delete. The card will still show up, but it is now listed as Not Configured. Now go back to the Control Centre and select Modem. From here, you can choose an ISP and input your connection details. NB


Waiting for files

I am having some trouble with an updated Red Hat server at work. We have an applet that makes an FTP connection to the server, and users can upload files. This all works fine.

The problem lies with a script that runs on the server to look for changes in the modification date of the folder that they are loaded into. When the date changes it processes the file. This worked fine on the old server, but on the new server it appears that while the transfer is in process the folder modification date is being changed. This means that we are getting truncated files, because the process starts before the transfer is complete.

Is there any way to set how folder modification dates are created? Is it an FTP issue or an OS issue? The server is RHEL ES 4, the old server was RHEL ES 2.
Kevsan, from the LXF forums


Is your script looking at the file modification dates manually? The problem is that the folder is modified twice: once when the new file is opened at the start of the upload and again when it is closed on completion. I had to set something like this up recently and found the best approach is to use the Fam (File Alteration Monitor) service, which is able to distinguish between these events. You need to install Fam and ensure that the famd service is run at startup. Then you need a program that will ask the server to watch for changes to your files or directories and take the appropriate action when informed of them. I found the fileschanged program ideal for this if all you need to do is run a script. You can get fileschanged from http://fileschanged.sourceforge.net, and run it like this:
fileschanged --show changed --exec /usr/local/bin/ourscript /var/ftp/somedir/
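
Fileschanged invokes your handler with the event letter first and the filename second, so a minimal sketch of ourscript might look like this - the echo is a placeholder for your real processing step, and the function wrapper is purely for illustration:

```shell
# Invoked as: ourscript <event-letter> <filename>
handle_change() {
    event="$1"
    file="$2"
    case "$event" in
        M) echo "processing $file" ;;  # placeholder for the real processing step
        *) : ;;                        # ignore other event types
    esac
}

handle_change M /var/ftp/somedir/upload.dat   # prints: processing /var/ftp/somedir/upload.dat
```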
The --show option tells fileschanged to listen only for changes to files in the directory, skipping the initial notification when the file is created (fileschanged can also watch for files being deleted or executed). When it receives a notification, it runs the script with two arguments. The first is a single letter indicating the type of file change: 'M' for modified. The second argument is the name of the file. With this information, your script knows the name of the file that was uploaded and can do whatever you want. You may also find it useful to add a --timeout option to increase the delay in notification of changes to a file. DK


Mepis menu missing

I have just received LXF84. I tried out SimplyMepis 6.0 on the DVD, and noticed you have left out the start-up screen with F1, F2 and most important of all, F3. This is a drop-down menu that lets you choose your screen resolution - in my case 1,600x1,200. You can, if you know how (I do not), alter it after install by using the command line, but it is not the sort of thing a new user should have to do.
Jim, from the LXF forums


The LXFDVD included SimplyMepis and Knoppix, giving a choice when you boot, so the Mepis-specific menu had to be omitted. The standard Mepis CD has the graphical boot screen with the menu you mention, but as it happens it makes no difference what I select here - it always boots into 1,024x768. It appears that the screen resolution is hard-coded into the boot configuration on Mepis 6.0. Fortunately, Mepis uses the Grub bootloader on the CD, and this makes it easy to change the configuration on the fly.

The usual way to change Grub options is to highlight the relevant entry in the menu and press 'e' twice to select the first line of that entry for editing. Change the number after 'vga=' to the value for the resolution you want. Press Enter to accept the change and 'b' to boot with the changed setting. Users of the Mepis Live disc, but not our DVD, can also achieve this by highlighting a menu entry and using the cursor keys to move to the VGA setting in the box at the bottom of the screen, where you can change the number before pressing Enter. The table above lists the most common values for the VGA setting. To get the 1,600x1,200 you want at 16-bit colour depth, you would use 'vga=802'. NB


Kded or KDE dead?

I have just installed Free Mandriva Linux 2006 from the LXF75 DVD. Every time I boot the machine, within a couple of minutes everything slows right down and it becomes difficult to use. I have GKrellM installed and this shows the CPU working flat out. Checking with top shows the culprit is kded. If I kill this, the problem is solved. What does this daemon do, and can I permanently disable it?
mikejd, from the LXF forums


Do you have the search tool Kat installed? If so, this is probably the real CPU hog. Kat calls kded when working, but although kded shows up in top, the problem is caused by Kat, which is notorious for its ability to bring the most powerful of machines down to ZX81 levels of performance, although later versions are reported to be rather less demanding. Kded is a generic service daemon run by KDE. It handles updates to KDE's Sycoca database, which holds application information. This is probably the part that is sucking up all your CPU cycles.

The most extreme solution is to remove Kat, but you can kill the program with
killall kat
killall katdaemon
killall kded
The first two lines kill all Kat processes; the third kills kded, which is still trying to process all the requests from Kat. If you check the process list, you'll see that KDE restarts kded, but that it is no longer bogging your system down.

You can prevent Kat restarting next time you boot with
touch ~/.mdv-no_kat
If you want to re-enable Kat so it starts automatically in future, delete ~/.mdv-no_kat. NB


Don't get apt-get

I am attempting to update the Epiphany browser in Debian 3.1 Sarge, using

su && apt-get install epiphany-browser

The following is what I get:

Reading Package Lists... Done
Building Dependency Tree... Done
epiphany-browser is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2 not fully installed or removed.
Need to get 0B of archives.
After unpacking 0B of additional disk space will be used.
Setting up kernel-image-2.4.27-3-686 (2.4.27-10sarge3) ...
cp: writing `/tmp/mkinitrd.MMwVww/initrd//lib/libc.so.6': No space left on device
cp: writing `/tmp/mkinitrd.MMwVww/initrd//lib/libe2p.so.2': No space left on device
run-parts: /usr/share/initrd-tools/scripts/e2fsprogs exited with return code 1
Failed to create initrd image.

Hmm, 'epiphany-browser is already the newest version'? I've seen Epiphany 1.7.6 available for download! Unless this is an example of Debian's infamously glacial update cycle, and Epiphany 1.4.8 really is the latest version available for Debian distros. And what's this 'no space left on device' message about? Barely 14% of my hard disk is currently occupied!
AMS Bradley


The latest release version of Epiphany is 2.14.3; the latest in the 1.x series is 1.8.5. While Debian has only v1.4.8 in its stable distribution, Debian Testing has v1.8.3 and v2.14.3. See what is available in the various releases at http://packages.debian.org. You need to add testing repositories to your sources list, either by editing /etc/apt/sources.list by hand or by running Synaptic. To do this easily, duplicate the stable entry and change it to testing.

As for your 'no space...' message, this refers to a file in the /tmp directory. Do you have /tmp on a separate partition? This is a common setup, and a good idea. It prevents a runaway process from filling up your hard disk as it writes to a temporary file. I suspect this is what has happened, resulting in a full /tmp. Run
df -h
in a terminal. If /tmp shows up as a separate filesystem at 100% full, this is what has happened. You can safely delete any file in /tmp that is older than your last reboot. MS


Chmod bits

I'm new to Linux and have been a bit confused as to how the permission 'bits' work with chmod. Can you help?
Asim Mohammed


Chmod access permissions can be expressed as either three octal digits or three groups of letters. This trio represents permissions for the file owner, the group and 'world' respectively. Take chmod 755. Each digit is a sum of values expressing the various permissions. Here is each bit with its value:
 0 = no permissions (---) 
 1 = execute only (--x) 
 2 = write only (-w-) 
 3 = write and execute (-wx) 
 4 = read only (r--)
So to set read and execute permission you'd use 5; this is 1 for execute added to 4 for read access. For full access, you'd add 4 for read, 2 for write and 1 for execute, 4+2+1=7.
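
To see the sums in action (the filename here is just an example):

```shell
# 755: owner 7 = 4+2+1 (rwx); group and world 5 = 4+1 (r-x)
touch report.txt
chmod 755 report.txt
ls -l report.txt    # permissions column reads -rwxr-xr-x

# 644: owner 6 = 4+2 (rw-); group and world 4 (r--)
chmod 644 report.txt
ls -l report.txt    # permissions column now reads -rw-r--r--
```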

If you set the permissions on a file to be 755, that means the owner has full access (7) and the group for the file has read and execute access (5), as does 'world', ie everyone else. There are other bits you can set for special functions but these are the main ones. DK


Family-friendly SUSE

I have a PC running SUSE 10.0 that is used by everyone in my household for browsing, email, TV and playing movies and music. None of my family has any Linux user knowledge and this has caused some hassles when I am not around. The PC is connected to broadband via Wi-Fi, which works well almost all of the time. The only problem I have is that when my ISP has problems and drops me off (usually in the early hours of a Saturday) the wireless router needs to be restarted. This requires a restart of the networking on the PC - it's a simple process for me, but I cannot get my non-techie family members to understand firstly why a terminal session is needed and secondly what all that root rubbish is about! They just want an icon to initiate the network restart. Can you help?
Dave Wise


If your family don't understand why a root password is needed, you definitely should not be giving it to them! This is exactly the sort of situation that calls for sudo, which was the subject of last month's A Quick Reference To. You need to create a script that contains the sequence of commands needed to bring your connection back up and save it somewhere safe, say /usr/local/bin/restartnetwork. Make sure root owns this script and only root can edit it with
chown root: /usr/local/bin/restartnetwork
chmod 755 /usr/local/bin/restartnetwork
Add this to your /etc/sudoers file so that anyone in your users group can execute it without a password, like so:
%users ALL = NOPASSWD: /usr/local/bin/restartnetwork
This allows all members of the users group to run your script without having a password. If you change NOPASSWD to PASSWD, the user will have to provide their own password. You could specify individual users instead of a group by using a comma-separated list, such as
ma,pa,johnboy ALL = NOPASSWD: /usr/local/bin/restartnetwork
Now any authorised user can run the script with
sudo restartnetwork
The full path is not needed here if /usr/local/bin is in $PATH, but it must be given in /etc/sudoers. Once you have your script, you can drop it on to the desktop or the panel to create an icon or button and any of your users can reset the network with a mouse click. Because sudo is executing the script as root, all of the commands you put in it will be run as root when called from the script, without giving your family permission to run those commands (or any others) directly. I use this method to add a button to my laptop's panel to start my wireless network - not because I don't know the root password, but because I am lazy and one click is less effort than typing my password. NB


A space odyssey

My / partition is nearly full. I need more space, and have a free partition where I could put, for example, /usr/lib. But how is that done?
jensjk, from the LXF forums


Linux allows you to mount a new filesystem anywhere under your / directory, making it quite easy to use a separate partition for part of the overall filesystem to increase the space available. The trickiest part of the process is moving the data from the original filesystem to the new one. If you do not already use a separate partition for /home, I would strongly suggest using this, because separating /home carries several advantages. Whatever you do, back up first. If you accidentally delete the wrong data, you'll be glad you made a backup.

Copying data, particularly system files, while a filesystem is in use is a risky business, so you should boot from a Live CD, such as Knoppix. This assumes that your current partition is on /dev/hda1 and you are moving home from there to /dev/hda2. Make the relevant adjustments if your system is different.

The first step is to run QtParted to prepare and format the new partition. Now open a terminal and type the following:
su
mount /dev/hda1 /mnt/hda1
mount /dev/hda2 /mnt/hda2
rsync -avx /mnt/hda1/home/ /mnt/hda2/
The first line gives you root access, the next two mount your old and new partitions, and the last replicates everything from the old home directory to the new partition. You could use tar or cp to copy the files, but I find rsync to be the most reliable method of producing an exact copy, including all permissions and timestamps.

Now you need to add a line to /etc/fstab so that the new partition will be used. Knoppix comes with the Nano text editor, among others, so do
nano /mnt/hda1/etc/fstab
and add a line like
/dev/hda2 /home ext3 defaults 0 0
This assumes you formatted the partition with the ext3 filesystem, the default in QtParted. If you used ReiserFS instead, change ext3 to reiserfs.

If you reboot into your distro and type 'df -h' in a terminal, you will see that /home (or whichever directory you decided to move) is on its own partition. "But," you are shouting at the monitor, "my / partition is still full!"

That is because you copied the data to the new partition, so it is still in the old location too. This was deliberate, so you could go back if something went wrong. The data is there, but invisible because the new partition is mounted on /home, obscuring the original files. You could reboot Knoppix to remove these files, once you are sure you want to, but here is a little trick to save you having to reboot:
mkdir /mnt/tmp
mount --bind / /mnt/tmp
rm -fr /mnt/tmp/home/*
This lets you see and delete the files in the old home directory. Make sure you only delete the contents, not the directory itself, which is still needed as the mount point for the new partition. You could do this with /usr/lib as you suggest, but /home is a better choice if not already mounted elsewhere (otherwise look at moving /usr/local). A lot depends on how much space you want to free up, so it helps to know how much space each directory is using. My favourite tool for this job is Filelight, available from www.methylblue.com/filelight and included in some distros' package repositories.
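
If you just want quick figures from the command line rather than a graphical tool, du can do a rough version of the same job:

```shell
# Space used by each top-level directory, sorted smallest to largest;
# -k reports kilobytes, -x stops du crossing into other mounted filesystems,
# and permission errors on unreadable directories are discarded
du -kx --max-depth=1 / 2>/dev/null | sort -n
```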

You can also use LVM to combine disks for more space, as described in LXF83. NB


Moving Money

I have been using Linux for about three years; two years running Mandrake and one running Ubuntu. I am trying to convert my other half to use it as well - she already uses OpenOffice.org on her XP machine. I have loaded a disc for her with Ubuntu Dapper Drake but there are two snags. One is that I cannot successfully export MS Money files to KMyMoney: the data files will export to QIF files, but KMyMoney will not import them no matter what I do. It says it is an unrecognised format, probably the Microsoft version of a QIF file.

The other problem is that she also uses MS AutoRoute and I cannot find an equivalent for Linux. I was thinking of using Wine as an alternative but I know nothing about running Wine, or how one installs a Microsoft program using it. Any help that you could give to me would be most gratefully received.
John Morton


The answer to your first question is to use a different program to convert the files. GnuCash will import Microsoft QIF files, which you can then save out in GnuCash's own format. GnuCash has options to handle several variations on the QIF format (I've successfully imported files from MS Money in the past). KMyMoney has an option to import GnuCash files - I use this because I keep my accounts in GnuCash but like KMyMoney's reporting options. The reason for using GnuCash's own format for saving is that this is a fixed format, whereas QIF files come in quite a variety of flavours. You could also try the latest version of KMyMoney, released recently - it mentioned improved QIF support.

There is a route-planning package for Linux called Navigator, from www.directions.ltd.uk. This is a commercial product that works on x86 Linux and Windows. There's no demo version, so check with the manufacturer for compatibility first. Installing Wine is easy with Ubuntu: there is a package in the Universe repository. Select Settings > Repositories in Synaptic and tick the box for 'Ubuntu 6.06 LTS (binary) Community maintained (Universe)', close the repository window and hit Reload. Once the reload is complete, use the Search button to find Wine. You need only select the wine package itself. Once Synaptic has installed the package, run winecfg to set things up, although the defaults are fine for most uses. Now you can run a Windows program with
wine /path/to/someprogram.exe
Andy Channelle has written an extensive beginners' Wine tutorial on page 80. Also try the Wine Applications Database at http://appdb.winehq.org for information on compatibility with various programs. You could consider, too, CrossOver Linux, the commercial derivative of Wine from www.codeweavers.com. NB


WebDAV daemon

I've got WebDAV access to my Hotmail account. Is there any way I can get it into my POP3 server?
Harish Tandon


There are a few things out there that will do the job for you. I prefer to use Hotwayd. It runs as a simple inetd service and can be used in conjunction with Fetchmail. Get the source from http://hotwayd.sourceforge.net and once you've expanded the archive, simply install it with your favourite configure options.

When you've done that and Hotwayd is installed, you need to activate it. To do this with xinetd, create a file in /etc/xinetd.d called hotwayd and populate it as follows:
service hotwayd
{
only_from = 127.0.0.1
disable = no
type = unlisted
socket_type = stream
protocol = tcp
wait = no
user = nobody
groups = yes
server = /usr/sbin/hotwayd
port = 1100
}
Restart xinetd and you're sorted! From there you can use Fetchmail. Simply create a .fetchmailrc file in your home directory containing:
 poll localhost protocol pop3 port 1100 username "username@somemail.com" password "yourpassword"
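While you are testing, it can be reassuring to leave the messages on Hotmail until you are sure everything works; fetchmail's standard keep keyword does this (remove it once you are happy):

```
poll localhost protocol pop3 port 1100 username "username@somemail.com" password "yourpassword" keep
```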
Now run Fetchmail. It will poll the Hotwayd gateway and pull your mail down from Hotmail for local delivery. DK


Wherefore art thou admin?

I have been a bit of a fool and removed admin privileges from all three users on my Ubuntu system. This cut down the System-Administration menu to only a few items. At the moment, I do not have a user in the admin group that I can log in as to manage system items. I can use Gnome Terminal as sudo but cannot work out how to add one of the users back in to the admin group. I tried to use the usermod -g command but cannot get it right. That is, of course, if I am using the right command. How can I add my user 'master' to the admin group again?
ellip, from the LXF forums


First, don't worry about having made a mistake, we all do it. The two most important aspects of making a mistake are learning from it and not letting anyone else know you've done it. The command to add your user to the admin group is
gpasswd -a master admin
However, only the root user can manipulate the password and group databases, so you have a Catch 22 situation here: you need to use gpasswd to add yourself to the admin group, but you need to be a member of the admin group to do this.

Do not despair, there is a simple solution. The installation disc is also a Live distro, set up so you can run root commands with sudo. Boot from the disc, open a terminal and run
sudo bash
mount /dev/hdaN /mnt
nano /mnt/etc/group
Replace hdaN with whichever partition contains your Ubuntu installation. Nano is an easy-to-use console text editor. Scroll down to the line beginning 'admin:x:112:' and add 'master' to the end, so it reads
admin:x:112:master
You can add more than one user if you wish by separating them with commas, for example:
admin:x:112:master,slave
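The same edit can be scripted if you prefer; this is a hypothetical sketch (it only handles the empty-member-list case shown above, and the function name is ours), not a tool Ubuntu provides:

```shell
# add_to_admin GROUPFILE USER - append USER to an empty admin entry.
# If the admin line already lists users, edit it by hand instead.
add_to_admin() {
    sed -i "s/^\(admin:x:[0-9]*:\)\$/\1$2/" "$1"
}
# Example: add_to_admin /mnt/etc/group master
```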
Don't worry if the number is not 112; leave it as is. Press Ctrl-X to save the file and exit, then reboot from your hard disk. You should now have your admin privileges again. NB


Linux 4 Video

Is it possible to use Linux to create a web-based TV station, broadcasting over the internet, mixing live feeds from webcams or video cameras? Secondly, if it is, is it possible to do this totally with open source?
John Preston


Yes, it is possible, and all with open source software. You haven't given us much detail about your intended project, so it is difficult to give specific help, but the (LS)3 Open Media Streaming Project looks to be a suitable starting point. This includes Fenice, a multimedia streaming server, and plenty of documentation to help you. It specifically mentions streaming from live video feeds. Fenice supports Video4Linux devices, so any webcam that works with Linux should be suitable. The (LS)3 website is at http://streaming.polito.it and includes discussion forums where you can exchange information with the developers and other users.

Another server worth investigating is Flumotion, from www.fluendo.com. This is a commercial project, but the basic server is free under the GPL. You may also find a use for FreeJ, for mixing images and effects in real time. Its website is at http://freej.dyne.org.

Finally, you should look at Dynebolic, a distro aimed at multimedia production and broadcast. It can be used as a Live CD, enabling you to try it out before installation. You can get the latest version from www.dynebolic.org.

Good luck with your project and let us know when it goes public! NV


Removing Linux

I purchased Fedora Core 5 on DVD from a publication and installed it on my Acer notebook, thinking I would always have access to my programs already installed on my Windows desktop. That was a wrong assumption on my part. As you can tell I'm new to this Linux OS. I realise there is the Wine software that allows one to incorporate Windows with Linux. Unfortunately, when I partitioned my hard drive, I lost my wireless connection to the internet and I can't seem to reconnect. Also, I have important software that I have to use on my Windows XP Pro but can no longer access.

My question to you is: how do I undo the partition, thus removing Fedora Core 5 until I'm ready to reinstall it? I've tried to upload Partition Magic 8.0 software that I have but it won't work since it's an .exe file. I should also mention that my notebook came pre-installed with Windows XP and therefore I don't have the Windows XP CD. I've gone to different websites from my PC but to no avail since they all mention having the XP CD.
Ari


There are two possibilities here. The first is that you deleted your Windows partition when installing Fedora Core by choosing the option to Remove All Partitions On Selected Drives. If this is the case, you have lost Windows and will have to reinstall. You should be able to obtain an installation CD from your laptop's supplier or manufacturer.

The second, and hopefully correct, possibility, is that you still have Windows installed but have lost the option to boot it. Most distros' installers have the option to set up a dual-boot with Linux, where you get to choose your operating system each time you boot up. When you see the message 'booting Fedora Core... in n seconds' shortly after booting, press a key and you will see a menu. If Windows is on this menu, select it and you'll have Windows working again. To remove the Fedora Core bootloader and have your system boot straight into Windows needs a Windows rescue disc. As it does not need a full XP installation CD, you can usually fix things with one of the boot discs available from www.bootdisk.com. It will be easier if you have access to a working Windows computer to download a disc image from here and write it to a floppy disk.

Boot into the rescue system and run fixmbr to restore the Windows bootloader and remove the Grub menu. Fedora Core will still be there, but you can now run Partition Magic to reclaim the space it uses. NB


Move over NFS

I've been using NFS between two boxes but have noticed that it's not the fastest transport in the world. Is there anything else you can recommend?
E Mays


Indeed, there's a great project that has developed something called the Network Block Device (http://nbd.sourceforge.net). Support has been compiled into the kernel for some time now, and it essentially presents a remote file or device as a local block device. The only downside is that you can only have it mounted read/write by one machine. Assuming that this wouldn't cause you any problems, I'd suggest you give Network Block Device a go - it's much faster than NFS and is really straightforward to configure.

First of all, because NBD uses a file rather than a directory as its device you need to create a file to the size you require. To create a 1GB NBD you can do the following on the server:
dd if=/dev/zero of=/mnt/nbd-drive bs=1G count=1
This will create a 1GB file as /mnt/nbd-drive. Next up you need to tell the NBD server to start up, listen to a certain port and use the file we just created. In this example we are using port 1077:
nbd-server 1077 /mnt/nbd-drive
Once this is done, load the nbd module on the client machine with 'modprobe nbd', then attach the device with 'nbd-client 192.168.1.2 1077 /dev/nd0'. Obviously you need to replace the IP given here with that of your server. You can use any filesystem you want to with NBD - because this is the first time we have accessed it, we'll format it ext2 with 'mke2fs /dev/nd0'. And finally we can mount it:
mount -text2 /dev/nd0 /mnt/nbd-drive
If your server has multiple network cards you can start NBD on multiple ports to provide extra capacity or resilience:
nbd-server 1077 1078 1079 1080 /mnt/nbd-drive
And then on the client you can specify multiple IPs and ports:
nbd-client 192.168.1.2 1077 1078 192.168.2.2 1079 1080 /dev/nda
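Writing out a full gigabyte of zeros with dd takes a while. If your filesystem supports sparse files, a common alternative is to seek to the final byte and write just that one (the path here is illustrative):

```shell
# Create a sparse 1GB backing file: seek to the last byte and write it.
# The file reads as 1GB of zeros but occupies almost no disk space yet.
dd if=/dev/zero of=/tmp/nbd-drive bs=1 count=1 seek=$((1024*1024*1024 - 1)) 2>/dev/null
# Confirm the apparent size in bytes.
stat -c %s /tmp/nbd-drive    # prints 1073741824
```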
DK


NIC and easy

My box has really poor network performance. Someone recently mentioned I might be set to half duplex (whatever that is). How can I find this out and what speed I am connected at?
Jean-Guy Leconte


Firstly I'll explain half duplex. In a nutshell this means that your network card has negotiated with your network hardware and is not sending and receiving packets at the same time; in essence it's a one-way conversation. If you are using any modern piece of network hardware you should be able to achieve full duplex easily. When a NIC is connected to a network device it has to negotiate a compatible speed and duplex setting at the physical layer. On most cheaper switches this is done through a process known as autonegotiation: the switch 'advertises' what link modes it supports, the NIC chooses one and informs the switch. This is the default behaviour for most NICs.

On more expensive managed switches this setting can be fixed to ensure optimal performance. Often, if this is configured on the switch but your machine is still set to Autonegotiate you'll end up with a duplex mismatch, which causes network performance to be poor. To find out what your NIC is currently set to you need to use the ethtool command:
[root@dan ~]# ethtool eth0
This will show you various details. Note the Duplex and Speed entries; you'll also see what advertised modes the switch supports.

Assuming your duplex is the issue and your switch is hard set to, say, 100Mbps for speed and Full Duplex, you can change eth0's setting by executing
 ethtool -s eth0 speed 100 duplex full autoneg off
Be aware, though, that this will revert when you reboot the system. To set it permanently you should set the options for your NIC driver in modules.conf so they are applied when the driver is loaded. If this doesn't solve the issue there are a number of things you can look at, but first you need to narrow down the problem. Is it a particular service that is slow? Your network connection could be fine but a service could be slow to respond for a number of reasons. Run ifconfig and check for Tx/Rx errors or collisions. Is it just your machine, or are several machines affected by a saturated switch? In essence, you need to track down where the issue lies to define your problem and resolve it! DK
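If working out the right driver options proves fiddly, another approach is simply to rerun the ethtool command from a boot-time script; /etc/rc.local is shown here purely as an illustration, as the file's name and location vary by distro:

```
# /etc/rc.local (location and name vary by distro)
ethtool -s eth0 speed 100 duplex full autoneg off
```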


Questions for SUSE

I have just installed SUSE 10.1 and have the following queries. First, where do I install Mozilla plugins? I can't find a mozilla/plugins directory. Second, I created a user during installation but this doesn't have root privileges. How can I achieve this?
Brian Clifton


Mozilla and Firefox plugins can be installed in one of two places, depending on whether you install as root or a user. System-wide plugins and extensions are stored in /usr/lib/firefox/plugins and /usr/lib/firefox/extensions, respectively. Those installed by a user, which happens when you install directly from a website such as http://plugindoc.mozdev.org or http://addons.mozilla.org, go into the appropriate directory under the user's home directory. This is .mozilla/firefox/xxx.default, where xxx is some random string of characters. You shouldn't normally need to manipulate these files directly; installing, updating and removing extensions can be done from within Firefox.

Your normal user does not, and should not, have root privileges - otherwise what's the point of a separate root user? When you need to run something that requires root privileges, Yast (or whatever program you are using) will usually ask for the root password, which you set during installation. The program will switch to root for as long as is needed and then switch immediately back to your normal user. If you need to run a terminal command as root, type
su -c "command you want to execute"
to run a single command or
su -
somecommand
someothercommand
...
logout
In both cases, you will need the root password. NB


Installing CentOS

I installed CentOS 4.3 on my Pentium III, Windows 98 computer and it worked great. At the top of page 70 of LXF81 [Coverdisc], it says I need a Pentium CPU. I also have an AMD FX-53 with Windows XP and Fedora 5. I would like to replace my Fedora 5 with CentOS 4.3 on this computer also, but am afraid to try. Would it be OK to try to install CentOS on my AMD computer? If not, is there a way to do it with some third-party software or will there be an AMD version in the future?
R Davison


That should really say "Pentium-class CPU". That is, anything compatible with an i586 processor. Pentium is Intel's trademark, but AMD CPUs are compatible. You can run CentOS from the coverdisc on your FX-53, but you wouldn't be getting the most out of the chip. The FX-53 is a 64-bit processor, but would switch to 32-bit mode to run the supplied version of CentOS. That will still be faster than most 32-bit CPUs, and it will be fine for trying out CentOS to see whether you like it, but if you want the best performance, you would be better off with the 64-bit version of CentOS, available for download from www.centos.org. If you are not able to download it, check the adverts at the back of the magazine for someone who will be able to supply you with it on a disc. NB


Hung up

I'm trying to mount an NFS export from my server to my workstation, and it keeps hanging. There is nothing in any logs to indicate what's going wrong - are there any common causes?
Horley


NFS relies on a number of RPCs, or remote procedure calls. The key to all of this is the portmap service, which processes RPC requests and sets up connections to the correct RPC service. Check that it's running by looking at the process list:
[root@test gnump3d-2.9.8]# ps -ef | grep portmap
rpc 2584 1 0 Jul23 ? 00:00:00 portmap
root 30843 30474 0 07:45 pts/4 00:00:00 grep portmap
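A quick way to check all of the helper daemons in one go is a short shell loop (the daemon names here are typical but vary slightly between distributions, so treat this as a sketch):

```shell
# Report which NFS helper daemons are currently running.
# pgrep -x matches the exact process name, avoiding false positives.
for d in portmap rpc.mountd rpc.statd rpc.rquotad; do
    if pgrep -x "$d" >/dev/null; then
        echo "$d running"
    else
        echo "$d NOT running"
    fi
done
```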
Assuming your NFS server service is started, you should see the following RPCs running: mountd, nfsd, lockd, statd, rquotad and idmapd. Depending on your distribution these may be started by the startup script for NFS. If they aren't, it's possible to start them manually:
[root@test]# rpc.mountd
[root@test]# ps -ef | grep mountd
root 30906 1 0 07:54 ? 00:00:00 rpc.mountd
You can repeat this for the other services, but you will need to add them to the appropriate startup script to ensure that they are restarted at boot time. KC


Silence is not golden

I am quite a new user of Linux and have just installed SUSE 10.1. When I tried to use Amarok on this installation I got no sound, even though the soundcard appeared to be working (it sounds at startup, for instance).

I noticed that you had a new version (1.4) on your coverdisc [LXF82] so I have tried to install it through Yast. Can you tell me how I specify the new program on the LXFDVD to Yast? I know this must be a very basic question but at the moment I do not know how to do it.
Adrian


There are two separate questions here: one about Amarok and one about software installation.

Look at the status bar at the bottom left of the Amarok window when you try to play a song - this will give you some feedback. If the song appears to be playing but you hear nothing, open the mixer (usually a speaker icon in the taskbar) and make sure that the volume controls are set high enough. If Amarok refuses to play the song, it is probably your sound engine configuration at fault. Look in the Engine section of the Settings window. If this is set to 'aRts' and you are running a Gnome desktop, you are unlikely to hear anything, because aRts is the KDE sound engine. The best setting for this, in terms of both quality and compatibility with all desktops, is Xine. You may also need to set the output plugin - Autodetect normally works; if not, set it to ALSA.

Yast is more suited to installing software from the repositories that it knows about. These include the SUSE directory of the install disc and any online update servers that may have been added automatically during installation, or manually by you later. You can tell Yast to install from individual RPM files from the command line, as root, with
su
yast2 --install /media/LXFDVD82/Sound/AmaroK/SUSE/*.rpm
to ask it to install the packages from the DVD. However, Yast doesn't handle dependencies when run this way and may well fail without telling you why. It is better to use the rpm command directly:
su
rpm -Uhv /media/LXFDVD82/Sound/AmaroK/SUSE/*.rpm
It may still fail, but at least it will tell you what is missing. A more satisfactory solution is to add a repository to Yast containing the newer software. You can find a list of such repositories, along with instructions for adding them to Yast, at http://en.opensuse.org/Additional_YaST_Package_Repositories. NB


Logjam!

Can you help me reduce the number of identical log messages? When I first started using Linux there were lines of 'message repeated x times', but these have become rare. The problem is not really the size of the files but the difficulty of finding important single messages. I have appended below some of the common sequences that occur with Mepis 3.4.

The first group of messages comes from my Zip drive breaking up the log through booting and beyond. The larger figure is the total size of the disc. Only the smaller is supplied by the partition table. The second group looks as if something is sending pings at one-minute intervals. So 10.10.10.134 is the local IP address and 10.10.10.91 is remote. The third group produces hundreds of these messages within a few seconds but this occurs only occasionally. You can see the signs of a race condition.

There seems to be little effect on the functioning of my machine but I would like to be able to find more serious errors without having to trawl through so much guff. Here are examples of the messages:

Jul 18 19:07:40 localhost kernel: hdd: The disk reports a capacity of 752896000 bytes,
 but the drive only handles 752877568
Jul 18 19:07:40 localhost kernel:  hdd: hdd4
Jul 18 19:13:20 localhost kernel: martian source 10.10.10.255 from 10.10.10.134, on dev eth1
Jul 18 19:13:20 localhost kernel: ll header: ff:ff:ff:ff:ff:ff:00:0a:5e:1d:53:c2:08:00
Jul 18 19:14:00 localhost kernel:  [unmap_page_range+217/232] unmap_page_range+0xd9/0xe8
Jul 18 19:14:00 localhost kernel:  [unmap_vmas+172/376] unmap_vmas+0xac/0x178
Jul 18 19:14:00 localhost kernel:  [unmap_region+125/242] unmap_region+0x7d/0xf2

Cecil Wallis


I can think of three approaches to this. The first is to investigate the cause of the messages and deal with it, preventing them ever appearing. The system.txt file you sent was extremely helpful, as it helps pinpoint the cause of the third set of messages, which occur because you are using a 2.6.15 kernel with an Nvidia graphics card. The solution is to either upgrade to a newer kernel, or install SimplyMepis 6.0, which was included on the LXF84 coverdisc.

The 'martian' network entries refer to unroutable packets. In this case they are coming from an unroutable address - 10.10.10.255. You can stop their being logged by doing
echo "0" >/proc/sys/net/ipv4/conf/all/log_martians
as root, but it would be a good idea to find the cause first. These could be caused by faulty or misconfigured network equipment, or they could be a sign of someone trying to exploit your computer. If they still occur while your network is disconnected from the internet, the cause is local, otherwise check your firewall.
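To make such a change permanent, most distros read /etc/sysctl.conf at boot; the sysctl key corresponding to this /proc entry is net.ipv4.conf.all.log_martians, so the matching line would be:

```
# /etc/sysctl.conf - disable logging of martian packets
net.ipv4.conf.all.log_martians = 0
```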

The Zip error may be unavoidable, which brings us to the next approach: filter out everything you don't want to see. Run the logfile through grep to remove the 'noise' before viewing it, for example
grep -v -f /var/log/filter /var/log/messages | less
where /var/log/filter is a file containing the patterns you wish to filter out, one per line, such as
localhost kernel: *hdd:
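To illustrate the effect with sample data (the file names here are just for the demonstration):

```shell
# A one-line filter file: matches the Zip drive's kernel messages.
cat > /tmp/filter <<'EOF'
localhost kernel: *hdd:
EOF
# Two sample log lines: one noisy, one worth keeping.
cat > /tmp/sample <<'EOF'
Jul 18 19:07:40 localhost kernel:  hdd: hdd4
Jul 18 19:13:20 localhost kernel: martian source 10.10.10.255 from 10.10.10.134, on dev eth1
EOF
# Only the martian line survives the filter.
grep -v -f /tmp/filter /tmp/sample
```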
The third approach to try is the most comprehensive, but also the most complex. You can configure the system logger to filter messages into different files (or even /dev/null). Mepis uses sysklogd, which has fairly limited filtering. You could replace sysklogd with syslog-ng and put this in /etc/syslog-ng/syslog-ng.conf to have all messages relating to hdd sent to a separate file.
destination messages { file("/var/log/messages"); };
destination d_zip { file("/var/log/zip"); };
filter f_zip { match("hdd"); };
filter f_nozip { not match("hdd"); };
Then replace the line that reads 'log { source(src); destination(messages); };' with
log { source(src); filter(f_nozip); destination(messages); };
log { source(src); filter(f_zip); destination(d_zip); };
The first filter matches all messages about hdd, which are sent to a separate file. The second matches those that don't contain hdd, which go to the standard log. You may need to tweak the search string, but keep it the same for both filters or you could lose messages. NB


Printing and scanning

I recently installed a version of SUSE from the cover DVD of your April issue [OpenSUSE Slick, LXF78]. It all works fine but for the fact that it will not print from any application. My printer (an HP 1200) is recognised correctly but when I try to print, the jobs are processed and wait in the printer queue indefinitely - any ideas?

Also, I am trying to find a flat-bed scanner for home use (not too expensive) that will work with, say, Xandros (or SUSE). Linux scanner compatibility seems a problem.

A final question: why is the /dev directory such an apparent mess? Why not have the software interrogate the hardware and create the device files in /dev as required? Extra device files could be manually added if needed.
David Bowskill


It is difficult to say exactly what is wrong with your printer setup without more information. Did a test print work when you first set up the printer? The best source of information is the CUPS error log. Run this command in a terminal:
tail -f /var/log/cups/error_log
If you get an error message about inability to read the file, use su to log in as root then run it again. Now try to print something and you will see messages from the CUPS print system in the terminal. The error messages should help you find the cause.

It is possible that the printer is simply disabled (this happens after an error). To fix this, you would clear the print queue and try again. You can do this from the Gnome or KDE print manager, or from the command line with
/usr/bin/enable PrinterName
This should be done as root, and you must give the full path to the command.

Scanner support in Linux is good these days, using the SANE (Scanner Access Now Easy) system. The website (www.sane-project.org) has a comprehensive list of supported scanners. If you want a personal recommendation, I bought a Canon LiDE 60 a few months ago. It gives good scan quality and works well with SANE. There is no support for the buttons on the front of the scanner (yet) but scanning from applications gives excellent quality.

Many of the device nodes in /dev are created on demand. Plug in a scanner, printer or USB stick and its device node appears; remove it and they disappear. The /dev directory looks busy because there are so many device nodes used by the system, even though users may remain blissfully unaware of them. A static /dev directory used to be the norm, but modern Linux systems use udev to create device nodes in response to hardware detection. NB


Everything in its right place

I've heard the term FHS bandied about. What exactly is it and what is it for?
R Elia


The FHS or Filesystem Hierarchy Standard is a set of requirements or guidelines for where files and directories are located under Unix systems, and what some system files should contain. For instance, it advises that "applications must never create or require special files or subdirectories in the root directory", so that root partitions can be kept simple and secure to administer.

Most Linux distributions adhere to the FHS loosely, which is why the filesystem layout is fairly similar from one distro to another. Each of the folders in the FHS has a defined purpose. For example, /dev contains entries referencing devices attached to the system, /lib houses libraries required to run binaries in /bin and /sbin, while /usr holds most of the binaries and libraries which are used by you, the user, and as such is one of the key folders in any Linux system.

In a nutshell, the FHS is essential to the organised chaos within Linux. It means that users like you can come to expect certain directories in certain locations, and it also means that programs can 'predict' where files are located.

The first filesystem hierarchy for Linux was released in 1994. In 1995, this was broadened to cover other Unix-like systems and take in BSD knowhow, and was renamed FHS. It is overseen by the Free Standards Group, which also runs the Linux Standards Base project. While all distros stick to the principles of FHS, some use the layouts in slightly different ways, or omit some of the usual directories, which is one of the reasons why different Linux systems are sometimes incompatible. KC


Archiving images

When using mogrify to resize and change the format of a collection of images, how do I set a target directory for the output, and also make the name of the file contain a numeric string as a timestamp? I work with groups of young kids and things can get very busy. Often I am stalled by opening a digital photograph in Gimp, resizing it and saving it to the $HOME/.tuxpaint/saved/ directory as a PNG file so they can use it with Tux Paint. But the delay means that the other kids are left waiting.

So far, my command would look something like this: 'mogrify -antialias -geometry 448x376 -format png digicampic.jpg' but this does not place the finished file into $HOME/.tuxpaint/saved/, and I would also like the command to rename the file with a timestamp such as 20060719162549.png.
Lancer, from the LXF forums


Firstly, well done for getting kids working with Linux at such an early age. The more that grow up realising that Windows is one choice of several and not compulsory, the better. ImageMagick's mogrify command is for modifying images in place, so saving to another directory is out. For this, you need the convert command from the same package.

This should do what you need:
for PIC in *.jpg
do
  convert -antialias -resize 448x376 ${PIC} $HOME/.tuxpaint/saved/$(date +%Y%m%d%H%M%S).png
done
The main problem with this is that you could end up overwriting one picture with the next if they are processed within a second of each other. You could get around this by testing if a file of the same name already exists and adding an extra digit to the name if it does. But as you are using the time of conversion, not the time at which the picture was taken, you could simply pause for a second if there is a clash.
for PIC in *.jpg
do
  while true
  do
    DEST=$HOME/.tuxpaint/saved/$(date +%Y%m%d%H%M%S).png
    [ -f ${DEST} ] || break
    sleep 1
  done
  convert -antialias -resize 448x376 ${PIC} ${DEST} && mv ${PIC} done/
done
This version also moves the picture to another directory if the conversion is successful, so you can run the command again to process newly added images. If you want to use the time the photo was taken in the filename, replace the $(date... part of the command with
$(date -r ${PIC} +%Y%m%d%H%M%S).png
This will use the last modified time of the file for the timestamp. The date man page details the various options. A more sophisticated approach would involve reading the EXIF information from the picture. There are a number of programs for this - I prefer Exiftool (www.sno.phy.queensu.ca/~phil/exiftool). NV
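The extra-digit idea mentioned above could be sketched like this (a hypothetical helper of our own, not part of ImageMagick):

```shell
# unique_name FILE.png - print FILE.png unchanged, or FILE-1.png,
# FILE-2.png and so on if a file of that name already exists.
unique_name() {
    base=$1 n=1 dest=$1
    while [ -e "$dest" ]; do
        dest="${base%.png}-$n.png"
        n=$((n + 1))
    done
    echo "$dest"
}
# Example: DEST=$(unique_name $HOME/.tuxpaint/saved/20060719162549.png)
```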


I want both!

I have installed Fedora Core 4, but now I'm in a quandary. I don't know which is better: Gnome or KDE. Can I have them both installed on the same computer? Also, I tried to download K3b but I could not install it. Do you know why?
Jan Birsa


Yes, it is possible to have more than one desktop environment installed on your computer. Below the username box on the Gnome login screen, there is a menu named Session. This lets you choose which of the installed desktop environments you load. If your system is set up to boot straight into Gnome, select Log Out from the System menu and you'll see the Session menu in the login screen.

Of course, you have to have KDE installed for this to work, but that's as easy as selecting the KDE group from the package manager. The most likely reason why K3b failed to install is that you don't have the KDE libraries installed. You don't have to be running on KDE to use K3b, but you do need the basic KDE libraries available. Similarly, when you are running KDE, you will still be able to use Gnome programs on it, because you have the Gnome framework installed. NB


Proc of gold

When I issue a mount command, I see a filesystem called /proc that is not on my hard drive. Can you tell me what it is, and why it is there?
Darren Birkett


On a typical Linux system, when you issue the mount command you will see at least two filesystems that don't appear to be accessible in the normal way. The first of these is the /proc filesystem, and the second will show as something like 'none on /dev/shm'. As you may know, /dev/shm is a memory-backed (tmpfs) filesystem used for shared memory, and doesn't create anything on your local disk.

The /proc filesystem contains virtual files that are like a window into the current state of the running kernel. It does not occupy any space on the hard drive, and therefore is referred to as a virtual filesystem, but acts and looks like a disk-based filesystem.
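
You can see this for yourself with any file under /proc - /proc/uptime is used here, and stat -c is the GNU coreutils syntax:

```shell
# A virtual file reports a size of zero to stat...
stat -c %s /proc/uptime    # prints 0
# ...yet reading it returns data, generated on the fly by the kernel
cat /proc/uptime
```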

Viewing some of the files in /proc can give a great deal of information about your system. If you look at /proc/meminfo, you'll get a nice stack of information about the memory on your system:
# cat /proc/meminfo
MemTotal: 515484 kB
MemFree: 74656 kB
Buffers: 5912 kB
Cached: 352464 kB
SwapCached: 12 kB
Active: 126788 kB
Inactive: 289772 kB
Looking at this information, you will see that not only does it tell you how much memory you have, including swap and real memory, but it tells you exactly what the current state of the memory is in terms of free space, and how it's allocated. Chances are that if you run this command again, some of the information will have changed, and this is generally the whole point behind /proc: it's like a snapshot of the current system state.

The more advanced user can actually change the functionality of the kernel temporarily by editing the files in the /proc filesystem. For example, to turn on IP forwarding (to allow your system to act as a router, passing network traffic arriving on one network interface out on another interface), you can issue the following command:
echo 1 > /proc/sys/net/ipv4/ip_forward
Do be aware, however, that this status is not permanent and will be lost at the next reboot. To make it permanent, you need to edit the /etc/sysctl.conf file to include the following:
net.ipv4.ip_forward = 1
To learn more about your system, have a root around in /proc. You can't break anything just by looking, and even if you do make a mistake and edit one of the /proc files accidentally, a quick reboot will wipe any changes you've made. KC
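
Reading /proc from a script is just as easy as browsing it. As a small sketch, awk can pull individual figures out of /proc/meminfo - the field names match the listing above:

```shell
# Pull the total and free memory figures (in kB) out of /proc/meminfo
awk '/^MemTotal:/ {print "Total memory:", $2, "kB"}
     /^MemFree:/  {print "Free memory: ", $2, "kB"}' /proc/meminfo
```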


Lost in the Ether

I would like to sign up for NTL's cable broadband package, but they tell me they do not support Linux. Does this mean the system will not work with Linux, or just that they cannot provide advice? I do not doubt that if I plug the modem into the Ethernet card it will be picked up, but how do I connect it to the broadband system? Do I use KPPP with different settings or what? I'm wondering if I need a specialist provider or if I can use any provider as long as I have 'the knowledge'.
Adrian Horrocks


I can assure you that you can use NTL broadband internet with Linux - I had it myself. All you do is connect the Ethernet port of the modem to the Ethernet port of your computer (you should not need a cross-over cable) and set your network interface to use DHCP. You do not need KPPP, KDE's internet dialler, as cable broadband does not use PPP.

You will need to switch on the modem and wait for the RDY and SYNC lights to become stable. This means that the modem is connected to NTL. Now you can bring up your Ethernet interface and it will get its IP address, routing and DNS information from the modem. However, you will find that with NTL, as with most ADSL broadband providers, Linux will work with their service but they don't provide support. The notable exceptions (in the UK at least) are UK Linux and The UK Free Software Network, at www.uklinux.net and www.ukfsn.org respectively.
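
How you tell your system to use DHCP depends on your distro's network tool, but the underlying configuration is minimal. On a Debian-style system, for instance, the relevant /etc/network/interfaces entry is just this (eth0 is assumed here - your interface may have another name):

```
auto eth0
iface eth0 inet dhcp
```

Then ifup eth0, run as root, brings the interface up; most distros' graphical network tools do the same job for you.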

Whichever provider you choose, the most important consideration is to use an Ethernet-based modem. NTL provides one, whereas most ADSL ISPs, especially the cheaper ones, offer a USB modem as standard. Accept the 'free' modem by all means, but budget £20 or so for an Ethernet ADSL modem. NB


Fast track learning

I am a new Linux user, and through fiddling with distros (the best way to learn) I have become adequate with Linux. I can use the terminal and enter commands, open RPMs and debug the system (as in if I have a problem, I run to the internet for help). I would appreciate it if you could run a series of sections to help beginners get to grips with the OS. I know there are so many distros that it may be hard to make a beginner's guide that covers them all, but perhaps I could look around and get the best learning distro. If not, could you at least point me in the direction of a guide that I can read at my leisure?
Michael Quin


The First Steps series of tutorials that we have been running for a couple of years has covered much of what you want. But the best distros for learning Linux tend to be the ones that are less friendly to beginners. There's an old saying: 'Use Red Hat and you learn about Red Hat; use Slackware and you learn about Linux.' While it is not as cut and dried as this, the likes of Mandriva and SUSE also fall into the first category, while Debian and (most definitely) Gentoo fall into the second. GUI tools 'protect' the user from the inner workings, so impeding the learning process.

There are a number of websites that provide excellent documentation for those learning Linux, at all levels. One of the most highly rated is Rute, at http://rute.2038bug.com. You can read this online, download it in PDF or HTML format for offline reading, buy it as a genuine treeware book or read it in the Help section of this month's DVD. One of the best introductions to command line usage is LinuxCommand.org (http://linuxcommand.org). MS

