Installing qmail

I’m going to be writing a series of posts that detail my experiences in
installing a new mail server on Linux.  I have been running a mail
server called the Mercury Mail Transport System on Novell NetWare for a long time.  The NLM version of Mercury
has been robust and works well … although it lacks some of the more
recent innovations in e-mail systems.  On top of that, I want to
get rid of my NetWare servers … they just aren’t what I want to be
running any more.

I have installed a server with Fedora Core 2, and as of this weekend I
finally dove in and began the actual installation of the mail
server.  After a lot of looking around, I chose qmail – “Second most popular MTA on the Internet” – and I also wanted to add the TMDA anti-spam solution.

I read through the qmail installation instructions and have to admit
that I was a little worried … until I found the “lazyinstaller for
qmail” at lazyinstaller.net.
This is one amazing script, and it made the entire process a
breeze.  Once I had the script on my machine, I simply edited a
few parameters to define my primary domain, some paths, and a few other
items.  (NOTE:  I noticed later that I could have used their
on-line generator to create my customized script ready to download!)
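
Just to give a sense of the kind of edits involved, here is a rough
sketch of the sort of settings I changed.  The variable names below are
made up for illustration … the actual names in the lazyinstaller script
may be different.

    # Illustrative only -- the real variable names in the lazyinstaller
    # script may differ.  These are the kinds of settings I edited
    # before running it.
    PRIMARY_DOMAIN="example.com"     # my primary mail domain
    QMAIL_HOME="/var/qmail"          # where qmail gets installed
    SRC_DIR="/usr/local/src"         # where the source tars get unpacked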

Once I had customized the lazyinstaller script, I ran it and was
impressed.  It downloaded all of the source tars, unpacked them,
built the projects, customized configuration files, and set up qmail
complete with SMTP, POP3, IMAP (both SSL and non-SSL!), and web-based
administration tools.  There was only one error in the script that
I ran (v2.0.2) where a directory was not created for binqimap … I
created the directory and copied the contents of the config file from
the script into the new directory.  At the end of the install,
there was a short note on creating the start-up and shut-down scripts
… and I was ready to go.  I started up the services, and
everything has been running smoothly!
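
For anyone curious, here is a minimal sketch of the kind of start-up and
shut-down script the install notes describe.  It assumes a
daemontools-based layout with the qmail services linked under /service
… your paths and service names may differ.

    #!/bin/sh
    # Minimal start/stop sketch for a daemontools-managed qmail install.
    # Assumes the services created by the installer are linked under /service.
    case "$1" in
      start)
        echo "Starting qmail services..."
        env - PATH="/usr/local/bin:/usr/bin:/bin" svscan /service &
        ;;
      stop)
        echo "Stopping qmail services..."
        svc -dx /service/* /service/*/log
        ;;
      *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
    esac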

I have already started testing with some virtual domains, and
everything seems to be working fine.  As of tonight, I installed
TMDA, and have now started my testing with that.  I just completed
the first tests there, and it’s working great.

I have a total of ~15 mail domains with 40-50 users that I have to move
to this new server.  I’m looking forward to moving one of those
tomorrow … I’ll post more about my success!

Novell NetDrive … a dying product?

I have been using NetDrive
(http://support.novell.com/servlet/filedownload/uns/pub/ndrv41862.exe/)
for years now, and it is a very innovative piece of software. It
completely alters the way that people use FTP to transfer files …
making it as easy as “mapping a drive”. With NetDrive I can “map”
a drive letter, say “N:”, to my FTP server on the Internet. I can
then “drag and drop” files just like any other drive on my system.

The real issue with FTP is that it is not the most secure protocol that
you can use. Most Linux and UNIX users are using SSH and SCP
instead. SSH is the “secure shell”, and combined with SCP (“secure
copy”) it allows you to access your remote boxes through an
encrypted connection. I use both of these all day, and what hit
me was that the usability of SCP – even using WinSCP – is not equal to
that of NetDrive.
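
For anyone who hasn’t used them, the command-line versions look
something like this … the user, host name, and paths are just
placeholders.

    # Open an encrypted shell on the remote box.
    ssh user@ftp.example.com

    # Push a local file to the remote server over an encrypted connection.
    scp ./report.pdf user@ftp.example.com:/home/user/files/

    # And pull a file back down the other way.
    scp user@ftp.example.com:/home/user/files/report.pdf .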

I started to check and see if Novell had released a version of NetDrive
that would use the secure protocols, and found that I could not locate
any newer versions of NetDrive! The last one I can see is from 17
Apr 2003! And there is no apparent work on a version that
supports SSH/SCP …

Here they have a very powerful tool that could be used to “seed” the
market and alter how people access Linux from Windows … branded with
Novell’s name … and they seem to be letting it die.

Novell … drop it into Open Source … or update it! You are again allowing a valuable beachhead to disappear …

More attacks on Open Source

This is actually a pretty impressive situation … and further demonstrates the potential use of patents to attack Open Source projects. The NoCat
project is a pure Open Source solution that could be quickly adopted by
many of the wireless HotSpot companies that are doing redirection of
web access for logins … and if they did, would this company sue the
NoCat users? Would they go after all users of the project?

Abstraction of the Operating System

Former BEA Execs Launch New Open-Source Venture

This is a really good topic to examine. There are numerous trends
that are coming up in the world, and these are the next big software
companies … they have read the writing on the wall.

The software platforms of the future are being built as “abstractions”
above the operating systems … and far above the kernels in use.
The world is very quickly becoming one filled with interpreted
languages, scripting languages, and virtual machines. Hardware is
becoming so powerful and so cheap that the compute inefficiencies are
quickly masked, and “good enough” performance emerges. This is
even further demonstrated by the rapid growth in hardware “emulators”
or “virtualizers” like the VMware, Qemu, and Virtual PC solutions.

On top of this, we are actually watching the entire computer software
industry converge towards a complete “UNIX-compatible” set of APIs and
development tools. NetWare? Novell is moving to
Linux. Macintosh? It’s now based on Darwin.
Windows? Even they have SFU (Services for UNIX), which can run many
of the same applications. UNIX-compatible software is what the
market is actually embracing … not “Linux” or any specific version of
kernel.

Most people do not realize that the majority of any “Linux”
distribution is actually a wealth of GNU tools and UNIX-compatible
software. In my research only ~3% of any distribution is “Linux”
itself … the rest is all of the common libraries, languages and
applications that we all hear about – Apache, MySQL, Perl, PHP, C#,
Java, GNOME, gcc, the X Window System, etc. – and all of those are
“UNIX-compatible” applications and services or have versions that run
on UNIX-compatible kernels.

If the world is now going to see a mass commoditization of the kernels
… with complete compatibility around a common set of development
tools, then the real play – that SourceLabs is pursuing – becomes the
certification and support of the wealth of Open Source software.
And if I were going to do it, I would ensure that I could provide all
of this software across *ALL* of the UNIX-compatible kernels in
existence … or at least the core four for now – Linux, Darwin,
FreeBSD, and Windows/SFU.

With a strategy to become the de facto provider of software across all
of these platforms, you would be able to provide the solutions to your
customers and not care about kernels. If there are problems with
one … you can move them to a different kernel without issues.
If they are a Windows shop, you get them to adopt SFU and
UNIX-compatible applications as a “migration preparedness”
strategy. If they are a Linux shop … you are ready to move them
if the legal issues take a turn for the worse.

In all cases, you are setting yourself up as the optimal software
development solution … being paid to maintain and enhance the
software that truly hits the users and customers … while further
commoditizing and making irrelevant the kernels and low-level code that
everything runs on.

In addition, if you play this right you are able to take over Open
Source projects, and demand that the copyrights be signed over for all
contributions … allowing you to further control the “fork” of
software that you are driving forward. Yes … people could
attempt to fork a project in a new direction; however, it takes a lot of
effort … and if that occurs, you still charge your customers to
support and maintain the new fork … win … win … win.

So when I see this announcement, I have to say that I see it as
the real win in Open Source and the current market craze.
Companies like IBM are well placed to capitalize on this also … and
you see that even the big IBM does not have a “distribution” of Linux
that they sell … they are moving beyond the lower layers, and up to the
applications. Companies like this are going to be well positioned
to allow their customers to take advantage of the newest kernels …
and move away safely from those that cannot survive … or to ones
that are cheaper or free …

The abstraction of the operating system is well under way … and this
is the birth of a new business that is doing to the operating system
what operating systems did to the processor.

Adwords Adventures

I just started to experiment with Google Adwords.
It is amazing what they have created, and how it all works.  One
thing that I would really like to see is a set of APIs so that I can
monitor my Adwords in near-real-time.

While searching (on Google of course!) for such APIs, I found a number
of fun articles about Adwords.  The first one was about The Google Adwords Happening … a very funny post.  Linked from there was the WifiSM page … again … some fun stuff!

After reading this, I started to wonder what might be some fun ads to
create … just to grab some attention … it would be amusing!

Time to buy your own toy rocket!

OK everyone, it’s time to go out and buy your own “next generation” model rocket.
The folks at Hokkaido Aerospace Science and Technology Incubation
Center (HASTIC) in Japan have created a rocket for the home hobbyist
… only $19,000!

“Measuring 1.6 meters (5.25 feet) long and weighing 10.5 kilograms (23
pounds), the rocket has a cavity that can carry a small payload, such
as a camera, weighing up to half a kilogram (1.1 pound), it said.”  It can go up to 0.6 miles high!

We’re back into the space age … first the X Prize … now home
rockets that use liquid oxygen!  I’m glad they are requiring the
2-day safety course!

Mermaid Books, and A Cup of Joe

One of our most popular spots in town – A Cup of Joe – has changed
owners in the last several weeks.  After several conversations
with the new owner we were surprised to find out that he is installing
his own Internet services.  He informed us early last week that we
were welcome to remove our services at any time.  Instead of
leaving our customers without access, we looked at what other
businesses might be in the area.  As of Friday, we have moved …
across the hall!

We are glad to announce that we have now partnered with Mermaid Books,
and are able to provide the same Internet services – at the same prices
– at the same location.  Mermaid Books is a very cool shop
offering a variety of books and other items … and has a great owner!

Our Cafe PC is now in the sitting area at Mermaid Books, and we still
have the same wireless access – 24×7 – that covers the same areas as
well as upstairs.  You should still be able to get
access at A Cup of Joe … so continue to buy your coffee, hang out,
visit Mermaid Books, and enjoy your Internet.  If you have
questions or comments about your service, please feel free to contact
us!

wget timeouts

I finally resolved another outstanding problem that has been bothering
me for a long time. It’s funny that I didn’t try this experiment
sooner; however, I just happened to stumble on it earlier today.

I write a lot of scripts that I use with mrtg. In many of my scripts I use the wget
utility to fetch a web page, and then I “scrape” the web page for the
data that I want mrtg to graph. When I started to do this, I
noticed that when the target device was down, it would take wget a long
time to time out. The man page said to use the “-T”
option to set a shorter timeout. I tried all sorts of values, but
was not able to shorten the timeout to seconds.

Today, while reading a variety of web pages, I found that there is a “-t” (note the lower case!) option that sets the number of retries.
I don’t know – and can’t find – what the default number of retries is,
but by updating my scripts to have “-t 1 -T 10” I am now getting a
10-second timeout!
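
For context, here is a rough sketch of the kind of wget call my mrtg
scripts now make … the URL and the value being scraped are
placeholders, not from my actual scripts.

    #!/bin/sh
    # Fetch a status page and scrape a single value for mrtg.
    #   -t 1  : make only one attempt (no retries)
    #   -T 10 : give up after 10 seconds
    # The URL and the "clients=" pattern are placeholders for illustration.
    wget -q -t 1 -T 10 -O - "http://gateway.example.com/status" \
      | grep -o 'clients=[0-9]*' | cut -d= -f2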

So it seems that my problem was not the timeout, but instead the number
of times that wget was retrying the request! I have posted my
updated NoCat Gateway Monitoring script that contains the fix. Now I’m going to go and update more of my scripts!