About Scott C. Lemon

I'm a techno-futurist, interested in all aspects of humanity, sociology, community, identity, and technology. While we are all approaching the Singularity, I'm just having fun affecting the outcomes of the future!

Linux and patents …

A number of articles have come out since the OSRM announcement about potential patent claims against Linux. Today I found the following article, which got me thinking about political spin …

“I can assure you that IBM has no intention of using its patents against the
Linux kernel,” IBM’s senior vice president for technology and manufacturing,
Nick Donofrio, told delegates at LinuxWorld.

Well … of course! Note that Nick is very careful to say “against the
Linux kernel” … which really doesn’t say very much. In my research
while at SCO, I looked at what is actually contained in a “Linux
Distribution” and how much of that code qualifies as “the Linux
kernel”. What most people do not understand is that the Linux
“kernel” amounts to at most 1%-3% of a distribution … think of a
multi-CD distribution measured in gigabytes versus a kernel source
tarball measured in tens of megabytes!

In the case of IBM, I’m sure that they have no issue with the “kernel”
… but I’m not so sure that they think the same about all of the
application software, and the numerous layers of software that truly
make Linux usable.

So Nick … what about the other 97%? Are you only saying that
you want the “kernel” to move unhindered? What about all of the
other software and components?

I have a feeling that this statement is going to remain very precise
and talk only about the “kernel” … I agree with Bruce Perens … I want to
see it in writing. I would also love to see it use much broader
language to specify what IBM’s intentions are.

Bad Ass Coffee – Salt Lake City on-line!

80211.net is happy to announce that another Bad Ass Coffee
location has joined our network! The Bad Ass Coffee in Salt Lake
City, located at 3530 South State Street, is now offering Internet
Access to their customers. This location has high-speed 802.11b
wireless, and also has a PC on-site for customers to use.

The Bad Ass Coffee Company was started on the Big Island of Hawaii
in July of 1989 and has become one of the largest sellers of Kona
coffee in the world. In 1995, Michael Bilanzich of Salt Lake City, Utah
purchased Royal Aloha Coffee, Tea & Spice Company and the Bad Ass
Coffee Company. They opened their first Bad Ass Coffee Company store in
Salt Lake City in May 1995 at 3530 South State Street, and their next
location in Park City at 651 Park Avenue in July 1995. They are
continuing to grow with numerous shops in Hawaii and throughout the
continental United States and Canada. 

If you are in the Salt Lake City area, stop in … grab a coffee and something to eat … cruise the net … and enjoy!

Increasing virtualization and simulation …

This is another prime example of the ability to virtualize and/or
simulate a historical PC architecture. I used to write a lot of
software for CP/M 80 … and still have some of the hardware machines
in my basement. (Hmmm … note to self … consider selling on
eBay!)

If I wanted to run some of that software, I’d have to boot an old
PC … which means that I would have to have one around! Now I
can simply run the software on a “virtual” CP/M 80 machine.  I
truly believe that the world is going to move from trading pirated
software to trading pirated disk images.  You can provide a
disk image, and simply boot it with the right virtual machine …
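As a purely hypothetical example (the image name here is made up, and the exact flags may vary by QEMU version), booting a raw PC disk image with QEMU looks something like:

qemu -hda old-486.img -m 64

Point the emulator at the image file, and the “machine” comes up much as the original hardware would …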

Aliados 0.1.1. A CP/M 80 emulator. [freshmeat.net]

OpenDarwin 7.2.1 progress …

I spent some time over the last couple of days working with the new release of OpenDarwin 7.2.1, and it’s looking better and better. I have been experimenting with running it in Virtual PC on my Windows XP laptop.

The initial installation looks pretty good. Under Virtual PC I
was able to boot the CD .iso and proceed through the installation with
little effort. The .vhd “virtual hard disk” that was created was
much larger than for other operating systems (>3GB!), but everything
seemed to work.

Upon rebooting, I did have one problem where the OS would hang for a
*long* time indicating that it was “Still waiting for root
device”. I went through the archives of the “hackers” mailing list
and found a way around this … during boot I hit a key and entered
“rd=disk0s1 -v” to set the root device and verbose mode. This got
me past the problem and on to the next one … a shorter hang
initializing something with DNS … and then finally to a login
prompt.  I ought to comment that I was able to make the “root
device” definition “stick” by editing the file:

 /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

and changing the text:

<key>Kernel Flags</key>
<string></string>

to be:

<key>Kernel Flags</key>
<string>rd=disk0s1</string>

I got this fix from another e-mail in the archives …
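For anyone who wants to script the change, a one-liner along these lines should do it (a sketch only … it assumes the empty <string></string> under “Kernel Flags” is the only empty string in the file, so look at your plist before running this as root):

perl -pi -e 's|<string></string>|<string>rd=disk0s1</string>|' \
  /Library/Preferences/SystemConfiguration/com.apple.Boot.plist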

There still appears to be a networking problem with OpenDarwin
supporting the network interface card that is emulated by Virtual
PC. I had worked with Chuck Remes on this in the past, and I
think this is going to be a simple fix. Related to this, there
appears to be a DNS issue where the OS hangs for a bit during boot as
it attempts to initialize something with lookupd or netinfod …

I’m a little disappointed that they left X out of the distro … there
were too many other packages they wanted to include, and so X is left
out as an RPM to install separately. It seems to me that X ought
to be there first, and then allow for the downloading of other
services. However, I can see that this version is still oriented
towards the “server” market, where the services are more important than
the GUI applications.

I’ll be posting more as I go along … I want to see just how quickly
Darwin and OpenDarwin continue to advance. I believe that before
long we’ll have another OS that will stand up well beside Linux!

SCOx Web Services Substrate …

It was fun to see the announcement today by SCO
of the SCOx Web Services Substrate.  As the Solutions Architect
and later Chief Technologist at SCO I was central to the design and
architecture of the product.  I worked with an incredible team,
led by Bruce Grant, in solidifying the concepts and implementing the
code.  We were able to get the product out the door just before I
left SCO.

I was really surprised in one way that this particular press release
seems to give so much credit to Ericom.  They are a key partner
with an incredible product; however, they are a very small part of the
overall “substrate” idea.

Our product actually comes from a larger idea that I called the
Biologically Inspired Application Substrate – BIAS.  I presented
this to the public for the first time last year at SCO’s Forum event
that was held in Las Vegas.  In my keynote and presentations, I
spoke about a substrate that consists of a core set of “substrate
services” that enable the creation and execution of distributed
components across a diverse set of platforms.  Think of this as an
alternative to “grids” and other forms of distributed processing.

To create this environment, we started by creating a set of
“encapsulators”.  The first of these were the “Host Encapsulator”
(based on the Ericom technology) and the SQL Encapsulator (developed at
Vultus and bought by SCO).  These encapsulators provide solutions
for exposing services as SOAP web services.  The Host Encapsulator
can transform “green-screen” application interaction into a set of SOAP
web services.  The SQL Encapsulator can transform a simple set of
SQL CRUD (create, read, update, delete) operations into SOAP web
services.  Both are extremely simple to use and require little
real programming expertise.
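To make that concrete, here is what a call to a hypothetical “read” operation exposed by the SQL Encapsulator might look like on the wire … the service and element names are invented for illustration, not the actual SCOx schema:

<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <readCustomer xmlns="urn:example-sql-encapsulator">
      <customerId>1001</customerId>
    </readCustomer>
  </soap:Body>
</soap:Envelope>

Any SOAP-capable client on any platform can drive the legacy application or database through requests like this.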

Another core component of this release of the SCOx Web Services
Substrate is the Web Application Manager – WAM.  This is a cool
little web application that manages the configuration files of Apache
and Tomcat to automate the deployment and configuration of the SOAP web
services that you create with the Encapsulators.  Both
Encapsulators create .war files that are ready to be deployed under
Tomcat.  The WAM allows you to upload the .war to your server
through a browser interface, and then make some simple selections to
choose whether to deploy it over SSL or non-SSL, and the path that you want it
accessed through.  A user or integrator no longer has to know all
of the complex entries in the various configuration files to do this …
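To give a feel for what the WAM saves you, the manual equivalent is roughly this (the .war name and path are illustrative):

# copy the Encapsulator-generated .war into Tomcat's webapps directory
cp myservice.war /usr/local/tomcat/webapps/
# Tomcat expands and serves it at /myservice; SSL vs. non-SSL is
# governed by the <Connector> definitions in Tomcat's conf/server.xml

… and that is before you touch the Apache side of the configuration.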

What was not announced in this press release was any effort to complete
the broader vision of the BIAS.  This included the UDDI directory
components, and the other components that further automate the
BIAS.  Our intention was to create an alternative way to create,
deploy, and manage applications that was completely independent of
language, operating system, and hardware.

In any case … it’s fun to see the initial uptake by the market of
what we did get completed.  I would have enjoyed taking this
project a lot further!

Layers of abstraction … virtualization is the future. Management is critical!

I saw the update to this project a while ago, and believe that this is
a valuable solution for one niche of virtualization. User-Mode
Linux is a good solution for virtualizing Linux on Linux.
Although it does not provide the “processor/hardware” level of
virtualization of a VMware, Virtual PC, or QEMU, it is able to create
the necessary “virtual-Linux” for those who only require such a
solution. This is “operating system virtualization”, and it
addresses an important segment. Management of the “virtual
machines” is going to be a critical aspect.
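For those who haven’t seen it, User-Mode Linux runs the guest kernel as an ordinary user-space program … you download (or build) a UML kernel binary, conventionally named “linux”, and point it at a root filesystem image (the image name below is illustrative):

./linux ubd0=root_fs.img mem=128M

There is no hardware emulation at all … the “machine” is just another process on the host, which is exactly why management tools like the one below matter.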

As I continue to look at virtualization technologies, I believe they
are going to become more and more important to users of all
kinds. With the new generations of MULCOTH
processors – multi-core, multi-threaded (hyperthreaded) – we will begin
to easily have the horsepower required to emulate CPUs and hardware at
acceptable levels of performance. I recently read a two-part
article by Nicholas Blachford that talks about some future possibilities for CPUs and then future possibilities for operating systems.
I have to agree with many of the perspectives that he expresses.
We are on the verge of having more computing power than we can
imagine. Raw compute capability will begin to outpace the
human sensory system’s ability to keep up.

In all of this, it appears that having several “virtual operating
systems” or “virtual computers” running on my laptop at any one time is
going to be the natural progression. Instead of getting rid of
old computers, I’ll simply “virtualize” the data and operating system
that is on them and I’ll have them forever!

I’m just beginning to experiment with this … I have some old laptop
hard drives that I am pulling into Virtual PC with their “virtual disk
wizard”. I’ll now be able to boot and run them if I ever want to
look back. At some point in the future, I’m sure that I’ll be
running my current Windows XP and Fedora Core 1 and 2 on some other
virtualization platform on some new 10GHz+ laptop. And it might
be in the next couple of years!

UMLazi 1.0.3.1. A management system for User-Mode-Linux-based virtual machines. [freshmeat.net]

Robotic Reasoning and Hide and Seek …

I was sitting here tonight doing some reading (catching up on a number
of message boards and RSS feeds) when I turned on the TV. I have a Dish Network satellite system, and really like the fact that there are some very cool educational channels.

Tonight I turned to channel 9404, which is UWTV from the University of Washington. This particular channel has some of the best computer science ‘colloquiums’ that I have seen. Oh yeah … and that is the word they use to name this series of programs on their own web site. This is a really cool program. It’s all about Robotic Reasoning, and the learning being done by creating robots that play Hide and Seek.

What is even more impressive is that the entire series appears to be
viewable on-line! I’m going to go and take a look at the other shows they have!  This program really looks interesting … Location Enhanced Web Services … oh man … I’m going to watch this one on ZebraNet for sure!

Anyhow … it is amazing to see the animations they have created that
show the belief models built by the robot as it navigates through a
room looking for an “evader”. These are extremely powerful robots,
able to travel at over 2 meters per second with impressive
acceleration. They have some sort of a scanning laser rangefinder for
‘vision’, and have powerful computers on them. They have a programmed
‘evader’ robot, and then one or more ‘seeking’ robots that all cruise
around some building – the evader evading and the seekers seeking.

The level of mathematics being used is what I really
enjoy. The robot creates a real-time belief model that
contains probability values for all parts of the building as it cruises
around looking. The models even show how previously searched
areas take on a slightly increased probability of an evader
being present after the seeker has left the area. Very cool …

The speaker – Geoffrey Gordon from CMU – then reviewed the range of
algorithms they used to produce the results, and also to demonstrate
the learning and compression capabilities.  Overall … this was
well worth watching.  He is doing some very impressive work …

80211.net and Paypal …

Over the last several days I’ve been completing something that I have wanted to
do for over a year … integrating credit card processing into our
wireless network.  My friend and I run a Wireless ISP called 80211.net and we have several Internet Cafés and a local residential network.  We have been running NoCatAuth as the authentication and authorization system.

Several years ago, I contributed to the project by updating the LDAP
connector so that we could use an LDAP Directory as the primary user
database.  I picked this direction since we could use the ‘account
expiration’ features of the directory and allow users to purchase time
on the network.  I also designed and implemented a ‘cash Internet
Access coupon’ system so that the Cafés could sell the coupons to
customers.  The user entered the coupon information and the system
would credit their account by incrementing the ‘account expiration’
date.
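As a sketch of that redemption step (the DN and attribute name here are illustrative, not our actual schema), the credit amounts to an LDAP modify that pushes the expiration date forward, applied with a standard tool like OpenLDAP’s ldapmodify:

dn: uid=jdoe,ou=users,dc=80211,dc=net
changetype: modify
replace: accountExpiration
accountExpiration: 20040901000000Z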

The coupons have been working great for years, and offer a variety of
benefits to our company and our customers.  But we wanted to
implement accepting credit cards to augment the coupons, and to allow
users to buy Internet Access when there was no one present.

We picked Paypal as the first
solution for this, and I am almost done with the work. 
It’s been fun learning how this works.  I had to write some new
scripts and web pages … first to offer the credit card payment, then
to verify the user account, and then to redirect the user to Paypal …
passing some custom transaction information.  Paypal has a very
simple system for passing these variables, and then offers their IPN
(Instant Payment Notification) service that will call the script on my
server to add credit to the user’s account.
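The heart of the IPN handler is the verification step … you take the exact form data that Paypal POSTed to your script, append cmd=_notify-validate, and POST it straight back to Paypal, which answers with the single word VERIFIED or INVALID. As a sketch (not my actual script):

curl -s -d "${IPN_POST_DATA}&cmd=_notify-validate" https://www.paypal.com/cgi-bin/webscr

Only after a VERIFIED response does the script add credit to the user’s account.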

Lastly, upon completion, Paypal will redirect the user to another one
of my scripts so that I can provide details of the transaction.  I
am now working on the last two steps of this process, and hope to
complete it in the next couple of days.  My folks are coming into
town today, and so I’m not sure how much time I’ll be able to spend
working on this.

Migrating NetWare to Linux …

Today I took another step forward in migrating my last NetWare servers to Linux.

While working at Novell I did a considerable amount of R&D on my
own time to set up a complete ISP using Novell’s NetWare. I
experimented with web servers, mail servers, IRC servers, voice/video
conferencing servers and even wrote an MP3 streaming server for
NetWare. By the time that Eric Schmidt joined us at Novell, I had
already created a long list of limitations that were stopping me from
being successful. The irony of the entire situation is that now NetWare
v6.x really has some powerful Internet services … but it’s really too
late. SuSE Linux is going to be the future for Novell.

One of the primary Internet services – of course – is the web server. As a part
of the executive team that partnered with Netscape to create the
Novonyx venture, I quickly embraced the Novonyx SuiteSpot Web Server
on NetWare and have been running it since. To facilitate my move to Linux, I
wanted to first move to Apache on NetWare, and then take the configuration to Linux. Once I have Apache on NetWare working
solidly, changing out the kernel underneath is no big issue.

To get from Netscape Enterprise Server to Apache, the first thing that I did was to take copies of the two core
configuration files – obj.conf for Netscape, and httpd.conf for Apache.
What I found is that there were three core areas that I had to address:

  1. virtual host definitions
  2. log file specs
  3. cgi handling

The first two items in this list were fairly simple to convert. It really
just came down to creating all of the httpd.conf virtual host
definitions. In the obj.conf it might look like this:

<Client urlhost="www.inevitable.org">
   NameTrans fn="document-root" root="/users/org/Inevitable/Web/Docs"
</Client>

and so in the httpd.conf it now looks like this:

# the.inevitable.org
<VirtualHost 63.230.21.106:80>
    ServerAdmin support@onepostal.com
    DocumentRoot /users/org/Inevitable/Web/Docs
    ServerName the.inevitable.org
    ErrorLog /users/org/Inevitable/Web/Logs/error_log
    CustomLog /users/org/Inevitable/Web/Logs/WebHits.log combined
</VirtualHost>

Note that this also addresses the ‘combined’ log file format and location. In the obj.conf I had to have some lines like:

Init fn="flex-init"
     log0="/users/org/Inevitable/Web/Logs/WebHits.log"
     format.log0="%Ses->client.ip% - %Req->vars.auth-user% [%SYSDATE%]
     \"%Req->reqpb.clf-request%\" %Req->srvhdrs.clf-status%
     %Req->srvhdrs.content-length% \"%Req->headers.referer%\"
     \"%Req->headers.user-agent%\""

and:

<Object ppath="/users/org/Inevitable/Web/Docs/*">
    AddLog fn="flex-log" name="log0"
</Object>

… to write to the logs. Apache is much simpler. It’s just the line within the virtual host above:

CustomLog /users/org/Inevitable/Web/Logs/WebHits.log combined
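For reference, ‘combined’ is the standard log format that ships in the stock httpd.conf, defined as:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

… which captures the same fields (client, user, date, request, status, length, referer, user-agent) that the flex-log format above has to spell out by hand.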

With all of my .conf files converted, everything seemed to work well. Except cgi …

NetWare was always an interesting animal to deal with when it comes
to server-side development languages. There are a lot of
interesting hoops that have to be jumped through to invoke scripts …
and most of this is done through “NLMs”, which are the binary
executables on NetWare. On my servers I had some scripts written
in various languages – including NetBasic and Novell Script – and this
turned out to be the toughest part to define and get working.

The first step was to get ‘mod_lcgi.nlm’ working. I added it to the httpd.conf with the following line:

LoadModule lcgi_module modules/mod_lcgi.nlm

I then had to find the right combination of statements to
get mod_lcgi working properly. I added the following section to
the end of the httpd.conf … and it took me a *long* time to get it
right!

<IfModule mod_lcgi.c>
    AddHandler lcgi-script .nlm .ns .asp .nsp .pl .bas

    ScriptAlias /nsn sys:/nsn/web
    LCGIModuleMap sys:/nsn/lcgi/cgi2ucs.nlm /nsn .ns

    ScriptAlias /sp sys:/nsn/web
    LCGIModuleMap sys:/nsn/lcgi/scrptpgs.nlm .asp .nsp /sp

    ScriptAlias /perl sys:/perl/web
    LCGIModuleMap sys:/perl/lcgi/cgi2perl.nlm .pl /perl
    AddEnvVar PERL_ROOT sys:/perl/web

    ScriptAlias /netbasic sys:/netbasic/web
    LCGIModuleMap sys:/netbasic/lcgi/cgi2nmx.nlm /netbasic .bas
</IfModule>

Once I had that in place, I just had to add the mod_lcgi.nlm to the
Apache modules directory … uh, and then things went very wrong.

When I restarted the server, mod_lcgi barfed with a whole lot of
“Unresolved Externals” … symbols that could not be resolved and
dynamically linked. It appeared that mod_lcgi.nlm was attempting to
autoload ‘nslcgi.nlm’ and dynamically link against it.  I had to dig
for a while to determine what the issue was. I found this Novell TID
that explained my problem! I went and looked at one of my newer
servers where Apache was running fine, and sure enough I found an
updated ‘nslcgi.nlm’ in the
system directory of that server … once I updated this NLM I was
up and going …

I am now running all of my web sites on Apache on NetWare … and so it
is now just one more step to get all of this running on Linux … maybe
next week … 😉

If anyone else is trying to do something like this … feel free to contact me and ask questions!