About Scott C. Lemon

I'm a techno futurist, interested in all aspects of humanity, sociology, community, identity, and technology. While we are all approaching the Singularity, I'm just having fun effecting the outcomes of the future!

80211.net now accepting PayPal and credit cards!

Now you can purchase your Internet Access using PayPal, or your credit
card! As of last week, we have implemented a new payment system
that allows you the flexibility of paying for your 80211.net Internet
Access completely on-line. In the past, our 80211.net Internet
Access Coupons were available at all of our locations. Some
80211.net locations have made the choice to no longer carry the cash
coupons, but don’t worry … you can continue to enjoy our cost-effective service by paying on-line.

Access to the new payment system is through our login page, using the
same payment link that you used in the past. Now, once you have
entered your username and password, you have the option to click the
PayPal link to pay with a PayPal account, or credit card. You will be
automatically redirected to the secure PayPal web site to enter your
payment information.  We are
able to accept VISA, MasterCard, Discover, American Express, or
eCheck. Upon completion of the PayPal transaction, your account
will be updated and ready to use!

We hope this adds even more flexibility to our services, and let us know how we can continue to improve our services!

Linux and patents …

There have been a number of articles since the OSRM announcement about potential patent claims and Linux. Today, I found the following article that got me thinking about political spin …

“I can assure you that IBM has no intention of using its patents against the
Linux kernel,” IBM’s senior vice president for technology and manufacturing,
Nick Donofrio, told delegates at LinuxWorld.

Well … of course! Note that Nick is very careful to say “against the
Linux kernel”
… which really doesn’t say very much. In my research, while at
SCO, I was looking at what is really contained in a “Linux
Distribution” and the amount of code that qualifies as “the Linux
kernel”. What most people do not understand is that the Linux
“kernel” only amounts to a maximum of 1%-3% of a distribution!

In the case of IBM, I’m sure that they have no issue with the “kernel”
… but I’m not so sure that they think the same about all of the
applications software, and the numerous layers of software that truly
enable Linux to be usable.

So Nick … what about the other 97%? Are you only saying that
you want the “kernel” to move unhindered? What about all of the
other software and components?

I have a feeling that this statement is going to remain very precise
and talk only about the “kernel” … I agree with Bruce Perens … I want to
see it in writing. I would also love to see it use much broader
language to specify what IBM’s intentions are.

Bad Ass Coffee – Salt Lake City on-line!

80211.net is happy to announce another Bad Ass Coffee
location has joined our network! The Bad Ass Coffee in Salt Lake
City, located at 3530 South State Street is now offering Internet
Access to their customers. This location has high-speed 802.11b
wireless, and also has a PC on-site for customers to use.

The Bad Ass Coffee Company was started on the Big Island of Hawaii
in July of 1989 and has become one of the largest sellers of Kona
coffee in the world. In 1995, Michael Bilanzich of Salt Lake City, Utah
purchased Royal Aloha Coffee, Tea & Spice Company and the Bad Ass
Coffee Company. They opened their first Bad Ass Coffee Company store in
Salt Lake City in May 1995 at 3530 South State Street, and opened their next
location in Park City at 651 Park Avenue in July 1995. They are
continuing to grow with numerous shops in Hawaii and throughout the
continental United States and Canada. 

If you are in the Salt Lake City area, stop in … grab a coffee and something to eat … cruise the net … and enjoy!

Increasing virtualization and simulation …

This is another prime example of the ability to virtualize and/or
simulate a historical PC architecture. I used to write a lot of
software for CP/M 80 … and still have some of the hardware machines
in my basement. (Hmmm … note to self … consider selling on
eBay!)

If I were to want to run some of that software, I’d have to boot an old
PC … which means that I would have to have one around! Now I
can simply run the software on a “virtual” CP/M 80 machine.  I
truly believe that the world is going to move from trading pirated
software, to trading in pirated disk images.  You can provide a
disk image, and simply boot it with the right virtual machine …

Aliados 0.1.1. A CP/M 80 emulator. [freshmeat.net]

OpenDarwin 7.2.1 progress …

I spent some time the last couple of days working on the new release of OpenDarwin 7.2.1 and it’s looking better and better. I have been experimenting with running it in Virtual PC on my Windows XP laptop.

The initial installation looks pretty good. Under Virtual PC I
was able to boot the CD .iso and proceed through the installation with
little effort. The .vhd “virtual hard disk” that was created was
much larger than other operating systems (>3GB!) however everything
seemed to work.

Upon rebooting, I did have one problem where the OS would hang for a
*long* time indicating that it was “Still waiting for root
device”. I went through the archives of the “hackers” mail list
and found a way around this … during boot I hit a key and entered
“rd=disk0s1 -v” to set the root device and verbose mode. This got
me past the problem and on to the next one … a shorter hang
initializing something with DNS … and then finally to a login
prompt.  I should mention that I was able to make the “root
device” definition “stick” by editing the file:

 /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

and changing the text:

<key>Kernel Flags</key>
<string></string>

to be:

<key>Kernel Flags</key>
<string>rd=disk0s1</string>

I got this fix from another e-mail in the archives …

There still appears to be a networking problem with OpenDarwin
supporting the network interface card that is emulated by Virtual
PC. I had worked with Chuck Remes on this in the past, and I
think this is going to be a simple fix. Related to this, there
appears to be a DNS issue where the OS hangs for a bit during boot as
it attempts to initialize something with lookupd or netinfod …

I’m a little disappointed that they left X out of the distro … there
were too many other packages they wanted to include, so X is left
out as an RPM to install separately. It seems to me that X ought
to be there first, and then allow for the downloading of other
services. However I can see where this version is still oriented
towards the “server” market and the services are more important than
the GUI applications.

I’ll be posting more as I go along … I want to see just how quickly
Darwin and OpenDarwin continue to advance. I believe that before
long we’ll have another OS that will stand up well beside Linux!

SCOx Web Services Substrate …

It was fun to see the announcement today by SCO
of the SCOx Web Services Substrate.  As the Solutions Architect
and later Chief Technologist at SCO I was central to the design and
architecture of the product.  I worked with an incredible team,
led by Bruce Grant, in solidifying the concepts and implementing the
code.  We were able to get the product out the door just before I
left SCO.

I was really surprised, in one way, that this particular press release
gives so much credit to Ericom.  They are a key partner,
with an incredible product, however they are a very small part of the
overall “substrate” idea.

Our product actually comes from a larger idea that I called the
Biologically Inspired Application Substrate – BIAS.  I presented
this to the public for the first time last year at SCO’s Forum event
that was held in Las Vegas.  In my keynote and presentations, I
spoke about a substrate that consists of a core set of “substrate
services” that enable the creation and execution of distributed
components across a diverse set of platforms.  Think of this as an
alternative to “grids” and other forms of distributed processing.

To create this environment, we first started by creating a set of
“encapsulators”.  The first of these were the “Host Encapsulator”
(based on the Ericom technology) and the SQL Encapsulator (developed at
Vultus and bought by SCO).  These encapsulators provide solutions
for exposing services as SOAP web services.  The Host Encapsulator
can transform “green-screen” application interaction into a set of SOAP
web services.  The SQL Encapsulator can transform a simple set of
SQL CRUD (create, read, update, delete) operations into SOAP web
services.  Both are extremely simple to use and require little
real programming expertise.
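The encapsulator idea can be sketched in miniature. The class below is purely my own illustration, not SCO's actual product: where the SQL Encapsulator generated SOAP endpoints from a schema, this sketch uses plain Python over sqlite3 to show the same shape of thing – the four CRUD operations on a table exposed as named operations with no hand-written SQL at the call site.

```python
import sqlite3

class SQLEncapsulator:
    """Illustrative sketch: expose CRUD on one table as named operations,
    the way an encapsulator might generate service endpoints from a schema."""

    def __init__(self, db_path, table, columns):
        self.conn = sqlite3.connect(db_path)
        self.table = table
        cols = ", ".join(f"{c} TEXT" for c in columns)
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER PRIMARY KEY, {cols})")

    def create(self, **values):
        cols = ", ".join(values)                    # column names from kwargs
        marks = ", ".join("?" for _ in values)      # positional placeholders
        cur = self.conn.execute(
            f"INSERT INTO {self.table} ({cols}) VALUES ({marks})",
            tuple(values.values()))
        self.conn.commit()
        return cur.lastrowid

    def read(self, row_id):
        cur = self.conn.execute(
            f"SELECT * FROM {self.table} WHERE id = ?", (row_id,))
        return cur.fetchone()

    def update(self, row_id, **values):
        assigns = ", ".join(f"{c} = ?" for c in values)
        self.conn.execute(
            f"UPDATE {self.table} SET {assigns} WHERE id = ?",
            (*values.values(), row_id))
        self.conn.commit()

    def delete(self, row_id):
        self.conn.execute(
            f"DELETE FROM {self.table} WHERE id = ?", (row_id,))
        self.conn.commit()

# Usage: wrap a hypothetical "customers" table and exercise the operations.
svc = SQLEncapsulator(":memory:", "customers", ["name", "email"])
rid = svc.create(name="Ada", email="ada@example.com")
print(svc.read(rid))   # (1, 'Ada', 'ada@example.com')
svc.update(rid, email="ada@80211.net")
```

The point of the design is the one made above: the integrator names a table and its columns, and the four operations fall out with no real programming expertise required.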

Another core component of this release of the SCOx Web Services
Substrate is the Web Application Manager – WAM.  This was a cool
little web application that manages the configuration files of Apache
and Tomcat to automate the deployment and configuration of the SOAP web
services that you create with the Encapsulators.  Both
Encapsulators create .war files that are ready to be deployed under
Tomcat.  The WAM allows you to upload the .war to your server
through a browser interface, and then make some simple selections to
choose whether to deploy it over SSL or not, and the path that you want it
accessed through.  A user or integrator no longer has to know all
of the complex entries in the various configuration files to do this …

What was not announced in this press release was any effort to complete
the broader vision of the BIAS.  This included the UDDI directory
components, and the other components that further automate the
BIAS.  Our intention was to create an alternative way to create,
deploy, and manage applications that was completely independent of
language, operating system, and hardware.

In any case … it’s fun to see the initial uptake by the market of
what we did get completed.  I would have enjoyed taking this
project a lot further!

Layers of abstraction … virtualization is the future. Management is critical!

I saw the update to this project a while ago, and believe that this is
a valuable solution for one niche of virtualization. User-Mode
Linux is a good solution for virtualizing Linux on Linux.
Although it does not provide the “processor/hardware” level of
virtualization that VMware, Virtual PC, or QEMU do,
it is able to create the necessary “virtual-Linux” for those who only
require such a solution. This is an “operating system
virtualization” that addresses an important segment. Management
of the “virtual machines” is going to be a critical aspect.

As I continue to look at virtualization technologies, I believe they
are going to become more and more important to users of all
kinds. With the new generations of MULCOTH
processors – multi-core, multi-threaded (hyperthreaded) – we will begin
to easily have the horsepower required to emulate CPUs and hardware at
acceptable levels of performance. I recently read a two-part
article by Nicholas Blachford that talks about some future possibilities for CPUs and then future possibilities for operating systems.
I have to agree with many of the perspectives that he expresses.
We are on the verge of having more computing power than we can
imagine. Raw compute capabilities will begin to outpace the
ability of the human sensory system to keep up with it.

In all of this, it appears that having several “virtual operating
systems” or “virtual computers” running on my laptop at any one time is
going to be the natural progression. Instead of getting rid of
old computers, I’ll simply “virtualize” the data and operating system
that is on them and I’ll have them forever!

I’m just beginning to experiment with this … I have some old laptop
hard drives that I am pulling into Virtual PC with their “virtual disk
wizard”. I’ll now be able to boot and run them if I ever want to
look back. At some point in the future, I’m sure that I’ll be
running my current Windows XP and Fedora Core 1 and 2 on some other
virtualization platform on some new 10GHz+ laptop. And it might
be in the next couple of years!

UMLazi 1.0.3.1. A management system for User-Mode-Linux-based virtual machines. [freshmeat.net]

Robotic Reasoning and Hide and Seek …

I was sitting here tonight doing some reading (catching up on a number
of message boards and RSS feeds) when I turned on the TV. I have a Dish Network satellite system, and really like the fact that there are some very cool educational channels.

Tonight I turned to channel 9404, which is UWTV from the University of Washington. This particular channel has some of the best computer science ‘colloquiums’ that I have seen. Oh yeah … and that is the word they use to name this series of programs on their own web site. This is a really cool program. It’s all about Robotic Reasoning, and the learning being done by creating robots that play Hide and Seek.

What is even more impressive is that the entire series appears to be
viewable on-line! I’m going to go and take a look at the other shows they have!  This program really looks interesting … Location Enhanced Web Services … oh man … I’m going to watch this one on ZebraNet for sure

Anyhow … it is amazing to see the animations they have created that
show the belief models that are created by the robot as it navigates
through a room looking for an
“evader”. These are extremely powerful robots able to travel at
over 2 meters per second, with impressive acceleration. They have
some sort of a scanning laser rangefinder for ‘vision’, and have
powerful computers on them. They have a programmed ‘evader’
robot, and then one or more ‘seeking’ robots that all cruise around
some building – the evader evading and the seekers seeking.

The level of mathematics being used is what I really
enjoy. The robot is creating a real-time belief model that
contains probability values for all parts of the building as it cruises
around looking. The models even show where previously searched
areas will show signs of slight increased probability of an evader
being present after the seeker has left the area. Very cool …
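The belief-model idea can be sketched with a toy one-dimensional example. Everything below is my own illustration, not the algorithm from the talk: a probability grid over rooms, an observation update when the seeker finds a room empty, and a simple motion-model diffusion step that shows why previously searched areas slowly regain probability after the seeker leaves.

```python
# Toy sketch of a pursuit-evasion belief model (my illustration, not
# the CMU algorithm): a probability distribution over rooms, updated
# as the seeker searches and the evader (possibly) moves.

def normalize(belief):
    total = sum(belief)
    return [p / total for p in belief]

def observe_empty(belief, cell):
    """Seeker looks in `cell` and sees nothing: zero it out, renormalize."""
    belief = belief[:]
    belief[cell] = 0.0
    return normalize(belief)

def diffuse(belief, stay=0.8):
    """Evader motion model: some probability leaks into adjacent cells.
    This is exactly why a searched (zeroed) room regains probability."""
    n = len(belief)
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[i] += p * stay
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                new[j] += p * (1 - stay) / 2
    return normalize(new)

belief = normalize([1.0, 1.0, 1.0, 1.0])  # uniform prior over four rooms
belief = observe_empty(belief, 0)          # room 0 searched, found empty
belief = diffuse(belief)                   # evader may have moved
print(belief[0] > 0)                       # searched room is no longer zero
```

After the observation, room 0 holds zero probability; one diffusion step leaks probability back in from its neighbor, reproducing the effect described above.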

The speaker – Geoffrey Gordon from CMU – then reviewed the range of
algorithms they used to produce the results, and also to demonstrate
the learning and compression capabilities.  Overall … this was
well worth watching.  He is doing some very impressive work …

80211.net and PayPal …

The last several days I’ve been completing something that I wanted to
do for over a year … integrate credit card processing into our
wireless network.  My friend and I run a Wireless ISP called 80211.net and we have several Internet cafés and a local residential network.  We have been running NoCatAuth as the authentication and authorization system.

Several years ago, I contributed to the project by updating the LDAP
connector so that we could use an LDAP Directory as the primary user
database.  I picked this direction since we could use the ‘account
expiration’ features of the directory and allow users to purchase time
on the network.  I also designed and implemented a ‘cash Internet
Access coupon’ system so that the cafés could sell the coupons to
customers.  The user entered the coupon information and the system
would credit their account by incrementing the ‘account expiration’
date.
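The “increment the account expiration” logic can be sketched as follows. This is a hypothetical helper, not the actual NoCatAuth/LDAP code: it assumes a coupon carries a number of days, which stack onto the current expiration if the account is still active, or restart the clock from now if it has lapsed.

```python
from datetime import datetime, timedelta

def redeem_coupon(current_expiration, coupon_days, now=None):
    """Credit an account by extending its expiration date.
    (Hypothetical helper -- the real system stored this in LDAP.)
    Active account: the coupon stacks onto the current expiration.
    Lapsed or new account: the clock restarts from now."""
    now = now or datetime.now()
    if current_expiration and current_expiration > now:
        base = current_expiration   # still active: stack on remaining time
    else:
        base = now                  # lapsed or new: start fresh
    return base + timedelta(days=coupon_days)

# Usage: an active account with 2 days left redeems a 7-day coupon.
now = datetime(2004, 8, 1)
exp = redeem_coupon(datetime(2004, 8, 3), 7, now=now)
print(exp)  # 2004-08-10 00:00:00
```

The stacking rule is what makes coupons safe to buy in advance: redeeming early never costs the customer the time they already paid for.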

The coupons have been working great for years, and offer a variety of
benefits to our company, and the customers.  But we wanted to
implement accepting credit cards to augment the coupons, and to allow
users to buy Internet Access when there was no one present.

We picked PayPal as the first
solution for this, and I am almost done with the work. 
It’s been fun learning how this works.  I had to write some new
scripts and web pages … first to offer the credit card payment, then
to verify the user account, and then to redirect the user to Paypal …
passing some custom transaction information.  PayPal has a very
simple system for passing these variables, and then offers its IPN
(Instant Payment Notification) service, which will call the script on my
server that adds credit to the user’s account.
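The IPN step can be sketched roughly like this. This is my outline of the flow, not the production script: an IPN handler echoes the posted fields back to PayPal with `cmd=_notify-validate` and only acts on a `VERIFIED` reply. The format of the `custom` field here (username and days purchased) is an assumption for illustration; the postback is injected as a function so the sketch runs without a network call.

```python
from urllib.parse import urlencode

PAYPAL_URL = "https://www.paypal.com/cgi-bin/webscr"  # IPN postback endpoint

def handle_ipn(params, postback):
    """Sketch of an IPN handler. `params` are the POSTed IPN fields;
    `postback` sends them back to PayPal with cmd=_notify-validate
    and returns PayPal's reply ("VERIFIED" or "INVALID")."""
    reply = postback(PAYPAL_URL, urlencode({"cmd": "_notify-validate", **params}))
    if reply != "VERIFIED":
        return None                             # ignore spoofed notifications
    if params.get("payment_status") != "Completed":
        return None                             # only credit completed payments
    # Assumed layout of our custom field: "username:days_purchased".
    user, days = params["custom"].split(":")
    return (user, int(days))                    # caller credits the account

# Usage with a stubbed postback (no real network call in this sketch):
ipn = {"payment_status": "Completed", "custom": "slemon:30", "txn_id": "X1"}
print(handle_ipn(ipn, lambda url, body: "VERIFIED"))  # ('slemon', 30)
```

The postback-and-verify round trip is the part that matters: it is what stops anyone who knows the script's URL from crediting their own account with a forged notification.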

Lastly, upon completion, PayPal will redirect the user to another one
of my scripts so that I can provide details of the transaction.  I
am now working on the last two steps of this process, and hope to
complete it in the next couple of days.  My folks are coming into
town today, and so I’m not sure how much time I’ll be able to spend
working on this.