Adding a secondary IP address on Fedora Core

Every now and then I find another thing in Linux – or a particular
distribution – that really impresses me.  Tonight was one of those
times.  My home network is connected through a Fedora Core Linux
box to a wireless link to my communications shed up on the mountain
above my house.  The shed then has a ~6 mile wireless link to my
office up at the Heber City Airport.

I have actually overlaid two different IP subnets on the wireless
network, one being the small subnet provided by my DSL provider, and
the other being a 10.x.x.x subnet that I created for a set of hosts
that I use for testing.  When I set up my Fedora Core Linux box as
a home gateway, I gave it a public IP address that is part of the fixed
range through my DSL modem.  So I can get in and out of my house
just fine, but was unable to access anything on the 10.x.x.x network
from my house.

To resolve this, I had to bind a second address to the same NIC
… something that I had done with other operating systems in the
past, but never with Linux.  After a quick search on Google, I
found a couple of articles that outlined a manual method of configuring
this, and then realized that I ought to look at the GUI config tools
that are a part of the Fedora distribution.  I opened the Network
Configuration tool, and simply clicked the “add” button … chose the
ethernet card, and provided address information.  I was impressed
that the tool added the new binding perfectly!  I clicked the
“activate” button, and up came the secondary IP address.

In a matter of 10 minutes I had the secondary binding working, and now
have a host that is connected to both the public IP address range, and
my private 10.x.x.x network through the same ethernet card.  Yeah
… I know … simple stuff.  But I had to comment on how impressed I
was that the config tools managed this in such an intuitive way!

[tags: ]

The value of learning multiple languages

Last night I was talking with a 13-year-old who is creating some
impressive works in Photoshop.  He created some very cool graphics
for his Counter-Strike clan web site.  I have to admit that I was
really blown away by what he created.

As we talked I asked him what he used to create the graphics, and that
is when he told me about using Photoshop.  I asked what else he
was doing on the web and he replied “Some Javascript … and a little
PHP.”  Wow … I was surprised.  We talked about Javascript,
and although his knowledge was not incredibly deep, he had a good grasp
of the basics of the language.  I showed him some of the stuff
that I have been working on lately and he asked some good
questions.  We then progressed to talking about PHP, and he
explained some of the small things that he is learning there. 
Impressive.

What struck me today is the shift in learning to “speak” different
“languages”.  Decades ago, or even hundreds of years ago, it was
seen as important to learn to speak other languages from around the
globe.  People in non-English speaking countries learned
English.  Most of the schools here in the USA taught middle- and
high-school students Spanish, French, German, and other
languages.  (I actually spent years learning Spanish … although
it is very rusty at this point!)  It always seemed to me that the
intent was to give me a leg up on interacting with people of other
countries and origins.  I have, from time to time, found value in
my Spanish learning.

Today it now appears that the future is more in “talking” to computers
and the Internet.  And so now it seems there is more value in
learning “computer” languages … than “foreign” languages.  As I
thought about this today, I realized that this is probably true. 
More and more people from around the globe are learning English, and
much of the Internet – and computer technology in general – is based on
English.  So where do children turn?  To “interacting” with
computers.  And so understanding the languages used by computers
is becoming more and more important.

It’s fun to think about the evolution of computer languages, and to see
the various roots of the popular languages.  In addition, most of
the scripting languages are becoming so high-level, and the component
libraries so rich, that even a beginner at programming can create
powerful applications.  With the Internet as a platform, Web
Services, XML, RSS, and many other standards are emerging as the APIs
independent of operating system … or programming “language”.  I
can only imagine what a 13-year-old, who today is learning Javascript
and PHP, might be developing in 5 or 10 years.  I know that it
will be fun to see!

New version of the RadioAtomBridge tool – v3.2!

Tonight I posted the latest version (v3.2) of my RadioAtomBridge tool for
Radio Userland. This tool is a way to mirror blog posts from Radio Userland
to blogs that are hosted at Blogger.com. The tool has been working
great over the last year, but recently Blogger.com changed their API
(wisely!) to require the use of SSL. I discovered this when my
tool suddenly stopped working.

I spent the last several weeks making a number of changes, and cleaning
up the tool in general. There were several fixes that I wanted to
get in place, and I think that I have covered the majority of
them. The primary change was to allow you to specify using
SSL/TLS for the Atom posts. This does require a download from
Userland of the Radio Userland TLS module … but it’s a free download.
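To illustrate what the change amounts to, here is a rough sketch in JavaScript (RadioAtomBridge itself is written in Radio’s own scripting language, and the blog ID in the path below is a placeholder, not a real endpoint):

```javascript
// Illustrative sketch only: build a minimal Atom entry and the request
// settings for sending it. The only real change Blogger forced is that
// the post must now travel over SSL/TLS (port 443) instead of plain HTTP.
function buildAtomEntry(title, body) {
  return '<?xml version="1.0" encoding="utf-8"?>' +
    '<entry xmlns="http://www.w3.org/2005/Atom">' +
    '<title>' + title + '</title>' +
    '<content type="html">' + body + '</content>' +
    '</entry>';
}

// Hypothetical request settings; BLOG_ID stands in for a real blog ID.
var postOptions = {
  host: 'www.blogger.com',
  port: 443,                            // SSL/TLS instead of plain HTTP
  path: '/feeds/BLOG_ID/posts/default', // placeholder path
  method: 'POST',
  headers: { 'Content-Type': 'application/atom+xml' }
};
```

The entry itself is unchanged from before … only the transport moved to SSL.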

I’ve done a good bit of testing, and have actually posted this article
with the tool. I believe that I have things working well enough
to release a build as v3.2 … so go and grab a copy and give it a
try! If you have problems, please post comments on the
RadioAtomBridge blog … I’ll look for them there!

What is the true cost?

I am always a bit skeptical when I read articles like this.  Yes
… it’s all marketing speak.  How can anyone know until they
actually experience the migration/upgrade process?  I’m not being
“pro-Vista”, nor “anti-Linux” … I’m working to be
“pro-realistic”.  I believe that there are multiple points that
Jack is not addressing here.

First, I am curious about the “in-place” migration to Vista vs. the
“in-place” migration to Linux.  Is there even an “in-place”
migration to Linux?  I have to admit that I have not looked into
this in detail, however I can almost promise you that we will see an
in-place migration tool, that will allow any user or company to migrate
to Vista with minimal (yes – minimal!) pain.  Will there be some
problems?  Of course.  But let’s now compare that to a
migration to Linux from Windows.  Could I go to a Windows user,
and perform a clean “in-place” migration, and have them productive when
it completes?  I have my doubts here.

Second, I have to admit that hardware evolution is against Jack and
Novell.  As people upgrade hardware – which they will – they are
going to get that Windows Vista license, instead of the older Windows
XP license … and they will migrate.  And it will be almost
painless.  The cost of the “upgrade” of the OS will be hidden in
the cost of the new hardware.

To me, the mentality is still all wrong.  To continue to fight
against something only continues to strengthen it.  Even Novell
employees have heard the old saying “embrace, extend, extinguish”. 
It’s all about creating more powerful abstractions … there is much
more value in Mono than in Linux … IMHO.  There is much more value
in the applications than in the OS.  There is even more value in the
services hosted on the Internet … enabling the use of the Internet Platform for business.

To fight these fights, and make these claims, is just old school
thinking and behavior.  Microsoft knows this, and the real Open
Source community knows this.

Novell: Vista will drive users to Linux. Jack Messman claims that the cost of moving to Windows Vista will prompt users to consider moving to desktop Linux. [CNET News.com]

Tagging in Radio Userland

This is my first post using my new tagging tool within the Radio Userland
blogging application, using Mozilla. It really has been quite an
adventure in learning Javascript at a much deeper level … for
me. Radio has a WYSIWYG editor within its web interface that can
be used for writing posts, and there are two flavors (of course):
one for IE and one for Mozilla.

What I wanted to do is add a new toolbar button in the WYSIWYG editor
to allow me to enter a list of comma-delimited words, and have those
words converted to the proper HTML for tags … like the tags at the
bottom of this post. What an adventure!
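As a sketch of the conversion step itself (the function name is mine, and the Technorati-style URL with `rel="tag"` is my assumption about “the proper HTML for tags”):

```javascript
// Hypothetical version of the conversion: turn a comma-delimited list
// like "linux, fedora core" into a row of tag links.
function tagsToHtml(tagList) {
  var links = [];
  var parts = tagList.split(',');
  for (var i = 0; i < parts.length; i++) {
    var tag = parts[i].replace(/^\s+|\s+$/g, ''); // trim whitespace
    if (tag.length === 0) continue;               // skip empty entries
    links.push('<a href="http://technorati.com/tag/' +
      encodeURIComponent(tag) + '" rel="tag">' + tag + '</a>');
  }
  return '[tags: ' + links.join(', ') + ']';
}
```

The resulting HTML string is what then gets inserted into the post at the cursor position.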

I was actually able to get the code written and working for IE first
… but not exactly what I wanted. I wanted to have the tags
‘inserted’ at the cursor position, but could only get ‘append’
working. So the tags would always end up at the end of the
post. I then worked on the Mozilla version of this, and got
everything working perfectly! There was already some sample code
on how to do an ‘insert’ in Mozilla. Well … if I could do it in
Mozilla, then I had to find how to get the same functionality in
IE. The problem is that the Mozilla code (of course) would not work
with IE. In the end, it was three simple lines of Javascript!

var win = document.getElementById("idEdit");     // the editor's iFrame
var sel = win.document.selection.createRange();  // IE selection range
sel.pasteHTML(insertNode);                       // paste at the cursor

First, get the iFrame object that the user is typing in, then get the
selection range, then simply paste the HTML/text into that
selection. Simple! In fact much easier than the Mozilla
code that I was looking at. Of course, finding those three lines
was a real pain … until I found this great article at WebReference.
WebReference has always been an incredible site for reading and
learning about HTML, CSS, and Javascript. They came through again
with this article which finally gave me the details to resolve this
issue. Now I’m on to adding toolbar buttons to ease the way that
I insert images into a post. We’ll see how quick that goes …

What hacks and trojans are really about

Few people seem to understand what all of these viruses and malware are really about.  Yes, a certain amount of the spam that you get is designed to then barrage you with pop-up ads … as though you would say “Wow, what a great ad!  I’m glad I get these pop-ups … I’ll have to go and spend money with these folks!”

One of the real business models behind all of this – the real people doing business in this space – is using large numbers of compromised computers at businesses and homes to launch DDOS (Distributed Denial Of Service) attacks.  These attacks are used for good ol’ conventional extortion.  It becomes a very simple case of “Pay me, or your Internet presence will be shut down!”  The person who controls the compromised machines can easily “task” them to attack various web sites, at various times, and for various amounts of time.  Reading this article, you can begin to get the idea that large scale “Internet vandalism” can quickly grow into a profitable – yet illegal – business.  I recently read another article where a California 20-year-old had over 400,000 machines under his control as a massively distributed “botnet” that he could divide up and control as a virtual military force.  Yes … 400,000 machines!

A while back I had one of my Linux boxes compromised through a hole in an Open Source PHP application.  The attackers were able to install and execute a small script that pulled down and ran a larger script.  That one actually connected to an IRC server and waited for additional commands.  I found that they then sent a command to download a DDOS script, and would then begin to run it from time to time attacking various sites.  I discovered this whole scenario when I noticed that my DSL line would get swamped from time to time and isolated the traffic to that Linux box.  I actually had some fun before cleaning everything up.  I did patch the hole, but I modified the DDOS script to simply log information about the command and the target, but not actually generate the traffic.  It was fun to review the log and see that my box was being controlled by a compromised machine in South America, and that I was being used – at one point – to attack an on-line gambling site.

This got me thinking a lot about what we don’t know that we don’t know about the whole world of the Internet, spam, viruses, and malware.  In addition, it reinforces the levels of indirection that can easily be created to hide the identity of the controller.  But not forever!

Hackers Admit to Wave of Attacks. With their ringleader on the run, two cybervandals own up to using an army of compromised PCs to take down sites for commercial gain. By Kevin Poulsen. [Wired News]

[tags: ]

Next GoBinder Beta Build?

Today we are working hard to complete the next beta build of GoBinder 2006.  It’s taken a number of weeks to get it completed, however I think that it is well worth the wait.  The engineers have been doing a lot of work on performance, bug fixes, and a lot of enhancements.  Things are really getting a lot more solid.  If we get a good build today, then we go into code lock-down for a week or so to clean up and fix any last major bugs before doing a public release.

We’ll also be releasing a new version of the GoBinder SDK which allows you to easily create new plug-ins for GoBinder, Backpack, or the new version of Plan Plus.  I’m working on several new sample plug-ins for doing peer-to-peer sharing, chat, and some RSS/Blogging tools.  I’ve got numerous other ideas … but can only do so much right now.

I’ll post later today, or Monday, when we will release the next beta on the Agilix Forums … it ought to be within the next week.

[tags: ]

Playing with Virtual Earth

I have to admit that I really like Google Maps.  It’s an impressive web application, and has completely altered how I use maps – and send map information – to friends, family, and coworkers.  It’s now so easy to quickly locate something on Google Maps, get the link, and send it.

As of yesterday I started to experiment more with Microsoft’s Virtual Earth, and I now think that I found something I like even better than Google Maps!  I was listening to the Virtual Earth podcast from Where 2.0 and Stephen talked about the features that are there … many of which I hadn’t realized.

First, if you put in a query, it will display the results contained in the map you are viewing.  As you pan the map, the query results update.  You can use the compass for “game-panning” by clicking your mouse on the compass and holding the mouse button down.  You can also use the Tools->Scratchpad to drag and drop locations that you want to keep.  It’s then easy to e-mail these to a friend.

I’m going to continue to use both, and comment on which I like better.  I also am going to start to experiment with the developer APIs.  I have some ideas of some data that I want to place on a map.

[tags: ]

Apple and User Interface

Anyone who has followed the entire life of Apple Computer knows that they have been involved in several User Interface disputes and lawsuits.  The whole desktop idea was born from a visit to Xerox PARC.  Now it looks like Creative Labs has quietly patented the way that music can be navigated using the ID3 tag information.

The interesting part is that the Apple iPod might be in violation of this patent!  It will be interesting to see what comes of this … I have to admit that I like my iPod!

After patent, Creative examines all options. Creative Technology is “evaluating all alternatives” now that it has received a patent for music player interfaces such as those found on rival Apple iPods. [Computerworld News]

[tags: ]