What to do with Wireless Freeloaders … and the dangers of “open” wireless

I came across this web page – upside-down-ternet – today while looking for some other technical information. I thought that this was great … just way too creative!

It seems that this guy found out that some neighbors were using his open wireless network. Now I have thought about this before … and thought of many of the security issues. For example, did you know that when you use someone’s open wireless network, they can “sniff” all of your non-SSL-secured usernames and passwords from your applications? If you do not use SSL for your e-mail application, and you are using my open wireless network, I can get your e-mail account username and password. In fact … I have done this in the past. Likewise, any website that has you log in without using SSL is passing your username and password back and forth in the clear!
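To make this concrete, here is a simulated view of what a non-SSL POP3 login looks like “on the wire” (the username and password here are made up). A sniffer sitting on the same open network sees exactly these bytes:

```shell
# Simulated wire view of a non-SSL POP3 login -- fictional credentials.
# On a real open network, a packet sniffer watching the mail ports
# would see the same thing from any client in range.
printf 'USER alice\r\nPASS s3cret\r\n' | tr -d '\r'
```

Anything sent this way – POP3, IMAP, FTP, or a plain HTTP login form – is equally readable.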

Anyhow … what this guy realized is that he could lock down the wireless, or have fun with the neighbors! The first thing he chose to do was redirect all of their web requests to a kitten website. So no matter where they tried to go on the Internet, they would end up at the kitten site.

His second ploy is even better. What he realized is that he could re-route all of their web browser requests through his own “proxy” … without them even knowing! What he played with was modifying the images on all of the web pages they viewed. His two examples are flipping all of the images … or blurring them!
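For anyone curious how this works under the hood, the core of the trick on a Linux gateway is a single firewall rule that silently diverts the freeloader’s web traffic into a local transparent proxy. This is just a sketch with assumed values – 192.168.0.100 stands in for the neighbor’s address, and port 3128 for a proxy (such as squid with an image-rewriting helper) running on the gateway:

```
# Assumed setup: this box is the wireless gateway; 192.168.0.100 is
# the freeloading client; a transparent proxy listens on local port 3128.
# Silently redirect all of the client's outbound web requests into it:
iptables -t nat -A PREROUTING -s 192.168.0.100 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```

The proxy can then fetch each image, flip or blur it with an image tool, and hand back the modified copy – the client never sees any difference in the URL.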

The really important thing to realize here is that when you choose to jump onto someone’s open wireless network, all of your Internet traffic is being routed through their network, and maybe their servers. If the person who owns that network is smart, or devious, there is a good chance that your communications are being monitored, recorded, or modified. You might want to think about it …

Botnets … the unknown reality

I find it hard to believe how many people are not aware of botnets and what they are doing.  Of course, I’m so deep in technology and the Internet that my “common sense” is now severely distorted.
For those people who have not heard about botnets, this Washington Post botnet article is a great place to start.  It’s an eye opener.

When you read this article, just realize that this is about the botnets
that have been discovered and are known.  There are more out there
that are unknown, and the sizes of some of these botnets are staggering.  Read here, here, or here, about the FBI arrest of a man running a 400,000+ node botnet!  Oh yeah … and he was 20 years old.

Microsoft Research and Mesh Networking

I have been following the work that Microsoft is doing in their Windows Peer To Peer Networking.
This is actually some very impressive technology that allows for a
distributed set of users to create peer-to-peer groups for exchanging
data and information.  I’m working on some applications (actually
plug-ins for GoBinder) that are going to exploit this
technology.  Microsoft has put together a Peer To Peer SDK allowing you to perform name-to-IP address resolution (PNRP
– a serverless DNS technology), along with graphing and grouping APIs
for the transfer of data between the peers.  It’s all very
impressive stuff … and is in all Windows XP SP2 machines … and will
be in all Vista machines.  The bottom line … this is going to
drastically alter how ad-hoc groups of users on Windows machines will
be able to locate each other, communicate, and collaborate.

Today, I found yet another amazing technology out of Microsoft Research.
For years I have been tracking the “wireless mesh networking”
space.  This is where each node in a wireless network is a
repeater/relay for any other node that is within range.  With true
mesh technologies I can communicate with other users, even if they are
beyond the reach of my wireless signal, if there are one or more nodes
between us that are part of the “mesh” network.  Mesh networks are
the next big thing … even the cellular carriers are talking about
adding emergency mesh capabilities into cell phones.

What I found today is that Microsoft Research
has code available today that will allow you to experiment with some
pretty advanced mesh networking using your Windows XP machine! 
The Microsoft Research Networking Research Group has released their Mesh Networking software, and even a Mesh Networking Academic Resource Toolkit.
I’ve started to go through the documentation, and so far this is a very
impressive solution.  They have embraced and extended some of the
standards that are currently being developed:

We implement ad-hoc routing and link quality measurement in a module that we
call the Mesh Connectivity Layer (MCL). Architecturally, MCL is a loadable
Microsoft Windows driver. It implements a virtual network adapter, so that to
the rest of the system the ad-hoc network appears as an additional (virtual)
network link. MCL routes using a modified version of DSR (an IETF protocol) that
we call Link Quality Source Routing (LQSR). We have modified DSR extensively to
improve its behavior, most significantly to support link quality metrics.

The MCL driver implements an interposition layer between layer 2 (the link
layer) and layer 3 (the network layer). To higher layer software, MCL appears to
be just another Ethernet link, albeit a virtual link. To lower layer software,
MCL appears to be just another protocol running over the physical link.

I am really impressed to see this work this far along.  I have
been waiting for years to see mesh networking hit the masses … and
this is now getting close.  I’m now going to upgrade some of my
wearable computers to Windows XP just to experiment with this!

Adding a secondary IP address on Fedora Core

Every now and then I find another thing in Linux – or a particular
distribution – that really impresses me.  Tonight was one of those
times.  My home network is connected through a Fedora Core Linux
box, to a wireless link to my communications shed up on the mountain
above my house.  The shed then has a ~6 mile wireless link to my
office up at the Heber City Airport.

I have actually overlaid two different IP subnets on the wireless
network, one being the small subnet provided by my DSL provider, and
the other being a 10.x.x.x subnet that I created for a set of hosts
that I use for testing.  When I set up my Fedora Core Linux box as
a home gateway, I gave it a public IP address that is part of the fixed
range through my DSL modem.  So I can get in and out of my house
just fine, but I was unable to access anything on the 10.x.x.x network
from my house.

To resolve this, I had to bind a second address to the same NIC card
… something that I had done with other operating systems in the
past, but never with Linux.  After a quick search on Google, I
found a couple of articles that outlined a manual method of configuring
this, and then realized that I ought to look at the GUI config tools
that are a part of the Fedora distribution.  I opened the Network
Configuration tool, and simply clicked the “add” button … chose the
ethernet card, and provided address information.  I was impressed
that the tool added the new binding perfectly!  I clicked the
“activate” button, and up came the secondary IP address.

In a matter of 10 minutes I had the secondary binding working, and now
have a host that is connected to both the public IP address range, and
my private 10.x.x.x network through the same ethernet card.  Yeah
… I know … simple stuff.  But I had to comment on how impressed I
was that the config tools managed this in such an intuitive way!
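For reference, the manual method that those articles described amounts to creating an “alias” interface file next to the primary one. A sketch under assumed names – eth0 as the NIC and 10.0.0.2/24 as the secondary address (substitute your own):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:0
# Alias device riding on the same physical NIC as eth0.
DEVICE=eth0:0
IPADDR=10.0.0.2
NETMASK=255.255.255.0
ONBOOT=yes
```

After creating the file, `ifup eth0:0` brings the second binding up – which is essentially what the GUI’s “activate” button did for me.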


What hacks and trojans are really about

Few people seem to understand what all of these viruses and malware are really about.  Yes, there is a certain amount of spam designed to barrage you with pop-up ads … as though you would say “Wow, what a great ad!  I’m glad I get these pop-ups … I’ll have to go and spend money with these folks!”

One of the real business models behind all of this – the real people doing business in this space – are the ones that use large numbers of compromised computers at businesses and homes to launch DDOS (Distributed Denial Of Service) attacks.  These attacks are used for good ol’ conventional extortion.  It becomes a very simple case of “Pay me, or your Internet presence will be shut down!”  The person who controls the compromised machines can easily “task” them to attack various web sites, at various times, and for various amounts of time.  Reading this article, you can begin to get the idea that large scale “Internet vandalism” can quickly grow into a profitable – yet illegal – business.  I recently read another article where a California 20-year-old had over 400,000 machines under his control as a massively distributed “botnet” that he could divide up and control as a virtual military force.  Yes … 400,000 machines!

A while back I had one of my Linux boxes compromised through a hole in an Open Source PHP application.  The attackers were able to install and execute a small script that pulled down and ran a larger script.  That one actually attached to an IRC server and waited for additional commands.  I found that they then sent a command to download a DDOS script, and would then run it from time to time, attacking various sites.  I discovered this whole scenario when I noticed that my DSL line would get swamped from time to time, and isolated the traffic to that Linux box.  I actually had some fun before cleaning everything up.  I did patch the hole, but I modified the DDOS script to simply log information about the command and the target, and not actually generate the traffic.  It was fun to review the log and see that my box was being controlled by a compromised machine in South America, and that I was being used – at one point – to attack an on-line gambling site.
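The modification itself was conceptually trivial. Here is a minimal sketch of the idea (the function name, log path, and target address are all made up for illustration): gut the part of the script that generates traffic, and log the tasking instead.

```shell
# Neutered stand-in for the attack routine in the DDOS script:
# record what was requested, but generate no traffic at all.
attack() {
    target="$1"
    echo "`date` - attack requested against: $target" >> /tmp/ddos-audit.log
}

# A command from the controller that would have launched a flood
# now only leaves an audit trail behind.
attack "203.0.113.10"
cat /tmp/ddos-audit.log
```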

This got me thinking a lot about what we don’t know that we don’t know about the whole world of the Internet, spam, viruses, and malware.  In addition, it reinforces the levels of indirection that can easily be created to hide the identity of the controller.  But not forever!

Hackers Admit to Wave of Attacks. With their ringleader on the run, two cybervandals own up to using an army of compromised PCs to take down sites for commercial gain. By Kevin Poulsen. [Wired News]


Internet Infrastructure Ignorance

While at Internet Identity Workshop 2005 this past week, one of the interesting issues that
came up several times related to name spaces.  Specifically, there
were numerous times where people voiced their opinions about how name
spaces “should” map onto the Internet, and they used DNS as an example
of how things “should” be.  The problem is that they demonstrated,
by their words and arguments, that they were ignorant of how DNS
works.  The infrastructure of the Internet has become so
transparent that it seems to me people have begun to make gross
assumptions about its architecture, and this is at the root of
many of the security and privacy issues that we are seeing today.

I was looking forward to the presentation by Drummond Reed about XRI/XDI.  One of my concerns in any solid digital identity solution is the freedom to choose. 
I am not a big believer in compulsory community membership, but instead
believe that true freedom is represented by our ability to move in and
out of various communities at will, and to create new communities as we
want.  I really like this thought from “The Meaning of Life – Part II“:

There are millions of different social groups in the world, political, economic,
religious, philosophical, and cultural. These groups are all trying to bring
their particular vision into focus and build a life that is related to the
central principles of the group. Each of these groups is an experiment in
progress. As time passes, the ideas that are developed within these groups
either spread to the society as a whole or are abandoned as unworkable. This is
a Darwinian process that develops better ideas in the same way that evolution is
supposed to develop better animals. You can find more information about this
notion of “idea evolution” at the Memes: Introduction site. Even
groups that you personally dislike are working in your behalf, attempting to
build visions of the world that might allow you to interact with the world more
creatively and successfully.

Groups also serve as symbols in the social world. Groups with different
beliefs than your group provide you with viewpoints you wouldn’t have otherwise
considered. They also represent parts of your own mind that you are not focusing
on. However, if you fear those parts of your mind, this representation can
degenerate into projection, which is a bad thing.

What does this have to do with DNS and digital identity?  It is that I want the freedom to NOT
have one name, one identity, or one reference across all
communities.  Yes, there might be some places where I would
benefit from some level of federation.  At Internet Identity Workshop 2005 I actually
saw where OpenID is intended to not only provide Single Sign-On, but
also is specifically designed to cause a level of federation across web
sites.  I DO NOT
want this to be a requirement.  I am ok with it being an option.  It is this flexibility that I
believe will allow a particular solution to become successful and
ubiquitous.

So … I really wanted to hear more about XRI/XDI and i-Names
because I specifically wanted to learn if they were going to try to
“root” the entire name space into one fixed community.  My real
question was:  “Is XRI/XDI yet another Internet ‘tax’ like Domain
Names (DNS), where you have to pay some entity on an annual basis to
use the value of the technology?”  Or, was XRI/XDI simply one
solution that could be “rooted” anywhere, allowing various
communities to emerge, use the technology, and have the naming
relative to the community?  To my relief, the latter was
true.  XRI/XDI is based on specified root servers, and so naming
resolution is based on what root servers you choose.  In the end,
what this means is that my i-Name is only relative to the community.  It is not necessarily a globally unique identifier
for me.  It also means that any community can set up their own
root servers, and create name spaces of their own.  In the end
this means that =drummond.reed only refers to Drummond within the context of a particular community!  Bingo!  I like it!

What shocked me was the almost immediate upset expressed by numerous
people at the conference.  They wanted these names to be
absolutely globally unique … so that no one would ever be able to
get “my” name, and there would never be any ambiguity about who was
being referred to by an i-Name.  I fully understand the desire,
however what shocked me was the references to DNS as having this
characteristic!  People actually believe that DNS provides an
absolute unique identifier in any context!  The DNS system has
become so transparent, and ubiquitous that people no longer realize
that it is simply one community for naming on the Internet … and
there is nothing locking people into using it.  These people do
not seem to realize that I can set up my own root servers, and resolve
any DNS name to any IP address that I like!  In fact, I’m quite
surprised that the Open Source community has not stepped up to revolt
against the “Internet tax” imposed by ICANN and re-ignited the efforts
of OpenNIC, AlterNIC, and many of the other early pioneers in creating
a truly free naming system on the Internet.

DNS naming only works because
our servers, workstations and laptops all obey the rules, and the
default configurations imposed on us by our Operating Systems, ISPs,
and DHCP servers.  Anyone who has installed a DNS server could
easily find the default InterNIC root server list in one of the files
on their system …  /var/named/named.ca
on my Fedora Core 4 install.  I could go into my DNS server and
define “www.amazon.com” to be any IP address that I want.  If you
then happened to route through my DNS server (by being on my network)
then you would get *my* name resolution … not InterNIC’s.  If I
was an ISP, or even an Internet Cafe, there is little that you could
do, and in fact you would most likely just assume
that the DNS server you were using was trustworthy.  Another
common hack used by trojan horses on the net is to modify your local hosts file.  Almost all systems have a hosts file that will resolve naming on your local machine without requiring DNS at all!  If I put an entry in your hosts file for “www.amazon.com” then your machine will never even use DNS to attempt to resolve the name correctly.
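The hosts file point is easy to demonstrate safely by working on a copy (198.51.100.7 below is just a documentation placeholder for “any IP address that I like”):

```shell
# Work on a copy so the real /etc/hosts is untouched.
cp /etc/hosts /tmp/hosts-demo

# One appended line is all it takes -- a machine using this file would
# "resolve" www.amazon.com to this address without ever consulting DNS.
echo "198.51.100.7  www.amazon.com" >> /tmp/hosts-demo
grep "www.amazon.com" /tmp/hosts-demo
```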

There is nothing in DNS that stops me from adding other root servers, and creating my own free
Top Level Domains (TLDs).  It is only because people just fall in
line with the DNS configuration that it works.  It is only because
we allow our machines to automatically join the ICANN community. 
It is only because of our ignorance and lack of education about how all
of this works that we think that DNS names are globally unique in all
situations.  DNS names, and all naming, are the products of
specific communities or contexts.  Although these communities
might grow to be so large that we can’t seem to see anything else,
there is still something else.  I actually like it that way.

Virtualization Commoditization

I don’t believe that most people really see what is coming with
Virtualization … at multiple levels.  This plan by Microsoft,
IMHO, is common sense … and something that the Linux crowd really
will have to wake up to.  And I do not believe that Xen is it yet … not without full support for Microsoft operating systems.  Qemu is another interesting project that I have been following … they are also continuing to make advances.

With this plan by Microsoft – which I again believe is a great move –
it sounds like they will be expanding what they offer with Virtual PC … something that I use on a daily basis.  Microsoft will now be able to say “Hey … run Linux … as an APPLICATION on Windows!” 
So the installed base of Windows customers *will* be able to experience
the best of both worlds … they will maybe take a look at Linux by
running it on Windows.  They won’t have to give up Windows …
they keep Windows and get Linux too!

From what I understand, the Linux efforts for Xen will eventually be
able to compete as the next generation of processors are made available
from Intel and AMD.  As this next generation of CPUs hit, not only
will we see this advance in virtualization but these same processors
are going to be adding the multi-core, and multi-threaded capabilities
which will even further multiply the ability to virtualize operating
systems, and higher-layer abstractions.

Virtualization is starting to become more of a commodity at the “hardware
virtualization” level … and more and more applications are being
written to the LAMP abstraction … another form of
virtualization.  This is all moving rapidly up the stack. 
It’s fun to watch.

News.Com: Microsoft ‘hypervisor’ plan takes shape.
Bob Muglia, senior vice president in the Windows Server Division, said
at Microsoft’s Tech Ed conference here that the software will be “built
directly in Windows and will allow companies to virtualize multiple
operating systems.” [Tomalak’s Realm]

Fedora Core 4 … and Up2Date still sucks

I haven’t posted for a bit … too busy with work, play, and
family.  I did get a chance to install Fedora Core 4 though, and
it was a good experience.  I actually did an upgrade of a Fedora
Core 2 box, since that is what most of my servers are running now.

I wanted to see just how hard it might be to bring them all up to
date.  I had tried to do upgrades to Fedora Core 3, but that
failed completely.  There were far too many packages broken when I
tried to do this.  I’ll do some more work with Fedora Core 4 and
see if this might be my next platform to move to.

One thing that I have to comment on is that Red Hat and Fedora still
have a LOT to learn from Microsoft and Windows Update.  Since doing
this upgrade I have tried numerous times to get the server up to date
using the free Up2Date application.  And it fails 90% of the
time.  I fully understand this is free … and uses distributed
servers across the globe … but it still fails.  The packages
begin to download and inevitably there will be one that simply stops
during the download … and it never recovers.  I have left it for
hours, and nothing … just stuck sitting there.  What a piece of
crap.  Anyone that understands network programming has to account
for these types of failures and deal with them gracefully. 
Up2Date still isn’t completely baked …

Anyhow … we’ll see how things go besides that.  I really wanted
to get to Fedora Core 4 so that I could install and get some PHP5
applications working.  I have had to run them on Windows up till
now.  I’ll see if I can get things working by this weekend.

Anti-Spam and what to do today

It seems that anti-spam is, of course, all about verified
identity.  I really liked this article, and it got me looking into
what I can do on my mail servers today.  I realized there was
something I could do right away when I read this quote:

Reports indicate that as much as 50 percent of sending domains are
authenticating their outbound e-mail using SIDF and signatures.

Wow … am I behind the times!  I went to Google and did some searching.  I found a great blog post talking about SIDF, with links describing where to read more and what to do.

The first link is the Microsoft SenderID page that has a lot of information.  It also has a link to this SenderID Overview Presentation that gives a great overview of the concepts and how it works.  The last link is to the Microsoft Sender ID SPF Record Wizard … which will assist in creating the actual DNS records that you have to configure.

All of this is oriented towards telling the world that your mail server
is the legitimate source of mail for your domains.  Time to add
more identity information about my mail server into DNS … I’m
creating my SPF Records right now …
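For the curious, the end result of the wizard is a single DNS TXT record. It looks roughly like this (the domain, address, and exact mechanism string here are placeholders – yours will depend on your mail setup):

```
; Hypothetical SPF record in a zone file: "mail for example.com comes
; only from our MX hosts and this one address; reject everything else."
example.com.   IN   TXT   "v=spf1 mx ip4:203.0.113.25 -all"
```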

E-mail authentication. Then what?. Sendmail CEO Dave Anderson explains why we’re approaching the end of e-mail as we know it. [CNET News.com]

Up2date e-mail notifications

A while back I had started to experiment with a way to get e-mail
notifications from my servers when up2date detected that new packages
were available. I am running a series of Fedora Core 1, 2, and 3
boxes and it seems that the updates come quite frequently.

I decided that this weekend I would sit down and write a new bash
script that could be run daily by cron. Here’s what I wrote:

#!/bin/bash
# First let's check with up2date ...

# have up2date list the available packages ...
up2dateOutput=`up2date --nox -l`

# now check to see if packages are listed ...
# take the output, grep for the long hyphen divider
# (grabbing that line and the next line), then awk the
# second line to see if there is a package name at the beginning
firstUpdatePackage=`echo "$up2dateOutput" | grep -A1 "\-\-\-\-\-\-\-\-\-\-" | awk '{if (NR==2) print $1}'`

#echo "First package: |$firstUpdatePackage|"

if [ ! -z "$firstUpdatePackage" ]   # there is a package name
then
  #echo "Sending e-mail ..."
  nodeName=`uname -n`                  # get the host name
  mailSubject="Up2date - $nodeName"    # create the e-mail subject line
  echo "$up2dateOutput" | mail -s "$mailSubject" root   # send the e-mail to root
fi
exit 0

So far, it appears to do exactly what I had hoped … an e-mail notice
when there are packages that can be updated on my servers!
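To run it daily, all that is left is a one-line cron entry (the path is an assumption – wherever you saved the script):

```
# Hypothetical /etc/crontab entry: run the update check at 6:00 each morning.
0 6 * * * root /usr/local/bin/up2date-check.sh
```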