Group Outlines … an interesting idea …
This is a cool variation on Esther's article about conference blogging. The idea that a group of people would build an overall conference outline in real time … multiple people contributing to the construction of a single outline of notes … is very cool. I like this idea as a new form of potential collaboration …

Masukomi sees a connection between instant outlines and conferences.  [Scripting News]

Human forms of real-time telemetry …
This is an excellent article with a prime example of the future implications of mobile/wireless networking and the applications built on it. Esther is writing about what occurred at one of her conferences when some of the attendees were blogging in real time … providing real-time “telemetry” about the conference proceedings.

We started to do this type of thing years ago at Novell, when we would use instant messaging and chat during internal road-map review meetings. A large group of us were able to multiplex between the presentations and our group analysis without generating verbal side conversations. This also allowed us to ask questions and share opinions among the members of our team, minimizing the need to interrupt the speaker.

All of this culminated in an application that I wrote called LiVote … for Live-Vote … an experiment in letting a group of people see a set of questions about a presenter or their presentation. Under each question is a slider that allows each person to “vote” on that issue and rate it between 0 and 10. Below each slider is a bar graph that shows the average rating of all of the voters. So I can be giving real-time feedback on my opinions … to my team … and potentially to the speaker!
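To make the idea a bit more concrete, here is a minimal sketch of the kind of vote aggregation LiVote does, assuming each slider value arrives as a number between 0 and 10. The class, the questions, and the names are just illustrative … this is not the actual LiVote code:

```python
# Minimal sketch of LiVote-style aggregation: each attendee moves a slider
# (0-10) per question, and everyone sees the running average for that question.
# The class and question names are illustrative, not the actual LiVote code.

class LiveVoteBoard:
    def __init__(self, questions):
        # question -> {voter: rating}
        self.votes = {q: {} for q in questions}

    def vote(self, voter, question, rating):
        """Record (or update) one voter's 0-10 rating for a question."""
        if not 0 <= rating <= 10:
            raise ValueError("rating must be between 0 and 10")
        self.votes[question][voter] = rating

    def average(self, question):
        """The value the bar graph under each slider would display."""
        ratings = self.votes[question].values()
        return sum(ratings) / len(ratings) if ratings else 0.0

board = LiveVoteBoard(["Pace of the talk", "Depth of content"])
board.vote("alice", "Pace of the talk", 7)
board.vote("bob", "Pace of the talk", 4)
print(board.average("Pace of the talk"))  # 5.5
```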

I have thought about completing this application in a way that could be utilized at future conferences. Maybe I’ll have to bring it back to life and touch base with Dan Gilmore about it … 😉

Esther Dyson on the connection between blogging and face to face conferences.  [Scripting News]

Still tough Wearable Computer marketplace …
I have been working with the new MA V from Xybernaut for a while now, and although I am impressed with the machine, I can see why Xybernaut is still experiencing problems. I know that ViA is supposed to be back in some form also … but these are tough times for wearable computer vendors.

Xybernaut Slashes Workforce. Wearable device vendor still expects growth. [allNetDevices Wireless News]

More GPS and mapping software in Open Source …
There is more Open Source software appearing continuously that is experimenting with mapping and geographic visualization. This looks like one that has some nice features and capabilities. It addresses 3D displays, and also the ability to access map data from sources that are constantly updated. This last feature is one that I have been looking at for a while with some friends. It seems to me that I would want to be downloading and caching map data all the time as I travel, always fetching the most up-to-date data … maybe even data that was created by friends or family!
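To illustrate the “download and cache as I travel” idea, here is a rough sketch of a location-keyed map tile cache. The tile server URL, the tile/zoom scheme, and the refresh policy are all placeholder assumptions of mine, and nothing here reflects how gps3d itself works:

```python
# Rough sketch of location-keyed map tile caching: as I move, fetch the tile
# covering my current position if I don't already have it, or if my copy has
# gone stale. The tile server URL and zoom scheme are hypothetical placeholders.
import math, os, time, urllib.request

TILE_SERVER = "http://example.org/tiles"   # placeholder, not a real service
CACHE_DIR = os.path.expanduser("~/.mapcache")
MAX_AGE_SECONDS = 7 * 24 * 3600            # refresh tiles older than a week

def tile_for(lat, lon, zoom=12):
    """Web Mercator tile indices for a lat/lon at a given zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    return zoom, x, y

def cached_tile(lat, lon, zoom=12):
    """Return a local path for the tile covering (lat, lon), fetching if needed."""
    z, x, y = tile_for(lat, lon, zoom)
    path = os.path.join(CACHE_DIR, f"{z}_{x}_{y}.png")
    stale = (not os.path.exists(path) or
             time.time() - os.path.getmtime(path) > MAX_AGE_SECONDS)
    if stale:
        os.makedirs(CACHE_DIR, exist_ok=True)
        urllib.request.urlretrieve(f"{TILE_SERVER}/{z}/{x}/{y}.png", path)
    return path
```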

gps3d 1.18. A GPS 3D visualization utility. [freshmeat.net]

Next generation cell phones …
This is the coolest new cell phone that I have heard of! A wrist-watch cell phone that transmits the sound through the bones in your hand to your fingertip … so you listen to the person you are talking to by putting your finger in your ear! Too cool …

Let Your Fingers Do The Talking [Nooface: In Search of the Post-PC Interface]

Time between posting …
I am starting to learn that my weblog provides a good measure of time for me. I can’t believe that it has been almost a month since I caught up on my backlog of reading and made my last posts.

I have been having a lot of fun though … and am about to venture out and make some more changes to my blog page. I have been in contact with a few open source projects … working on some digital identity work, and getting NoCat working for our http://80211.net project. I was also able to buy and install a new keyboard on my laptop, which is making typing a lot easier! I had started to learn whole new ways of typing when my left-hand CTRL and Shift keys started to fail. 😉

Overall, the last month has been a blast … I had a few trips on the road: New York to Chase Manhattan for work on a disaster recovery solution, and Silicon Valley for the Foresight Institute Senior Associates Gathering and a series of analyst and partner meetings with Vultus. The more I work with Vultus, the more I am learning about the implications of the Web Services revolution … giving me lots of ideas about the future.

I also ended up doing a couple of presentations on wearable computers … one for the State of Utah school systems’ “Test-out” Challenge, talking to a lot of high school students from across the state … another for the Utah Computer Society. I’m also meeting with a variety of companies about my “mesh network” ideas … one that is implementing a last-mile solution (UINetworks), and another making some inexpensive radios that will provide the peer-to-peer wireless I have been looking for.

Lastly … I’m completing some work on my video/photo kiosk ideas, my wearables, and my location-based services applications. Oh yeah … and I have a baby coming in September, so I have been learning a lot about what to expect with fatherhood!

It’s been a fun month … and it’s going to be a fun summer …

Where to Wear your computer?
This article covers a couple of areas of research that are exploring the best ways to integrate wearable computers onto (and into?) the human body. The design of human augmentation and human extensions …

That Computer Looks Great on You. For wearable computers, how your body moves is just as important as how the technology works. Brad King reports from the South-by-Southwest Interactive conference in Austin, Texas. [Wired News]

More Gesture Interface code for Linux …
As wearable computers continue to gain in popularity, people are going to recognize the importance of “gesture interfaces” for data input. It seems that most people think we will be talking to our computers using voice recognition; however, in many venues this is not a reasonable solution. Imagine sitting on the bus at prime commute time … it’s already difficult to have a private personal or business conversation on your cell phone. In addition, the ability of current voice recognition solutions to discriminate your voice in a loud setting is also limited.

One of the solutions that I am using now is a simple gesture interface and touchpad. I am continuing my work on this on the Windows platform, and now it appears to be coming to Linux! I’m going to install it on my Red Hat wearable system and give it a try!
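For a feel of what a “simple gesture interface” can look like, here is a toy sketch that classifies a touchpad stroke into a swipe direction and maps it to a command. It is purely illustrative … not the gesture code I’m actually running on Windows, and not this new Linux project:

```python
# Toy sketch of a touchpad gesture classifier: reduce a stroke (a list of
# (x, y) samples) to a swipe direction that can be bound to a command.
# Purely illustrative; not the actual gesture code mentioned above.

def classify_stroke(points, min_distance=30):
    """Return 'left', 'right', 'up', 'down', or 'tap' for a stroke."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < min_distance and abs(dy) < min_distance:
        return "tap"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"   # screen coordinates: y grows downward

# Bind gestures to commands, e.g. for navigating a head-mounted display menu
COMMANDS = {"right": "next item", "left": "previous item",
            "up": "menu", "down": "select", "tap": "click"}

stroke = [(10, 100), (60, 102), (140, 98)]   # a quick swipe to the right
print(COMMANDS[classify_stroke(stroke)])     # next item
```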

Open Source Gesture Interface For Linux [Nooface: In Search of the Post-PC Interface]

Augmented Reality and Location-Based services …
There is much talk in a variety of forums about “virtual reality” … the creation of virtual worlds that all of us will explore using goggles connected to powerful computers. I am not as thrilled (yet) about “virtual reality” as I am by the near-term possibilities of “augmented reality”. This is one of the areas that has been the focus of my attention lately.

Augmented reality can be thought of as the enhancement of our senses when viewing the world around us. One of the common examples from the movies is the set of scenes from Terminator shown from the Terminator’s point of view, with their red tint and constantly updating digital information overlaid. This type of visual augmentation is being explored by many different universities and is slowly progressing through the experimental stages. It will eventually be very commonplace, and will be an integrated part of any pair of glasses or sunglasses. The information displayed to you will be completely configurable, and will be based on where you are, what you are doing, and what is going on around you.

Audible Augmented Reality
Another form of augmented reality that, in my opinion, will be even more important in the next few years is audible augmentation. Just as a computer can produce visual information that can be overlaid onto my glasses, it can also “tell” me things using recorded or synthesized voice. I will be skating through town on my rollerblades, listening to my favorite MP3s, and as the computer detects a Starbucks coming up on my left, I will suddenly hear something like “There is a Starbucks … 3 blocks ahead … on your left.” These types of audible cues will assist me in locating places of interest, along with friends and family.
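Here is a sketch of how such a cue might be generated, assuming I already know my own position and heading and the coordinates of the point of interest. The block length and the speak() stub are placeholders … a real system would drive a recorded-voice or speech-synthesis engine:

```python
# Sketch of turning a nearby point of interest into a spoken cue like the
# Starbucks example above. The block length and the speak() placeholder are
# assumptions; a real system would use a text-to-speech engine.
import math

BLOCK_METERS = 100  # rough city-block length, an assumption for this example

def relative_cue(my_lat, my_lon, my_heading_deg, poi_lat, poi_lon, poi_name):
    # Approximate local east/north offsets in meters (fine over short distances)
    north = (poi_lat - my_lat) * 111_320
    east = (poi_lon - my_lon) * 111_320 * math.cos(math.radians(my_lat))
    distance = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360
    relative = (bearing - my_heading_deg) % 360
    if relative < 30 or relative > 330:
        side = "ahead"
    elif relative < 150:
        side = "on your right"
    elif relative < 210:
        side = "behind you"
    else:
        side = "on your left"
    blocks = max(1, round(distance / BLOCK_METERS))
    return f"There is a {poi_name} ... {blocks} blocks ... {side}"

def speak(text):
    print(text)  # placeholder for a recorded-voice or speech-synthesis call

speak(relative_cue(40.7608, -111.8910, 0, 40.7635, -111.8925, "Starbucks"))
```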

What becomes interesting is thinking about where I will get the information to generate this visual or audible augmentation of the world around me. How will this really occur?

Audible Augmented Reality Project
In my current research, I have started to outline several of the basic requirements of such a system, along with some of my predictions on how this system will be created and evolve. I am currently writing an application to build the foundation of my Audible Augmented Reality Project, and some of my thoughts are:

  • For the foreseeable future I will have “intermittent connectivity” to the Internet as I roam from place to place. My application will have to automatically accommodate the fact that I might switch from no connectivity, to low-speed cellular, to 802.11b, back to no connectivity, and on and on. Because of this, I will have to support a variety of real-time and store-and-forward technologies to communicate my location and retrieve information about the world around me.
  • Since I will roam from indoors to outdoors, cities to country, open spaces to dense downtowns, my GPS accuracy will range from excellent to none. My application on my wearable computer will monitor my personal GPS for my location and the quality of the fix. It will keep track of my current fix, and if I do not currently have a good fix, it will remember the “last good fix”.
  • When there is a good fix … or even a last good fix … and I have network connectivity, my application will report my location to one or more Geographic Location Servers. It is very important to understand that I will not be reporting my location to a single place … but instead to numerous services on the Internet. This is what provides the redundancy, privacy, and my ability to control the sharing of this information with a variety of other people.
  • My application will provide me a way to quickly create “LocationMarks” to note and describe a particular point of interest. A LocationMark will consist of (at least) the latitude, longitude, altitude, my name for the place, one or more categories, text descriptions, and maybe even audio, photos, or video (there is a rough sketch of this after the list). I can then store these LocationMarks on one or more Geographic Location Servers where I can share this information with others, or later look up and reference these points.
  • More and more people will be running similar applications that will send their current geographic location information to one or more Geographic Location Servers, and they will also be saving and sharing their LocationMarks through these servers. We will all have our “personal” LocationMarks stored on our wearable computers, along with “public” LocationMarks that we have accessed or used recently. These LocationMarks will become as popular as web pages are today … providing information that is associated with a specific geographic location, and that can be used to augment my experience when I am at or near that location.
  • My application will give me the ability to configure a set of “persistent queries” against any number of Geographic Location Servers. These queries will search for any LocationMarks, meeting a defined set of criteria, within a defined distance from me. I might have several of these queries constantly searching for LocationMarks of interest … such as Starbucks Coffee locations. Further, I might actually be sending queries to Starbucks’ own Geographic Location Server so that I am getting this information directly from the source!
  • Lastly, my application will constantly generate audible notifications when these points of interest come into range as I travel.
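To make the LocationMark and “persistent query” ideas above a little more concrete, here is a rough sketch. The field names, the in-memory stand-in for a Geographic Location Server, and the notification wiring are all assumptions of mine for illustration … not a real protocol, and not the application I am actually writing:

```python
# Rough sketch of the pieces described above: a LocationMark record, a stand-in
# Geographic Location Server, a "last good fix", and a persistent query that
# fires an audible notification when a matching mark comes into range.
# Everything here (field names, server interface) is a made-up illustration.
import math
from dataclasses import dataclass, field

@dataclass
class LocationMark:
    lat: float
    lon: float
    alt: float
    name: str
    categories: list = field(default_factory=list)
    description: str = ""
    media: list = field(default_factory=list)   # audio, photos, video, etc.

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in meters between two lat/lon points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class GeographicLocationServer:
    """In-memory stand-in for one of the many servers I might report to."""
    def __init__(self):
        self.marks = []
    def publish(self, mark):
        self.marks.append(mark)
    def query(self, category, lat, lon, radius_m):
        return [m for m in self.marks
                if category in m.categories
                and distance_m(lat, lon, m.lat, m.lon) <= radius_m]

@dataclass
class PersistentQuery:
    category: str
    radius_m: float

def check_queries(last_good_fix, queries, servers, notify):
    """Run each persistent query against each server around my last good fix."""
    lat, lon = last_good_fix
    for q in queries:
        for server in servers:
            for mark in server.query(q.category, lat, lon, q.radius_m):
                meters = int(distance_m(lat, lon, mark.lat, mark.lon))
                notify(f"{mark.name} is about {meters} meters away")

# Example: one server, one LocationMark, one query, printed "audible" cue
server = GeographicLocationServer()
server.publish(LocationMark(40.7630, -111.8911, 1300.0, "Starbucks",
                            categories=["coffee"]))
check_queries((40.7608, -111.8910), [PersistentQuery("coffee", 500)],
              [server], notify=print)
```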

I currently have an application that implements parts of the first two points, and I am working on the next couple. I have several other ideas and theories on how location-based services will evolve from here, but this is a start.

This article really prompted me to document what I am working on now, and to get some of these ideas in writing and out on the web. I’ll post more on the HumanXtensions web site as I make progress. This is still all a part of my original quest to build a “Metropolitan Area Roller Tag Game” using wearable computers … but that is going to be documented in another post. 😉

Location-Based Internet Communities. Geographic Information System technology has traditionally been relegated to the domain of generating maps and driving directions online. What would happen if you combined a modern GIS system and an online community? [kuro5hin.org]

Updates to the wearable infrastructure …
This is a cool site, and a cool project. I am going to download and install this on my wearable and see how it goes. One other thing I found on this site was a good set of links, including a cool Wiki that has been created for wearables. Look for my contributions there also …

jAugment 2.3.182 (Unstable). A software infrastructure for wearable computers. [freshmeat.net]