October 08, 2007

How to use an old Mac laptop as a passable external display for a new one

This is a quick one. The executive summary:

New Mac + ScreenRecycler + IP over Firewire = Passable external display

(no need to give up your main machine's precious ethernet connection)

Why? Well, I've got an old Mac laptop, which isn't too much use by itself (a 1GHz PowerBook with 768MB of RAM, against a dual 2.1GHz MacBook with 3GB). But, it is an extra display. In fact, I've always thought it a shame that laptops and systems with permanent displays don't come with a video bypass, so that when the machine itself is past its useful lifetime, the display can still be used. This is a less power-efficient route to the same end.

How? First, purchase a license for ScreenRecycler, a tool that basically lets you use any machine you can run VNC on as an external display for a Mac (via a virtual display driver). It currently costs $25... which seems pretty decent for getting an extra display. But, because it uses the network to connect the two machines, you want that connection to be as fast as possible. While you can trivially use an Ethernet cable (Macs have, for a long time, done auto-crossover, so any Ethernet cable will do just fine), I want to keep my Ethernet port free, because Ethernet is precious where I go to school, and also more reliable.

How can you connect the two machines, assuming you don't have access to an appropriate ethernet switch? Firewire! If you're not using your firewire ports for anything else, just enable "Built-in Firewire" in your Network preferences pane of each Mac. In fact, if you then turn on Internet connection sharing in the Sharing preference pane (share your Built-in ethernet to your Built-in Firewire, naturally), the "screen" machine will even be able to connect to the network. I use this so that I can run Foldershare on both machines, and have a constant backup of several folders on my main machine.

The final step is to plug the firewire cable in between the machines, install ScreenRecycler, and start up the screen sharing. I used JollysFastVNC, which ScreenRecycler includes, set up with the name of my primary machine and with auto-reconnect and fullscreen turned on... I just have to wake the extra machine from sleep, and it returns to being the 'extra' screen as soon as it has found the network connection. I also use a tool called MarcoPolo to detect when I'm sitting in this configuration; it launches ScreenRecycler and switches the Network Location to the one where I have this network setup enabled.

Simple enough? No screenshots, 'cause I figure this is still a fairly advanced tutorial. If you have trouble following my steps, post a comment, and I'll beef things up.


December 14, 2006

Talk about chilling effects!

From a FAQ on a new Internet-hosted service I've been tinkering with:

Q34. Why can't I open or collaborate with music and video files?
A. There are many copyright-related issues when it comes to music/ video files, and we felt that allowing users to collaborate on, or share, such files would perhaps cause copyright or DRM (Digital Rights Management) violations if those users were not the actual copyright holders of the content they were sharing/ collaborating on. This entire topic is a legal minefield, and at this point we're sidestepping the entire mess by simply disallowing opening/ collaboration/ sharing of music and video files. After all, you wouldn't want mysterious organizations like the RIAA, MPAA and MAFIAA after you, would you? While we may allow this at some time in the future, for now, we're sorry, but you cannot open, share or collaborate on music/ video files.

Wow. "You wouldn't want mysterious organizations..." This is the cutting edge: new services are self-censoring. Can someone come up with a catchy phrase for this mess? The War On Culture? The War On Getting-Things-Done? The War On Thought-Crime? Oh, I'm getting ahead of myself...


December 13, 2006

The case of the missing contact data.....

After frustration over the years about their earlier spam-your-contacts policies, I finally warmed up to Plaxo earlier this year. They had finally started making a Mac sync product that worked directly with the built-in Address Book application. I'd been using Address Book for the last several years, so I was excited to be able to take advantage of Plaxo's sync/automatic address book updating/etc. features with my existing contact store.

All was fine for several months. I updated several contacts with more current information after Plaxo discovered those contacts had Plaxo accounts - this is one of their nicer features. A few weeks ago I noticed that scanr had a business card feature, and that it could be connected to Plaxo. Neat! Snap a phone-cam photo (or, in my case, use your digital cam to take the shots) of your business cards, then send the pics to scanr. Scanr does the recognition and sends the updates to Plaxo, which actually syncs changes into existing contacts, as well as adding new entries for new contacts. Wonderful! The Plaxo<->Mac sync means that, beyond sending the photo by e-mail, you don't have to do anything to get business cards into your Address Book. Handy, fast, surprisingly accurate.

Well, except when Plaxo becomes a liability. Yesterday I was sending an e-mail when my mail client froze up on contact lookup. Weird. So, I open up Address Book to see what's going on. Huh, it's blank. No contacts. Not the usual 601 contacts, 0. That's odd, I didn't remember deleting all my contacts. I figured I could just go to Plaxo and re-sync back all my contacts... until, to my shock, I find that Plaxo is already empty, too. That's not good.

Longer story short: Plaxo seems to have actually been at fault. When I restore my Address Book backup from a few weeks back, it lives for a few minutes, until the mad-Plaxo-demon takes over and removes all my contacts again. Some kind of friendly sync you've got there, er, I mean, "sink".

When I contacted Plaxo support, they were nice enough to tell me I had a ton of contacts in my "Trash". Which, of course, is only accessible from their Premium account. Huh. You delete my data, then charge me to get it back? Sounds like a bad data-ransom scheme.

Something's very wrong at Plaxo. I hope that they fix this problem, and start treating customers' data like the gold that it is - that which they must preserve, above all else, to satisfy their customers. I'll update this post if they offer me any sort of remedy. In the meantime, if you have Plaxo, I might suggest you turn off "Auto Sync", and make a backup of your data before you hit the manual sync button.


November 09, 2006

Going out on a limb - ways to reach me.

Well, since I constantly play with new VoIP and other services, people often complain that it's hard to reach me. To further complicate this, I'm going to publicly post a new set of contact info.

Phone: (650) 963-4822

Those of you who know me know that neither of those are contact info I've given out before. Why am I willing to post this publicly to my blog? Well, as a sort of experiment. The e-mail address is from, which has the nifty model of offering mail forwarding if:
1. The sender is pre-authorized -or-
2. The sender passes a captcha -or-
3. The sender pays to have the message delivered.

Neat, eh? Want to reach my inbox? Know me, prove you're a human (and willing to spend the time), or pay for the privilege of having me see your message.
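The three-way gate above is trivial to express in code. A hypothetical sketch, just to show the logic; the function name and flags are invented for illustration and aren't from the actual service:

```python
# Hypothetical sketch of the mail-forwarding gate described above;
# all names here are invented for illustration.

def should_deliver(pre_authorized: bool, passed_captcha: bool, paid: bool) -> bool:
    """Forward the message if the sender clears any one of the three gates."""
    return pre_authorized or passed_captcha or paid

# A stranger who won't prove humanity or pay gets filtered out;
# clearing any single gate is enough:
print(should_deliver(False, False, False))  # False
print(should_deliver(False, True, False))   # True
```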

The phone number is my GrandCentral number. They seem pretty well funded, so, let's hope this one will last for a while. It's local for my Bay Area friends. It actually does reach me anywhere, at least, with a far higher probability than my own experiments. And, if you start spam-calling me, they have at least some facilities to let me block the call.

I guess we'll see how well it works to publish working, but intermediated, contact info. And, for those of you who can't keep up with my contact info: give these methods a try if you want to find me.


September 21, 2006

Quick SocialText RSS from Bloglines tip

I've been playing with SocialText's publicly hosted solution lately. More on that later, but, first, a quick tip.

As with many on-the-curve companies, SocialText's products extensively expose web standards - RSS, SOAP, REST (in beta). I use Bloglines for reading lots of things, so subscribing to changes to a shared wiki seemed obvious. Except that SocialText workspaces are, by default, authenticated and password protected. Bloglines has no explanation of how to subscribe to authenticated feeds... worse, when I found an explanation of how to do it, it didn't actually work with my SocialText username and password - because my SocialText username is an e-mail address, containing its own, conflicting, @ sign.

The fix is to rewrite the e-mail address in URL-encoded (percent-encoded) form. If you wanted to subscribe to a SocialText RSS feed like, with e-mail address and password password, then follow these steps:

  1. Rewrite your e-mail address, replacing @ with %40. ie:
  2. Generate the new feed URL: http://{rewrittenemail}:{password}@{rest of original URL}. In this example that would be
  3. Use the newly generated URL to subscribe with Bloglines.
  4. ... profit?
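The rewrite in steps 1-2 is ordinary percent-encoding of the URL's username portion. A quick sketch of the same transformation in Python; the workspace URL below is a made-up placeholder, not a real SocialText feed:

```python
from urllib.parse import quote

def authed_feed_url(email: str, password: str, feed_url: str) -> str:
    """Embed credentials in a feed URL, percent-encoding the e-mail so its
    @ doesn't collide with the @ that separates user:pass from the host."""
    scheme, rest = feed_url.split("://", 1)
    return "%s://%s:%s@%s" % (scheme, quote(email, safe=""), quote(password, safe=""), rest)

# Placeholder workspace URL for illustration:
url = authed_feed_url("me@example.com", "password",
                      "http://www.socialtext.net/feed/workspace/recent-changes")
print(url)
# http://me%40example.com:password@www.socialtext.net/feed/workspace/recent-changes
```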

Now, you're subscribed to that RSS feed. Maybe someone will read this and generate the requisite bookmarklet to automate the rewrite/subscribe-to-Bloglines process. Anyone?

(P.S. Anyone know why Technorati refuses to update my blog, especially noticing the Technorati Tags?)


August 17, 2006

Fun with torrents, and Amazon S3

I've been watching Amazon's S3 service since it was first announced (the land grab is over, BTW - S3 announced a revision which allows you to use your own domain name in the hosting). I'm generally a fan of services priced pay-as-you-go, especially when they're done with good technology following best practices. S3 does all of these things. And it's about as cheap as reliable bandwidth, storage, and scale come these days, too.

But, since I don't have any Web 2.0 startup ideas, nor any large files to distribute, I haven't gotten to play around with S3 too heavily. That all changed this morning, when a new Nerd Vittles tutorial went up (check it out here). This was perfect: there was no torrent download yet, and there was a plea for help (in the form of a "downloads only available while the $ for bandwidth remains" plea). As the post covers VoIP with open source tools, virtual machines, and a chance to tinker with S3 - well, most who know me can guess I was salivating at the thought of putting all three together.

And, that's what I did. I downloaded the original file and uploaded it to a bucket I have on S3. Then, I copied the generated S3 .torrent to my own server, and told the Nerd Vittles folks where to find it. Meanwhile, I copied the original file up to my hosted server and started up a seed there as well (this saved me having to pay for an extra download from S3).

So, now I had Amazon's super-reliable service providing a seed and a tracker, and a bittorrent flashmob was forming. I could augment S3's seed with one of my own servers and whatever other bandwidth I had around, but I could also sit back and know that S3 would keep things alive, no matter what else I did.

S3 actually provides a pretty reliable backing store - it appears to provide about 75-100 Kb/s to each peer that shows up. This can add up pretty quickly, at least, if there are enough people in the mob to keep things moving, and you have (relatively) free other bandwidth to contribute. So, not content just to watch people transfer the torrent and slowly tick up my S3 bill (the entire project has cost a little over $1 so far today), I had to experiment with one more variable. Objects in S3 have an ACL associated with them. According to the docs, you have to make an object "public-read" for it to become available as a torrent source. This is true, if only because you can't get the initial .torrent created otherwise.

However, it occurred to me that S3's tracker might, possibly, continue running even if you changed the ACL so that the seeder had to drop out of the swarm. So, I changed the ACL on the original S3 object to remove "public-read", and waited. To check that things were still swarming along nicely, I even started a download on another machine. Happily, S3 is still playing the tracker role, but the S3 seeder has stopped racking up bandwidth. Since I'm still running another seed on my own machines, I can be sure that the swarm will stay healthy, but I can also pay Amazon only for the super-reliable tracker infrastructure, and the (modest) cost of storing an inert copy of the file.

Now, I just gotta' point a directional antenna at the newly-launched Google WiFi, which I can't currently get indoors, and bump up the seeding a bit more. That is, assuming Google wasn't savvy enough to limit bittorrent bandwidth on its WiFi. More on that if I get a chance to test it.

Some conclusions:

  1. Host the .torrent yourself, rather than using S3's url?torrent REST trick. This means you can continue to distribute the .torrent file even if you knock the S3 seeder out of the swarm.

  2. Hosting the .torrent also means that it's difficult (impossible?) for leechers to find the original URL on S3. Otherwise, if you link directly to the torrent URL on S3, savvy users can just use a normal HTTP download, and make you foot the bill for the entire transfer (S3's HTTP is very fast - this would be a tempting trick, depending on the state of the BT swarm).

  3. If you have other seeds, or trust that your swarm is well-enough established to keep running on its own, you can remove the "public-read" ACL and still have S3 host the tracker on its reliable infrastructure. There are several open requests with Amazon to provide tools to manage bittorrent usage. So far, this is the only tool that I know of that helps, and it's not exactly automatic.
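For reference, the ?torrent trick from point 1 is just a query string appended to the object's ordinary URL. A minimal sketch (the bucket and key names are made up; this builds the path-style URL S3 used at the time):

```python
def s3_urls(bucket: str, key: str) -> tuple:
    """Return the plain-HTTP URL and the auto-generated-.torrent URL for an
    S3 object (path-style addressing; bucket/key here are placeholders)."""
    base = "http://s3.amazonaws.com/%s/%s" % (bucket, key)
    return base, base + "?torrent"

plain, torrent = s3_urls("geekdom", "dist/nerd-vittles.img")
print(torrent)  # http://s3.amazonaws.com/geekdom/dist/nerd-vittles.img?torrent
```

If you link people only to a .torrent you host yourself (point 1), the plain URL never leaks, which is what keeps HTTP freeloaders off your S3 bill.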


August 03, 2006

Wifi, wifi, everywhere!

As many of you might know, I recently moved to Mountain View. That turns out to be good timing, since Google is in the midst of preparing Google Wifi for launch. Actually, I've had a chance to try it. So far so good, although, it really highlights the difference in Wifi reception between my old 800Mhz iBook and my 12" Powerbook...

Meanwhile, I caught a mention yesterday on GigaOm of a very local startup building mesh networking hardware. I've always found the idea of self-meshing networks intriguing, both from a "kick the ISP out of the loop" long-term perspective, and for various short term reasons ("why can't I just add more APs to make my network coverage better" being prime). Looking them up, it turns out they're close enough to where I just moved that there's even a chance of hitting them with a high-gain antenna. So, I contacted them - and, well, there's a chance we might try to set up a mesh to cover the distance. We'll see. It might turn out to be handy to have a sibling that lives about half way in between, too. Meraki's $50 meshing wifi router does have impressive specs, especially for its price. I wonder how long it would run on one of my old 12v electric scooter batteries?


May 25, 2006

The coming VoIP insta-pricewar

A service provider I had mostly shelved for my VoIP experiments must have noticed that it was getting shelved a lot. Figures - that's what happens when your rates are higher than your competition, and you don't offer any other features... you stop getting business.

However, these folks have innovated, and come back with an interesting solution. They've published an API (and included example scripts to drop-in to off-the-shelf open source tools, in this case, Asterisk) that let you query their prices on a per-call basis. This, plus a new set of rates that are much cheaper in at least some cases, put them back on my radar.

I had to stop and think about the implications, though - they're impressive. Of course, it's only a matter of time until other providers start offering a similar instant price lookup service. Then, we'll really have an instant pricing market for VoIP services.... not getting enough traffic on the day? Well, drop your rates another fraction of a percent - thus getting more calls from more customers, but slightly eroding your profit margin. Continue this until whoever has the volume leverage beats out the little guys. Meanwhile, collect marketing stats as to which calling destinations you should be negotiating better rates for, given the query stream.
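The routing decision this enables is simple: before each call, ask every provider for its rate and send the call to the cheapest. A hypothetical sketch of that least-cost-routing loop; the provider names, prefixes, and rates below are invented, and a real integration would query each provider's actual API instead of a local table:

```python
def cheapest_route(destination, rate_tables):
    """Pick the provider quoting the lowest per-minute rate for a destination.
    rate_tables maps provider -> {dial prefix: rate}; we match the longest
    prefix, as least-cost-routing engines conventionally do."""
    best = None
    for provider, table in rate_tables.items():
        matches = [p for p in table if destination.startswith(p)]
        if not matches:
            continue
        rate = table[max(matches, key=len)]  # longest-prefix match wins
        if best is None or rate < best[1]:
            best = (provider, rate)
    return best

# Invented example rate tables (dollars per minute):
rates = {
    "provider_a": {"1": 0.020, "1650": 0.011},
    "provider_b": {"1": 0.015},
}
print(cheapest_route("16505551234", rates))  # ('provider_a', 0.011)
```

Run per-call, this is exactly the spot market described above: the instant one provider shaves a fraction off its rate, it starts winning the traffic.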

This makes for a lovely downward spiral - perfect market efficiency, on the spot market for VoIP call termination services. At least, until someone finally innovates, creating a new type of service, and shuts off the downward spiral. Here's hoping. I've mostly stopped with my aggressive VoIP experimenting, because, as a consumer, there aren't a lot of new building blocks with which to build new things. Right now, it's all just telephony replacement, and, well, I've replaced my telephone services with cheaper VoIP ones.

In case anyone wants to play with the current leader in VoIP rate erosion - check out VoicePulse connect.


May 02, 2006

Google Calendars published to a website....

I've previously been using iCal + DAV + phpicalendar to publish my schedule to the web. I recently decided to try out Google Calendar. Here're some comments:

1) There's no way to remove the "default" Google Calendar. Doing so deletes your Gcal account.
2) There's no web view exposed of your calendars, even if you explicitly share them - a user receiving the share has to use some other tool (Gcal or otherwise) to view a shared calendar.
3) There are still a lot of "slow" spots in the UI.

To fix #2, I brewed up a combination of my old phpicalendar view and a script to suck data back from Google Calendar. How? This script:

#!/bin/sh
BASEPATH=/path/to/gcal-sync              # placeholder paths - adjust to your setup
CALPATH=/path/to/phpicalendar/calendars
for NEXT in `cat $BASEPATH/.sources`; do
  NAME=${CALPATH}/`echo $NEXT | cut -d, -f 1`
  URL=`echo $NEXT | cut -d, -f 2`
  curl -L -f -o $NAME.ics -R -z $NAME.ics "$URL"
done

Then, of course, fix up the PATH lines as appropriate, create BASEPATH/.sources to contain a series of lines of the form "calendarname,Gcal ICS link", then set up a cron to run things. Of course, my phpicalendar install pre-existed, so that is left as an exercise to the reader.

Finally, because of a bug in Gcal, I had to comment out the following two lines from functions/ical_parser.php

$summary ='**PRIVATE**';
$description ='**PRIVATE**';

(just add // to the beginning of each line). These are lines 186 and 187 in the version of phpicalendar I have. Presumably, Google will eventually fix the way it exports events, so that this hack-around isn't necessary.

In the end, a handy web-view of my Google Calendar content.


March 21, 2006

FON arrives at the Geekdom compound....

As with most things wireless and new, I jumped on FON when I first saw it. When they offered a discounted Linksys router, I signed up.

That was February 8th. On the 20th, I got the opportunity to actually order the thing. It shipped on March 9th, and finally arrived yesterday, March 20th. Alas, they're a startup, with minimal US operations.

Anyway, the out of box experience was pretty good. I plugged it in as the 1-page flier suggested to do, and registered the device using my FON account. Immediately, it started offering FON-authenticated wireless service. Not bad.

However, it didn't come with any documentation on how to make other changes or customizations. I had to figure out that you had to use the LAN ports, and that would give you a shot at tweaking the internals of the beast. FON routers, currently, are DD-WRT-based (itself based on OpenWRT). In other words, they're a customized open Linux "firmware", a replacement for the stock Linksys behaviors.

FON has the basic experience right. I can share my internet using the device. It forces anyone who's using it to have a FON account, or, I can give out local accounts. In any case, just as with most for-pay wireless in coffee shops and airports and the like, you have to offer some sort of credential to it before you can connect. In the long run, they have a plan for revenue sharing, and/or bandwidth sharing, to encourage people to deploy FON-enabled stuff more aggressively.

It does have a few rough spots. Though the shipped router works out of the box, it had no process (or documentation) for local user usage. Eventually, a user will only be able to log in a limited number of times, meaning that just buying a FON router for home use can get complicated - do all of the other members of your household have to pay to use your connection?

Also, there are still rough edges. At the moment, there are no rate-limiting controls, so there's no way to "share" your connection but not "give up total control, and be at the mercy of the abusive downloader next door". The Linksys box can be convinced to do this, so I'm sure it will show up as a feature as the beta process rolls along.

Worse, you currently seem to have to re-login each time you reconnect to the wireless. For those of us with laptops that sleep/resume quickly (cough Apple cough), this can be a hassle, since we're used to just closing the lid on our laptop for a few minutes between tasks. Having services disconnect and forcing a re-login cycle on any user, local or guest, isn't really necessary.... unless they're a paying user and just went over their time-unit rollover and have to pay again.

I'd also really like to see FON push into the mesh/overlay network space. If there's continuous FON connectivity down my block, why should my device have to be smart about roaming/reconnecting as it goes? Ideally, there'd be some approximation of a unified network across different pools of connectivity. This will become more of an issue as there are more places where there is continuous connectivity, and people start doing things like roaming around with their Wifi VoIP phones. And that day, for the Bay Area, might be coming sometime this year...

Update: here's a coverage map for the FON network in my area. Quite a few dots. Oddly, the dot for my access point is about a mile from where it should be.


March 15, 2006

S3, and the land grab in progress....

First of all, kudos to Amazon, whose new S3 looks to be an incredible product. Having tinkered a bit lately with the likes of the Nutch/Hadoop project, distributed, ultra-reliable storage has been on my mind.

Now, I can finally actually store all of my digital photos online. In a way I control. But also affordably. Alright, it's not quite as cheap as Streamload is working out to be, but the business model is clearly different (and Streamload's been having some responsiveness problems of late).

I'm eager to see what kinds of tools, both real web service and modifications to existing open source tools, come along. Finally, an uber-cheap LAMP host can provide all you really need to have all of the stuff you want at your fingertips - if you're willing to pay the monthly S3 bill that goes along with your current data usage. How many days until someone builds a version of the fine Gallery photo tool that stores your photos directly into S3?

The various "great things" S3 enables have been well covered on other blogs by now. Highlights, in my opinion, are guilt-free storage scaling, trivial https and (instant) torrent access for anyone, as well as the extremely low cost of entry. I didn't have an Amazon web service account until today, but it took literally 1 minute to add such functions to my Amazon account. I didn't even have to pull out my credit card, since Amazon already had it on file.

Finally, I have to (guiltily) comment on the land-grab in progress. S3 uses "buckets" as the top-level naming convention exposed to an account-holder. You declare that you want a bucket, and, if you're the first (or, presumably, it's currently unclaimed) you get it. The "bucket" is actually the first level of depth on the URL, ie, the "bucketname" in a url like this: You can "only" have 100 on an account, but it costs nothing to grab one. I actually thought I'd screwed up my account credentials earlier today, because I couldn't create a bucket called "test" when I first started playing around - I guess I wasn't the first who wanted to test things out today. So, now that I figured that out, I went on a land grab - I now have the buckets "dist", "gadgetguy", "geekdom", "mirror", "source", and "torrent", among a few others. Not that there's any real reason to covet any of those buckets, but, at least for those that relate to domains I have, I won't have to fight with anyone.


March 09, 2006

Writely, now with Google!

I first heard about Writely in late September last year... it offered a pretty nice web-based document editing interface. It allowed semi-live collaboration. And it exported to commonly used document formats. Nice!

Of course, I don't write many collaborative documents, and those I do are usually restricted to living behind a firewall, so I didn't get far beyond testing out the live-collaboration features.

Nonetheless, Google seems to have seen how good the tool was, and decided to add it to their portfolio. I expect Google Office (or the equivalent) can't be that far off - there's a Calendar in the works, Gmail's pretty solid, and Google Base seems to be up to many hosted DB tasks.

Anyway, I went back to re-evaluate Writely, and see how they were keeping up under the renewed load caused by all the press. So far, so good... this blog post is even coming straight from a Writely document. I imagine they'll re-open for general signups soon enough.

January 27, 2006

Looks like my longbet idea has support...

Way back in 2003, I postulated that it would eventually be possible to cheaply get one's entire genome sequence. Looks like XPrize, who already succeeded in encouraging a low-cost civilian space race, is now considering pushing for the same thing to happen in the gene sequencing field. Hopefully, someone will succeed by 2013, so I can prove Joy wrong...


December 18, 2005

How to turn an empty Altoids tin into a Christmas present:

As a Christmas gift this year, I decided to make Joy's sister a cheap VoIP adapter. She's the perfect target for one: broadband, no land-line, but too limited a cell plan to be able to talk much. Add to that, she's got a newborn, and a big family who probably wants to chat... well, wouldn't it be nice if she could plug in a phone and pay zilch to talk to family!

So, I set out to build a semi-consumer-grade device. My last model, which I eventually ported to a 2-inch-square card, is still caseless and kind of ugly. This build would need to be insulated, contained in a case, and, perhaps, attractive.

Here's the result:
Scene setting perspective

And here's the gag (though working) shot:
Action Shot!

(the whole set of documentary evidence can be found here)

It's another instance of this design, actually with much lower tolerances on the components, and a few more hacks, but it seems to work. Sorry, no build photos... but construction involved purple spray paint, an Altoids tin, 2 faux "Discover" cards, and about $6 (liberally) worth of parts from Radio Shack and Fry's. Unlike the last build, this one involves all acquired components - no old junk from my closet in this Christmas gift. Well, except the "Discover" cards, at least. Construction involved a soldering iron, a pocket knife, a Dremel tool, and way too many hours of pondering how to lay things out to fit them into the tin.

Sadly, it is a Christmas present, so I won't be able to gloat too much showing it to friends, since it will be halfway around the country in a few days. Hopefully, the photo set will serve as enough of a record.


December 05, 2005

AdSense accuracy?

Ah, now, there's the rub. Last week, I explained how I was now tracking Google Adsense clicks using Google Analytics. Now, the method used is a little less than reliable, but it appears to record clicks accurately. Since then, I've been enjoying better Adsense stats than the Adsense site shows - Analytics lets you see the sequence of page views that led to each "goal" accomplishment. As well, it reports things like which keyword hits on search engines led to what percentage of ad clicks. Also, Analytics tracks user sessions, so I can see how many clicks per browsing session, vs. exposures and page views, which is usually all that's available.

My blog is fairly low traffic, and my traffic is bimodal - readers: people truly browsing the web or even specifically reading my blog, and searchers: people who are actively searching the web and encounter my page for one reason or another. I gather, based on the early Analytics reports, that my suspicions about these two traffic classes are both true: readers don't tend to click links, and web searchers have a pretty high tendency to do so.

Anyway, with such small numbers, it's easy to notice another problem. Remember, the Adsense+Analytics hack from last week should only report the same number of clicks as, or fewer than, what actually happens, since it just logs based on ad-text clicks. It also does so without modifying the code (which might fall afoul of the Adsense terms).

Any AdSense ad code, search box code, or referral code must be pasted directly into Web pages without modification. AdSense participants are not allowed to alter any portion of the ad code or change the layout, behavior, targeting, or delivery of ads for any reason.

At least, I take this to be true. The intermediary code doesn't change any of the pasted Adsense code, nor change the behavior of the ads - it just adds additional telemetry to the system. Perhaps this is why this clause sounds so broad: they want to prevent people from being able to notice what I have: Adsense-counted clicks are quite a bit lower than recorded clicks. Since last Tuesday, Adsense has reported the following per-day clicks: 7,0,1,0,0,2, while Analytics has recorded 2,4,5,4,2,5.

I know Google discounts certain clicks, to prevent automated clicking from happening, and folks getting credit for clicking on their own links. Thing is, I never click on my own links. So, since there's no oversight, how am I to prevent Google from essentially never paying me for 1/2 of my ad clicks?
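Summing the per-day figures above makes the gap concrete:

```python
adsense   = [7, 0, 1, 0, 0, 2]  # clicks per day, as AdSense reports them
analytics = [2, 4, 5, 4, 2, 5]  # clicks per day, as Analytics recorded them

print(sum(adsense), sum(analytics))  # 10 22
# AdSense credits under half the clicks Analytics saw over the same days:
print(round(sum(adsense) / sum(analytics), 2))  # 0.45
```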

I'll be eager to track how this develops. I wonder if there's something wrong in my data collection, or if Google really thinks that such a high percentage of my scarce links are bogus. In which case, Google or its advertisers are probably getting a great deal on small-time blogs like mine which attract visitors which click on ads. I'll definitely post again once there's a larger pool of data to compare against.


November 30, 2005

AdSense + Analytics

After Google Analytics came out, I've been having fun playing with the, admittedly sparse for my site, statistics it generates. One thing that's missing, though, is the ability to see how many Google Adsense clicks are being generated. This is actually what Analytics is ideal for - it's oriented more toward marketing than toward raw stats (IMO), and, the only even remotely marketing related function of this blog is occasionally distracting visitors to ads for the joy of sponsoring most of my VoIP experiments.

Searching around the web, I found that the problem had already been solved. This post at SEO Book describes how, except that it needs a tweak (not to mention that the code is gross, needing to track mouse locations because of a bug in Mozilla-based browsers). Unfortunately, as written, it will record an ad click whenever any iframe content click causes a page change. I added the one line that makes it only do so when a Google iframe is clicked. My updated astrack.js is here.


November 03, 2005

Pentax Optio60 review

In an extremely belated birthday gesture, I bought Joy a new camera. I'd picked it out a while ago, having spotted it on some gadget blog when it was initially announced this summer. The camera? The Pentax Optio 60.

Now, at first, the specs on this thing sound great, especially for a ~$200 camera (I actually got it for $180 including shipping). It's 6.0 megapixels, 3x optical zoom, runs on AA batteries and takes SD, yet it's still small enough to be fairly pocketable. It even has a little bit of built-in memory, so you can take a couple (well, 3, at high quality, 6mp) photos with no memory card in the beast. Add to that the aperture and shutter override modes, rare on the low-end cameras I've come across so far, and it sounds like a decent little package.

Unfortunately, as I suspected, you get what you pay for. The camera produces pictures with tons of color shimmer... you need the extra megapixels just so you can throw them away. It also seems to have a lot of trouble focusing in lower-light conditions... even with the half-press option, it takes a half dozen tries or more to get an in-focus picture in the low evening light of our apartment. And that's for flash-assisted photos, so I'm not just complaining about the usual mistake of assuming you can take a good shot with too little light.

Verdict: you get what you pay for. This camera will probably be fine for outdoor photography, and/or first-time digital camera users. For anyone whose existing camera already takes good pictures, or who cares about reliability/quality, there's nothing to see here.


October 13, 2005

Profit assurance, modern day

I recently decided that, given my various VoIP experiments and the resulting effect on the number of cell minutes I was using, it was high time I ditched the expensive, though gadget-friendly, cell-phone service plan I'd been living with.

My solution was to find a decent replacement. At the rate I was paying, I'll come out multiple hundreds of dollars ahead on the year using a phone like this for my mobile cavorting. And, with fun automated dialing hacks, who needs to give anyone their cellphone number anymore anyway?

Unfortunately, pay-as-you-go phones come at a price... they're sold without credit checks, so there tends to be a lot more, er, scrutiny in the transactions associated with them. When I went to T-Mo's website tonight to put my first recharge on the phone's minutes, I wasn't expecting nearly the process that followed.

  1. Fill out form on website. Technically, anyone who wants to can buy me cell minutes, if they know my cell number. Of course, you'd have to go through the rest of this process, too.
  2. As a matter of course when filling out the online request, you, as payer (not as recipient of minutes), are asked for a number you'll be reachable at during the next hour, for “fulfillment” purposes. I assumed this meant a number they could use if the destination phone number for the credit was wrong. Not so...
  3. After waiting almost 2 hours, and wondering where my $100 had just gone, I called the number provided in the “confirmation” page from the website:
    1. Identify self.
    2. Explain what has happened (or rather, hasn't).
    3. Get put on hold while the friendly representative checks to see if he can find the “representative” who was handling my order.
    4. “That representative is busy, I'm going to help you”
    5. Round one of personal credit questions. Check address. Check partial SSN (mind you, I've never given T-Mo, or the web order form, any of this information - so it seems a little odd that they can even corroborate it). At this point, I ask who I'm talking to, since I don't normally start handing out SSN info without knowing why. The guy sounds totally understanding, and explains that he works for a company that does these kinds of checks for several other companies in the business... I think he said “Besta”, but I'm having a hard time pulling up a Google reference. At the very least, they handle Cingular and T-Mobile, though I think he mentioned others as well.
    6. After providing the basics, I'm put on hold again. In another minute or two, the guy comes back. This time, he's asking questions that could only have been pulled from my credit record... only, in a very interesting format. One of the questions identified where I had had a previous address. The question was “Answer yes or no to the following question”, followed by the names of several counties (all in the general vicinity of previous addresses of mine, scattered around two of the states I've lived in). After passing that question, I was asked an age-range question regarding someone I'm closely related to... same format: “24-31”, “31-45”, etc. Pick the right answer. Not hard, and amusing for the way they build in the ability for people to answer correctly without having to remember or know the exact details anymore. Contrast this to Safeway, which is always asking me my telephone number. Like I remember exactly which telephone number I've ever given to Safeway?
    7. After all of that, my payment was processed immediately, and I was assured that future transactions using the same card would go through without delay.

Now, I understand this process. Companies are always trying to avoid getting hit for a pile of services on a stolen credit card, since they generally eat much of the cost of services rendered in such situations. And I did purchase the biggest and baddest unit of currency - of course, also the one with the best rate, and the easiest unit of retail for a would-be thief to negotiate without having to constantly go re-up the plan.

Was it necessary? Perhaps, in the long run, this is the right thing to do. But, I gave them the same personal information, and credit card, when I bought the phone at the local (company) retail store. It all got put into the computer (anyone thinking prepaid phones are anonymous is likely to be sorely mistaken). I was buying currency for my own phone. Seems like they could short-circuit this process. The ~13 minutes of operator time involved costs them money, too.

Amusing aside: on prepaid plans, you can rarely get into voicemail and other services without using up (relatively expensive) minutes. VoIP to the rescue - dialing the voicemail number directly won't allow access, even for leaving a message. But calling the voicemail number with caller ID set to the number of the cell phone - well, that works much better. And it only costs me 1.1 cents/minute, vs. 10 cents or more. Woohoo for phone hacking.

September 20, 2005

Peerflix, first takes

So, I signed up with this service called Peerflix in late July. I tend to tinker with new web services, so this should be no surprise.

Here's a basic description of Peerflix:

Mediated DVD-by mail exchange system. You send off movies you have, get credits, use those to acquire other movies. They charge a fee (currently $0.99) for each movie you receive. The sender pays postage.

Now, despite a name clearly meant to sound like Netflix, and a usage model that could easily have worked a lot like Netflix, they apparently originally thought that users would just swap the movies they were done with for ones they wanted, and be done with it. How odd. I'd think it would be obvious that some folks would seed the system with a few DVDs they could manage to trade in, and then trade continuously after that. I guess they finally figured it out, though.

Since July, I've sent off about 8 discs of my own, all movies I didn't really want to own anymore. In exchange, I've gotten about 5 movies back (one more is still in the mail). That's fewer, but the movies I sent were really lousy movies, so it's not such a big surprise.

Bottom line: if you've got easy access to a printer, this service is pretty easy to use, and it's worth checking out... especially if you've got any movies sitting around that you don't want.... you can essentially treat it as Netflix, but without having to pay an ongoing subscription. Instead, you pay $1/trade, whenever you do them, and they don't hassle you about sending discs. They do reward referrals, though, so ask me for an invite. As a bonus, they make it a lot easier to trade movies with folks who are your “friends”.

They dropped the beta moniker from their site today, but they're still doing promotions to get new signups... the first trade is free, they give you your first credit (equal to about 1/2 of a typical DVD, 1/3rd of a new release), and, once you do your first trade and ante up, they'll send you a DVD (from a finite list they supply, but some of which are decent) for free. At the very least, you can trade that one, and end up with enough pooled credit to trade for a new-release disc right off the bat.

Update: Seems Slashdot actually covered a topic quickly. Here's the slashdot story, rife with misinformation about the doctrine of first sale, and the cost of Peerflix vs. Netflix.


August 30, 2005

MT 3.2 with FastCGI under Apache

Well, Brad Choate explained how to do it for LightTPD, but for those of us with boring old LAMP installs, here's a brief howto for getting MT 3.2 working in FastCGI mode:

  • First, make sure you've got FastCGI support in apache. On my Debian machine, that's the libapache-mod-fastcgi package, or libapache2-mod-fastcgi for apache2 (make sure you have non-free in your packages list).
  • Likewise, you'll want CGI::Fast for Perl... get it via CPAN, or, again, the Debian package is libcgi-fast-perl
  • Then, you'll need to add the support for mod-fastcgi to whatever directory you run MT from. I added it explicitly, but it appears that the Debian FastCGI package takes care of this already. Of course, you'll also need Options +ExecCGI for that directory, but, you've already done that, 'cause you've got a working MT install, right? If you want to add it explicitly, use "AddHandler fastcgi-script .fcgi", either in your .htaccess, or in the appropriate slot in your httpd.conf
  • Now, modify Brad's example code a little. In particular, I had to add a line before the section of “use” lines

    use lib "lib";

    (this puts the mt/lib dir into the Perl library path) and a couple of lines that you'll have to edit to match your config: (I put these after the “use” lines, but I doubt that's necessary)

$ENV{"PERL5LIB"} = "/path/to/mt/lib";
$ENV{"MTHOME"} = "/path/to/mt";
$ENV{"MTCONFIG"} = "/path/to/mt/mt.cfg";

Name all of that mt.fcgi, in the same directory as your mt.cgi is now. I made hardlinks to the other names:

ln mt.fcgi mt-comments.fcgi  
ln mt.fcgi mt-tb.fcgi
ln mt.fcgi mt-search.fcgi
ln mt.fcgi mt-view.fcgi  
ln mt.fcgi mt-atom.fcgi

Then, I just use mt.fcgi instead of mt.cgi in my web requests. Nice!
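Why do the hardlinks work? Because all of the linked names run the same dispatcher, which picks the right MT application class based on the name it was invoked under. A rough sketch of that idea (from memory - the class names and constructor argument are assumptions about MT 3.x internals, not a copy of Brad's code):

```perl
#!/usr/bin/perl -w
# Sketch only: route one FastCGI script to the right MT app by script name.
use strict;
use lib "lib";    # put mt/lib on the library path, as described above

use CGI::Fast;

# Assumed MT 3.x application classes -- check lib/MT/App/ in your install.
my %apps = (
    'mt'          => 'MT::App::CMS',
    'mt-comments' => 'MT::App::Comments',
    'mt-tb'       => 'MT::App::Trackback',
    'mt-search'   => 'MT::App::Search',
    'mt-view'     => 'MT::App::Viewer',
    'mt-atom'     => 'MT::AtomServer',
);

while ( my $q = CGI::Fast->new ) {
    # $0 is whichever hardlinked name Apache executed (mt.fcgi, mt-tb.fcgi, ...)
    my ($name) = $0 =~ m{([^/]+)\.fcgi$};
    my $class  = $apps{$name} or die "unknown script name: $0";
    eval "require $class; 1" or die $@;
    $class->new( CGIObject => $q )->run;
}
```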

So far, I haven't quite figured out how to get everything using the .fcgi interface... perhaps it's a problem with the PHP dynamic templates, but, after rebuilding my main page, all of the links that would've used .cgi now point to .fcgi. However, all of my single-entry archive pages still point to .cgi.

Anyway, there ya' go. Having problems? Comment below, maybe I can help.


How to fix everything to use the .fcgi's is explained by Brad in the comments. Also, for those of us with weak LAMP installs and low-traffic sites, some tuning suggestions: add something like the following to your Apache config to lower the default limits (the file is /etc/apache/conf.d/fastcgi.conf on Debian):

FastCGIConfig -autoUpdate -idle-timeout 120 -killInterval 3600 -maxClassProcesses 3 -maxProcesses 15

You'll need to raise the idle timeout unless you have a super-quick machine. I was using the 125-row view for comments and trackbacks to clear out old ones, and it was regularly timing out - taking more than the default 30 seconds to rebuild. This can probably lead to bad behavior beyond just on-screen error messages.

Raise the kill interval to reduce how often processes get killed. Why pay the overhead of restarting FastCGI processes on a low-volume site? You'd think you'd be able to specify a kill interval in terms of jobs served, but it's time-based for whatever reason.

The last two numbers specify how many copies of a given script are kept around, and the total kept site-wide. It was silly to have 10 copies of mt.fcgi loaded when I'm the only one who uses the admin interface, and why would I want 10 copies of the comment script hanging around, either? The virtual memory impact of running all those MT processes was getting too large.

I'm sure there's some way to use the same trick that Brad outlines for LightTPD to use a single instance to handle all the requests. Probably some combination of SetHandler and some other Apache directives. I'll post an update if I figure it out, or anyone tells me...


August 25, 2005

Gmail From: now, finally, configurable...

Gmail has now added the ability to totally change your return address... before, you could change the “Reply-To”, meaning that anyone hitting “reply” on a message you sent via Gmail would have their reply go where you directed it. But it still left the From: as your Gmail account, so friends who copy that kind of thing into their address books would still get your Gmail address, rather than the address you masquerade as (perhaps because you run your own domain, as I do).

So, bottom line: now you can set up Gmail so that any address you can receive e-mail at can be your return address. Just go to “Settings”, “Accounts”, add a new address, follow the confirmation process, then go back into Settings/Accounts and “make default”. You'll also be able to choose which of your identities to send a message as, right from the compose window.

Since Gmail doesn't brand your outgoing messages, you can now effectively use Gmail as a mail client. All you need is an account capable of mail forwarding. The only traces of Gmail on your sent messages are the headers - the originating mail servers and the DomainKeys signature they still stick on. Not bad.


August 19, 2005

TiddlyWiki, reloaded

After my previous, less happy experience trying to get TiddlyWiki to be useful as a lightweight publishing platform, I've finally gotten a system together that's a little less reprehensible to use. In particular, it uses pytw, and, well, I can publish to it live. Plus, I finally managed to badger it into running under mod_python, a system I had no experience with.

So, while TiddlyWiki still has some work to do to be useful, I will probably occasionally link off into its space for reference. Here's hoping I can manage to keep from breaking the install.

Also: Anyone want to help Markdown-ify TiddlyWiki? There's a partial solution waiting around to be brought to bear on the problem...

Oh, right, the link:


August 09, 2005

Notes from tonight's Web 2.0 BayCHI talk

Here are my notes (yes, I'm experimenting with TiddlyWiki... not quite there, but fun to tinker with).

Update: Use this notes link instead.


April 25, 2005

Gmail spam joke?

So, not sure if this has caught up to everyone yet, but Gmail now supports mini-RSS headlines at the top of message lists. I got this sometime late last week.

But, it wasn't until this morning that I noticed that the Spam and Trash category both seem to have their own, highly entertaining specialized feeds. In particular, Spam seems to be a search of some kind, perhaps against Spam recipes. Here's an example:

Trash does something similar, and keeps coming up with tidbits about recycling.

April 15, 2005

Many scooters out at lunch today...

I actually saw about 2.5x as many scooters today as I took pictures of. The others were a variety. One was an Aprilia, I think. There was also an old beat-up Yamaha I regularly see on California Ave.

Anyway, here're a few pics, because, well, I have them. The first one is of my bike and a friend's, parked at PARC. The second is an all-electric bike that Joy has been considering, although it has pretty poor range characteristics (“45 miles”, but those numbers are always exaggerated). No scale shots, but it's a little bit smaller than the Piaggio, probably about the size of the Metropolitan (which is too small for me to sit on comfortably).

(Photo captions: spotted at PARC's motorcycle parking; spotted along California Ave in Palo Alto.)

I know my scooter photography leaves something to be desired. I'll try to get better at it...

April 13, 2005

Bubbler: rapid blogging, early, many bugs

So, Bubbler seems to be a client-only blog system. All in all, it's not a horrible thing, though it's very obvious that it's very young. See my test page for a self-documenting idea of how many bugs there are in my first 10-minute session.

Those out there who use Ecto will find it similar, but more all-encompassing, and very, very fast. The latest Ecto takes way too long to update its list of existing blog entries, for instance. Bubbler has no such problem, although at the cost of being, so far, entirely proprietary. I can only hope they'll support a few of the standard posting interfaces at some point, so that they can get access to other existing clients (some of which, like mobile-phone posting agents, they're not likely to want to reinvent, and probably wouldn't do well at, anyway).

Where did Bubbler come from? I found it via a Google ad running on this very site... however, when I Google around for it, I find virtually nothing. Anyone got a story on this thing? It's just a bit too slick to be someone's side project... heck, the client even auto-searches your local network when you first launch, trying to find a local Bubbler server, if you let it. Whoever's building it is thinking pretty big. And maybe not doing such a bad job. As Peter Norvig said at BayCHI last night: it's better not to make it too easy for people to post to blogs. But this could settle in nicely in the blog/wiki/collaboration space, especially for organizations that are somewhat geographically distributed, but not so stuffy as to make it hard for people to communicate using new publishing tools.

April 12, 2005

BayCHI: Search Engine Technologists Panel

My notes from tonight's BayCHI are here. I can't say it wasn't a little interesting, but, on the other hand, it wasn't really enlightening.

March 01, 2005

Gmail tip: Short-term file storage

This might be obvious, but still worth pointing out....

Now that Gmail has drafts, it's very easy to stash a file “short-term” by merely attaching it to a message and saving it as a draft. Revisit the draft to fetch the file; discard it when you're done. This is somewhat easier than mailing it to yourself, wastes far fewer resources, and saves you having to dig out the “Trash” option buried in the menus, as well.


February 03, 2005

Dang you, Microsoft!

Much to my dismay, I've discovered yet another way that Microsoft has built pathetically unreliable stuff.

I currently own one machine running Windows (and this is part of why). It's an eMachines from about a year ago, running Windows XP Home. When I got it, I stuck a recently purchased 160gb drive into it, so that it had two drives. I use the 160gb drive for my pics, media staging, family video project work, etc.

Apparently, that drive has been failing for some time. How do I know this? I went to preview some photos last night, and Adobe Photoshop Album (er, Elements 3.0 or whatever it is now) complained it couldn't find a recent pic it needed to show. I went looking. The directory was missing, and its parent directory appeared empty, even though that parent directory should contain subdirectories holding all of my roughly 5400 pictures. Hrm.

So, I go about diagnosing the problem. The other subdirectories read fine, so Windows is just being screwy about finding files. Odd. Run a scandisk, get this:

Waah? That's not helpful. Really, just “Ok”, no “further info”, no “scream, then call for help”. That's from running scandisk, folks.

So, I figure, grab what I can, and start looking for missing bits from backups (I should at least have backups of all or nearly all of my photos.). I drag the offending directory, which probably does still hold about 3500 or so photos (none of which show up in Explorer) to an external network drive. That quickly results in this message:

Verrry helpful. Thank you so much! No files at all were copied before I got this message.

Much more digging, far beyond the level of the average XP Home user, uncovered that, starting on January 17th, my machine had been logging disk sector errors for that drive. Yes, 2 1/2 weeks ago it logged enough sequential disk errors that it reasonably should have alerted the user, yet I was told nothing. The average XP Home user has never even heard of the Event Log, so wouldn't even know to look there for an explanation.

Give me a break. This is pathetic error handling. There's no excuse for the first notification I get of this coming from application code unable to load files from a filesystem. And there's really no excuse for giving the user absolutely no help when apparently catastrophic problems occur. Heck, had I not gone looking, I might've just assumed that drive was ok, and continued to throw my precious bits into the great bitbucket in the sky. This needs to be fixed, more than we need a new graphics layer, or support for PCI Express, or any other new feature.

Microsoft, are you listening?

January 11, 2005

Talk: Google UI

I attended tonight's BayCHI talk @ PARC. The speaker was one of the original UI designers at Google, where she's still involved in the process. It was a good talk, with a lot of useful information on why Google's UI is still so clean, and how they've managed to maintain that competitive advantage over so much time and so many new features.

Click here to read my notes

August 04, 2004

Gmail tip: Add your comments to a thread

I happened upon this idea this morning, when trying to figure out how to leave a comment about why I was keeping a rather long weekly e-mail in my collection. It covered about 4 different topics, and I sure as heck wasn't going to re-read the whole thing every time... I needed a way of annotating it. Preferably in place, 'cause Gmail does a good job of collecting all other information in place.

Solution? I created a junk account, which goes exactly where you'd think it does (but, for anyone other than me, use something else, 'cause do you really trust me to always send those bits to their death?). Don't use a completely non-existent address, though, 'cause it'll result in your getting a bunch of bounce messages.

Now, I just reply to the message, send the reply off to die, and it gets tacked onto the conversation in Gmail. Refer to it as "Comments" in the address part of the message, and you'll be able to search for it easily with the search box ("comments", I mean). Even better, you can create a filter to apply a label... filter on the "To:" address, assign a suitable label, and you get an easy auto-collection of all the entries you've commented on.

Might seem pretty simple, but it's pretty useful to me.

July 07, 2004

Gmail documentation clarifies "y"

To follow up on previous posts about Gmail and how I think that the "y" key, aka "Archive", really means the same thing that "delete" did in my previous mail system - Gmail's help center seems to have clarified. From the Gmail Help Center post on what the keyboard shortcuts are, it now specifies:

Remove from current view
Automatically removes the message or conversation from your current view.
  • In Inbox View, 'y' means Archives
  • In Starred View, 'y' means Unstar
  • In Spam View, 'y' means Unmark as spam and move to Inbox
  • In Trash View, 'y' means move to Inbox
  • In Label View, 'y' means Remove the label

So, they admit that "y" means a different "archive" than the normal "archive" option from the Gmail drop down menu. Good to know that it unmarks Spam, though, I hadn't discovered that little gem...

June 04, 2004

Another reason Gmail rocks...

After having Gmail for a month and a half, I have to admit, the search functionality that it's billed for isn't something I use constantly. But, that's really not too surprising, either. Since Gmail makes organizing and dealing with e-mail overload so easy, by grouping threads and hiding redundant text in them, I rarely use search for what it always used to be used for - finding a message relevant to a discussion in progress.

But, the search really is nice. With the keyboard shortcuts, I'm at most a keystroke or two from doing a search at any point, and a search takes place very quickly. Where my old mail systems (Mail.app + IMAP, or the Eudora+IMAP that preceded it) were too slow to justify a search unless it was, you know, important, or really worth it, I actually do use search on Gmail casually. Come to think of it, maybe I'm under-accounting for how often I do searches. I wonder if the guys at Gmail can look up that kind of information for me?

On to my point... it was during such a casual search today that I noticed a neat feature of Gmail's search that I hadn't seen before - search windows update continuously. Well, more or less... they update as regularly as Gmail refreshes the screen to notify you of new mail, or to remove displayed notices (like "That message has been marked as Spam", or whatever).

Why is this cool? Well, since search results windows look and act just like mailboxes on other systems, you can basically get a keyword-based mailbox. Or enter a search for a message you're expecting to get soon, and see it pop up, all by itself, as soon as it comes in.

April 17, 2004

On car shopping and environmentalism...

Mostly a musing, but perhaps someone out there will read this and offer some commentary...

I greatly approve of the effort to reduce the emissions of cars, and to reduce their impact on the environment. And I agree that cars like the Prius would seem to go a long way that way... but there's something missing. What's the total effect of a Prius on the environment? It has to be built, right? Assuming most things are equal, the Prius and another mid-to-small sedan are going to cost about the same to the environment, right?

But wait: the Prius has an extra motor, more complicated electronics, and a huge battery array which, though perhaps very sturdy, will eventually need to be replaced. And, before you argue that everything is recyclable, realize that I'm talking about total effect on the environment - if it were perfectly recyclable, but required burning three tankers full of oil to build, there would still be a net loss for the environment.

If you're going to get a new car, I'm pretty sure there's some sort of environmental advantage to the Prius. But, as we've recently seen, even small devices like computers can have an effect on the environment; perhaps we should wear out our old cars first? What's the tradeoff in fossil fuels?

(Anyone have the link to the website a few of us saw a year or so ago that estimates how many acres of land the earth needs to support your individual lifestyle?)

April 14, 2004

Gmail thoughts...

Thanks to Jason Shellen's little Gmail article, which kwc pointed me to, I now have a Gmail account of my very own.

The requisite comments:

  • Yeah, it's pretty darn fast. And keyboard shortcuts, though they're taking a bit of getting used to, will really make this service roll. I kind of wish they'd adopt a reliable set of keys (perhaps [ and ]) that scrolls between items in the current view (currently: j,k scroll message lists, n,p items within a conversation)

  • Archive really means delete... it just doesn't stick things in the actual trash, in keeping with Google's approach of having your mail always searchable. If you archive something while reading from a "label" view, it gets its label stripped. This makes sense, now that I think of it, but took me by surprise. We'll see how that goes... I forwarded my last ~1 month of mail, both incoming and outgoing (how else to see conversations in their full form?), and I've already used up 9Mb. Let's just hope they decide to sell more space, if this ends up working out.

  • Minor quibble: I used bounce on Pine to move the old mail to gmail to have some stuff in place. Gmail, I guess since it focuses on conversations, records the date of a conversation as the date that something was last added to it. This means that, for all of the mail I bounced over that way, the dates are various times today. Once you open a conversation, it does display the correct date for each message, however.

Things that are missing:

  • Some way to combine/split conversations. This can never really be done totally automatically... someone replies to an old e-mail about a new thing, or changes too much of the message and it doesn't get globbed in correctly. It's probably a power-user feature, but it's going to drive me nuts if they don't put it in.

  • Multiple addresses recognized as one person, and the ability to search based on a person, rather than an e-mail address.

  • Safari support (who needs a nifty spell check, when you can just spell-check-as-you-go)

  • "Conversations" are cool... they definitely open quickly, and the system often catches when text has been quoted in subsequent messages and hides it by default. This is much better than the way my current preferred mail client, Apple's Mail, groups things together in "threads". Same idea, just a little better in Gmail... if only because in Apple's Mail you have to consciously set it to browse Sent Mail in addition to whichever mailbox you're browsing. Otherwise you don't see your side of the thread.

UPDATE: See the comments for some discussion of "Archive": it appears kwc and I were actually talking about different things. I was using the "y" keyboard shortcut, which does, in fact, act as I described (stripping the current label from a message, when in a Label view). kwc was comparing to the actions of the UI when choosing to perform an "Archive" operation.... which does just strip "Inbox" from a message that was still there.

Either this is a bug in the implementation of the "y" key, or it's mislabelled. I like it this way, though, so hopefully it will stay. Now, if only there were a keyboard-based way of applying labels. sigh