I have seen the future

Lots of people, including me, have wondered where the internet is heading. What will be the next big boom? What will the next big technology be? Too late – Google has beaten everyone to the punch.

It’s been predicted before and, maybe, I’m just a little late in “getting it”, but XMLHttpRequest (aka AJAX) and web services (e.g. SOAP) are totally where the web is heading. Combine this with microformats and the semantic web and you can start to see where things are going.

In the next five years it will no longer be acceptable to simply post XHTML to a website. Users will start demanding semantic XML delivered over HTTP via SOAP or XML-RPC so they can consume the information you post online in any manner they wish. The browser space will explode with new, alternate viewing methods (RSS aggregators being the first in a long line of new products that will alter the way we consume online material).

If having an Internet connection isn’t already a requirement it will be within 10 years because, at that point, your operating system will rely on an Internet connection to function. If Google has anything to say about it you’ll most likely be booting off of their massive cluster. This has been widely anticipated for some time and I can see it’s already happening (GMail being a good example).

So, where are the opportunities? I see huge opportunities on the consulting end, helping people bring web services online quickly. I also see a huge arena for new specialized applications geared around specific web services, along with new protocols based on those services (e.g. every photo sharing site publishing APIs that work the same way across sites). The specialized applications I mentioned would use these new protocols to create rich applications that let you consume information in totally new ways. For instance, I could browse to img://cameron@flickr.com to view Cam’s photos. Or you could go to acronym://SOAP and have it give you a definition for SOAP. My favorite would be for someone to create info://Joe+Stump, which would leverage the various APIs from Wikipedia, Google, etc. into a single “fact sheet” for the term provided. The possibilities are endless in this arena.

The interesting thing to mention here is that programming languages will, for the most part, become a moot point. It won’t matter what language I write my web services in or what language you use to consume them. With AJAX, XML and web services the future of the web is looking very interesting.
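To make that concrete, here is a minimal sketch of the kind of XML-RPC envelope a photo-sharing client might POST over HTTP. The method name, parameter and endpoint are all made up for illustration; a real service would publish its own.

```shell
# A hypothetical XML-RPC request a photo-sharing client might send.
# The method name and parameter are invented for this sketch.
payload='<?xml version="1.0"?>
<methodCall>
  <methodName>photos.list</methodName>
  <params>
    <param><value><string>cameron</string></value></param>
  </params>
</methodCall>'

echo "$payload"
# To actually send it, you would POST it as text/xml, e.g.:
#   curl -X POST -H 'Content-Type: text/xml' --data "$payload" http://example.com/RPC2
```

That envelope structure (methodCall, methodName, params) is essentially all XML-RPC is, which is why any language can produce or consume it.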

On Apple moving to Intel

A lot of people on the Internet are pissed about Apple moving to Intel processors. A lot of people couldn’t care less. Most are shocked this even happened. I, personally, don’t really care. At first it confused me that they would leave the 64-bit PPC platform in favor of a 32-bit x86 architecture, but I have since seen that they are planning on using 64-bit chips that Intel has yet to release. So here are my only concerns about Apple’s move to Intel.

  1. Apple must continue to ship their machines with the OS X goodness I have grown to love.
  2. Any computer running OS X on Intel chips must be designed by Apple.

I couldn’t care less about everything else. As long as I get faster, cheaper and better-performing computers that are designed by Apple and run OS X, I couldn’t care less what chip they run on. Though I would have been bummed out if they had switched back to 32-bit CPUs.

Copy Songs from your iPod

Lauren’s iBook died recently and, with it, went all of her songs. I told her we could most likely recover the songs from her iPod, so it shouldn’t be a big deal. However, as many of you probably know, you can’t simply restore your music library from your iPod. Thankfully, there is hope! Being the UNIX geek I am, I whipped up a shell script to get all the songs off the iPod, and it worked like a charm.


#!/bin/sh

IPODPATH="/Volumes/iPod/iPod_Control"
DST="/tmp"
TYPES="mp3 mp4 m4a m4b"

for ext in $TYPES
do
    # Quote the pattern so the shell doesn't expand it before find sees
    # it, and escape the semicolon that terminates -exec.
    find "$IPODPATH" -name "*.$ext" -print -exec cp {} "$DST" \;
done

I’m sure I’ll be getting a letter from Apple any time now, but there it is. Using simple UNIX commands you can easily recover the songs from your iPod. Maybe Apple will listen to its customers and create an avenue to back up and recover one’s music library. Until then, UNIX is your friend. By the way, for those of you wondering, the m4b extension is for audiobooks. You may have other extensions on your iPod as well. To find out what extensions you have, I found the following command most helpful.


find /Volumes/iPod/iPod_Control/ -type f -name '*.*' | awk -F. '{print $NF}' | sort | uniq

Of course, you’ll need to change the directory names above to match the actual name of your iPod (mine is simply named “iPod”). Once mounted, it should show up in your /Volumes folder.

By the way, those of you with Cygwin installed should be able to use these scripts on your Windows iPods as well, after modifying the paths.

Lauren gets all the cool toys

Argh! Lauren has been getting all sorts of fun new toys in the last few weeks. What do I get? Nothing. For the first time in my life my significant other has what is, arguably, a better machine as far as technical specs go. The following is a list of new toys she has gotten in the last few weeks, amazingly with my blessing.

  1. 1.2GHz iBook – Faster CPU than my machine, but I’ve got a 15-inch screen, much more RAM and a larger hard drive. It’s a zippy little machine for a mere $999. Probably one of the best laptop buys right now.
  2. 4GB iPod Mini – She’s training for a half marathon and running with a full size iPod was a little cumbersome for her. So we got her a mini with an arm band. She’s all set now.

To her credit she managed to take me to the Apple store twice in less than a week and keep me from buying myself one of those hot new Apple displays.

Backups / Failed RAID Array

So, you’re really cool. You’ve set up a huge RAID1 array on your server. You sleep at night knowing that the chances of losing two disks before you can rebuild the array are pretty small. And then on a nice cool Monday morning your systems administrator tells you that a drive in the array has failed. You order a replacement and have it sent overnight. On Tuesday night the systems administrator calls you with the worst possible news: the second drive is failing and the array has not been rebuilt. You frantically try everything you can; upgrade the kernel, check for bad RAM, fsck the hard drives. Nothing works. Your data, it appears, is lost forever.

You assess the situation. You, personally, have lost 3 years of email. A client of yours has lost a database application with about 80GB of data. Other clients have lost various amounts of email as well. Sound like something that doesn’t happen? Think again. It happened to me this week. I’ve learned my lesson: never trust RAID. It doesn’t matter how many drives you have, never trust the array. So what are my options?

  1. Drink heavily to dull the pain.
  2. Contact a hard drive recovery operation and find out what your options are.

It looks like it’s going to cost me $100 to get the drive evaluated. If they can recover any of the files off the drive they will send me a list of files and a solid quote (which will be between $500 and $2400). I should know the outcome of the evaluation by the end of next week. I can’t afford $2400, but I’d think long and hard about the $500. I didn’t even know such places existed, but if it works it will be a total life saver.

93% of companies that lost their data center for 10 days or more due to a disaster filed for bankruptcy within one year of the disaster. 50% of businesses that found themselves without data management for this same time period filed for bankruptcy immediately.

Needless to say, I’ve spent a lot of time over the last couple of days setting up rsync scripts to back up email, databases and web folders. I first rsync database dumps, mail and web folders into the user’s home directory. I then rsync my own directories and other important directories down to my 300GB FireWire hard drive.

If I end up doing the disk recovery service I’ll post a review to the site and let everyone know how it goes.

YATR

Let me be the first to say that Tiger isn’t as great as everyone says it is. In fact, the only reasons to upgrade that I can see are Spotlight and the new Mail.app. It’s supposed to be faster, but I haven’t noticed any increase in speed on my 1GHz/768MB PowerBook. This could be due to me merely upgrading instead of doing a clean install. So what have I noticed that doesn’t have me roaring over Tiger? (C’mon, an entire review without using a tiger cliché?)

  1. If you have lots of email in Mail.app currently, watch out. The import into the new version of Mail.app barfed on one of my folders, which I had to delete in order for it to finish importing my email (about 25,000 emails). After that was over the fun had just begun: every time I clicked on a message, Mail crashed. After a reboot and some tinkering I finally got it working, but it left a really bad taste in my mouth.
  2. If you’ve been mucking around with the UNIX settings, you need to watch out too. The upgrade process appears to have modified, updated or simply deleted many of my changes to files such as /etc/profile and some of my Vim syntax files. Quite annoying.
  3. Spotlight is slow. I’m not sure whether this has anything to do with not doing a clean install, but it’s not the “lightning fast” that Apple purports it to be. Also, unless I’m missing something, it matches “any” of the search terms and doesn’t let you specify that documents must match “all” of them. Quite annoying. Also, at the bash prompt I can’t cd into a Smart Folder. This, simply, sucks. If I create a Smart Folder for “Word Documents” it should act like a regular folder, which would let me create intelligent backups and use rsync to back them up. This definitely sucks for people like me who use scp to move files from computer to computer.

So are there any things that I do like? Yeah, there are a few. I don’t think Apple touted the new Mail application enough. It’s, quite simply, a great upgrade. How did I live without Smart Folders before? I’ve added Smart Folders for “Today”, “Yesterday”, “Flagged” and “Pictures”, which let me quickly find emails matching those criteria.

Also, Dashboard is a welcome addition for me. A lot of people, including me, ranted about it being a rip-off of Konfabulator. The thing I like about Dashboard is that a quick keystroke lets me see time, AirPort status, a dictionary, Wikipedia, etc. I’m totally addicted to browsing Wikipedia articles in this fashion. Could someone please offer a PHP/MySQL lookup module?

I think Apple would have gotten a LOT more fanfare if they had waited a few months and released the new version of iLife with Tiger. As it stands, the two major upgrades (Spotlight and Dashboard) are great additions, but probably shouldn’t have stood on their own as the main reason to upgrade.

Restarting the Dock in OS X

The problem is that when your Dock freezes in OS X it can be difficult to get to any of your programs, let alone restart it. Luckily, when mine froze today I had a Terminal open, and this is how I restarted the Dock without having to power down my damn computer.


jstump@Joseph-Stumps-Computer jstump$ ps auxw | grep Dock
jstump  3231   0.0  0.5   169272   4204  ??  S     9:48AM   0:04.75 /System/Library/CoreServices/Dock.app/Contents/MacOS/Dock -psn_0
jstump  6868   0.0  0.0    18644     92 std  R+   11:45AM   0:00.00 grep Dock
jstump@Joseph-Stumps-Computer jstump$ kill -HUP 3231

The second field in ps's output is the PID (Process ID) for that process. Using kill -HUP we can restart the Dock without crashing everything. You’ll see your Dock disappear for a second and reappear in working form a moment later.
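That two-command dance can be collapsed into one line by letting awk pull the PID column out of ps for you. The sketch below demonstrates the extraction on the current shell’s own PID so it can be run on any UNIX box; the Dock-specific variants are in the comments.

```shell
# Sketch: extract the PID column (field 2) from ps output with awk.
# Demonstrated here on the current shell ($$) so it runs anywhere.
pid=$(ps auxw | awk -v me="$$" '$2+0 == me+0 { print $2; exit }')
echo "found pid: $pid"

# For a frozen Dock, the equivalent one-liner would be:
#   kill -HUP "$(ps auxw | grep '[D]ock' | awk '{ print $2 }' | head -1)"
# (the [D] trick keeps grep from matching its own process)
# On OS X you can usually skip all of this and just run: killall Dock
```

The grep '[D]ock' trick is worth remembering: the bracket pattern matches "Dock" but not the literal string "[D]ock" in grep’s own command line, so grep never finds itself.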

Subversion vs. CVS Review

Like many, many coders before me I have used CVS to maintain my code. I’m not as disciplined as others, but I am comfortable using CVS and I think it’s fairly essential for a company to version its code (CVS and Subversion aren’t the only kids on the block in that respect).

As part of setting up a development environment at Enotes.com after our migration, I needed to finally bring the code into a versioning system of some sort. It was at this point that I thought I should look into Subversion. I was more than capable of setting up CVS, but CVS has some serious flaws that really annoyed me. Here are the reasons I went with Subversion instead.

  1. Subversion lets you delete files and folders from the repository with svn delete. CVS lets you delete files, but isn’t very kind about removing folders. SVN, on the other hand, recursively deletes folders, and you don’t have to physically remove a file before removing it from the repository like you do in CVS (rm -f file.php && cvs remove file.php).
  2. Subversion can use Apache 2.0 as the access point for repositories. I liked the idea of using an application I was already comfortable configuring as the front door to my repositories. How do you authenticate? HTTP auth. How do you connect via SSL? Buy an SSL certificate. Pretty simple, and with mod_dav_svn you can browse the repository from a web browser by default.
  3. Hooks in CVS were always confusing to me. I’m not sure why, but I never got around to setting them up. In Subversion there is a directory named hooks in each repository that lets you create shell scripts that are fired at different points in the commit process.
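
As a sketch of how simple those hooks are: Subversion runs hooks/post-commit with the repository path and the new revision number as its two arguments. The log path below is purely illustrative.

```shell
#!/bin/sh
# Sketch of a Subversion post-commit hook. Subversion invokes it as:
#   post-commit REPOS-PATH REVISION
# Save it as hooks/post-commit inside the repository and chmod +x it.
REPOS="$1"
REV="$2"

# Example action: append a line to a log file (path is illustrative).
LOG="${LOG:-/tmp/svn-commits.log}"
echo "committed r$REV to $REPOS" >> "$LOG"
```

From here it’s a short hop to having commits send email or kick off a build, which is exactly the kind of thing I never managed to wire up in CVS.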

One final note: Subversion emulates a lot of CVS commands. For most of what you have to do, replacing cvs with svn will work just fine. Overall, I like Subversion. I don’t think it’s exceptionally better than CVS, just easier to work with.