I’ve been meaning to write a blog post for the past few days, but I wanted to leave the PHP on Google’s App Engine post up top for a little while to help people find it. Hopefully it’s been helpful to a few people.
Lately I’ve been watching Season 2 of Terminator: The Sarah Connor Chronicles and thinking a lot about computers, robots, etc. I’m unsure whether robots could just start slaughtering us tomorrow, but if they do I’d like to either be responsible for it or responsible for putting up a good counterattack. None of this bystander stuff for me.
On a slightly more realizable route, I’ve been thinking a lot about what tends to be classified as “Artificial Intelligence” and the whole AI field. One of my favorite topics is cars that drive themselves. If I didn’t have so much school work due this week, I would start working on this problem. From what I’ve read, a lot of the attempts out there deal with two pieces: 1) teaching the computer how to drive a car, and 2) testing your car by having it navigate across the desert.
I think both of these ideas are pretty poor ways to approach the problem. First off, why write a program that supports the ‘learning’ function of a vehicle? It’s far easier to write a program that emulates a human driver than to develop a program that facilitates the development of a human driver. I think of the car that taught itself how to drive only to freeze up when approaching a bridge, because there was no green grass on the side of the road. My limited understanding (from reading an article a year or two ago) led me to believe the car had determined that grass was important to have — something any of us with a brain know is untrue. Any ‘manually programmed’ approach to the problem wouldn’t be freaked out by water, mulch, or leaves… the road is what’s important!
Then there’s the issue of testing cars in the desert. There is a large race held in the desert each year where teams have their cars try to complete a course without any human interaction. Cars have been getting better and better at it over the years, but it’s not a very realistic scenario. Personally, I would be a bit freaked out if my Accord just started off-roading it to get to the destination. Yes, there are similar obstacles in the desert and on the highways of America, but the highways of America also have paved surfaces, guardrails, painted lines, curbs, etc. As I see it, the problem isn’t designing a program to best avoid a pothole, but to avoid other human-driven vehicles and kids running out into the road chasing those pesky balls.
I don’t think anything about driving a car is programmatically challenging. Sure, there are a few minor computer vision issues to be solved here or there, but once the tools are available I don’t see it being terribly hard to pull off… at least a driver’s-ed-level computerized pilot.
Most people will just write off what I’m saying as the rambling of someone who doesn’t know what he’s talking about. I encourage that: the fewer people expecting me to do anything, the less likely I am to get complaints when things perform strangely.
A corollary to all this has been some thinking about intelligence, what it really means to be intelligent, and what it means to act human. I think we do a lot of emulating, and very few actually intelligent things. If we were all good at the intelligent part, there would be far more Einsteins in the world and far fewer telemarketers. Not that telemarketers are dumb — they are quite good at emulating things. From a young age we don’t ‘develop’ many new skills on our own. None of us invented the concept of walking, talking, typing, or doing math. We all saw someone else doing it and figured we should copy them. With enough practice we managed to copy them well enough to produce the desired result. I guess we can ‘adapt’ in some sense… which is really the process of combining emulations to form something a bit different. It’s far less exciting to think of the magical learning process when you’re really just applying a fancy copycat algorithm.