Sunday, September 27, 2009

The Skeptics' Guide 218

I do not even know how to tackle this episode.

I'll start with the easy bit, Bill Maher. Dr. Novella discussed during the news segment that Bill Maher is to receive an award with Richard Dawkins's name attached to it for promoting atheism. I have already discussed my thoughts in a previous post. I believe Dr. Novella was correct in characterizing Maher's position on religion as that of a pure ideologue rather than one based upon reason. How a person can denigrate "Western Medicine," which is based upon science, in favor of other forms of medicine based in part upon magical thinking, while being so vehemently against religion, which is based purely upon magical thinking/faith, is beyond me. Really, my upset is not so much with Maher, who seems to be a Hollywood player, as it is with Dawkins, who, as far as I know, still has not come out against his name being attached to an award Maher shall receive.

The interview was with Michael Vassar, who will be speaking, as part of the Singularity Institute, about problems with artificial intelligence. First, Vassar sounded as if he spoke with a lisp. Well, lispers of the world unite and take over; I did not have an issue with it. Vassar did not seem overly cagey about being interviewed, on the whole. We cannot all be Neil deGrasse Tyson. All of that being noted, I really had a hard time following his interview. It is quite possible I am not of sufficient intellect to follow his train of thought, but he appeared to be arguing that artificially intelligent computers, once they are able to self-improve in order to reach a given goal, could inevitably wipe out humanity as irrelevant to the computer's goal. He also seemed to think that whether the A.I. computer gained consciousness or not was irrelevant. (It sounds a lot like the Terminator's Skynet, but without the necessity of consciousness.) Where I became exceedingly confused was how he proposed to prevent the A.I. computer from actually destroying humanity. Dr. Novella, I think, proposed a set of rules to constrain the A.I., but Vassar thought such constraints would not work, and . . . then I dunno. It was a frustrating interview. I don't mean to be such a cop-out, but I lost it. If anyone can explain to me, in a modest amount of space and at a middle-school level, what Vassar was proposing, feel free to email or comment.

The interview on a certain level reminded me more of some past interviews with believers than of those with scientists or skeptical advocates. Vassar had his ideas and, to a certain extent, blew off the questions or ignored them. It was a different interview, to be sure. I wonder if the Rogues knew what they were delving into when they booked the interview. It seems to me Steve and Bob were a bit off their game, as if they were surprised by (to put it gingerly) Vassar's communication style.

Finally, Rebecca Watson noted on the show that she is moving to the "Mother Country" and going to live in London. I wonder if Rebecca will, over time, gain an English accent. She did not have a Boston accent, but a Jersey accent is a hard thing to break. We wish her well, but I do hope that she does not forget about us on this side of the pond.

1 comment:

  1. I thought the same basic thing about the interview with Vassar. He came off as a bit condescending to the SGU team. He sounded like someone who assumes people who question him just don't understand what he is trying to say. It might be the case, even in this interview, but I felt he did a poor job of addressing some of their questions.

    I am going off the top of my head a couple of weeks after hearing the podcast, but I think the main distinction Vassar was trying to draw in the "constraints" debate was that if you create the machines properly, you don't need constraints, and if you have to impose constraints, it is, in a way, too late. I think Bob and Steve were arguing that whether you define what the "intelligence" can do before or after you create it is irrelevant; you are still imposing constraints. Now, I don't know if they really understood what Vassar was getting at, and I am sure I don't, but I believe you probably have to know more about the details of how they create this intelligence to understand what Vassar meant. It is so far beyond me to comprehend how you create these entities that I am not surprised I don't understand how you can create an entity that could never want, or see the advantage of, destroying humanity without telling it not to, but it seems he was arguing that part of the core "code" would be the value of humanity, therefore never needing to tell it "don't destroy humanity." While that seems like a "duh" on the surface, so does "don't kill yourself" to a human. It seems like you need both: a core system that has no reason to ever want to kill us, along with an override that tells them, "hey, in case it seems like a good idea, don't kill people," haha.
