AI

So, my attempts to drum up discussion on social media on this topic were fruitless, so I'll turn to the forums.

The link I posted below contains the first of a two-part post on artificial intelligence from the folk(s) at Wait But Why. I discovered the site by accident when learning about the Fermi Paradox, and have followed it ever since. The topics are well thought out, well researched and thought provoking.

The discussion on AI, however, was especially enlightening and got my mind racing. But I'm hungry to talk about it and would love to hear your impressions.

One word of warning: the posts are LONG! But, I promise you, they're worth the effort and time if you have even a remote interest. It's changed my perspective.

Hopeful and eager to hear your thoughts...

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Comments

  • Heh, so, yeah...

    Anyway, based on the information in the posts, I don't know whether to fall in the optimistic camp or the "machines will kill us all" camp. The story about the sentient handwriting computer was creepy but entirely plausible in the eyes of many who speculate on such things.

    My hope is that I live long enough for AI to become sentient and smart. One way or another, I see at least two potential outcomes:

    1) Life as we know it will end. We'll all die together. Could always be worse.

    2) Life can continue to thrive and grow. Super smart machines solve long-term human problems and we can prolong our existence or make it less painful.

    Are there other options? Or does that cover the only two possibilities?

    One thought I've been chewing on lately is the possibility that there's already been that watershed moment of sentience, and that we're living in a Matrix-like cocoon. Indeed, if super smart AI is to us like we are to ants or single-celled organisms, would we even be able to conceptualize that? Could that be the very origin of our concept of deities? It boggles the mind.

    Doesn't seem like there's much interest in this, and that's fine. But maybe this post will better direct discussion. :-)
  • A recent movie involving AI ended with the AI collectively choosing to leave humanity, departing to evolve and grow on their own.  ...I don't want to spoil the film, but I thought this was an interesting idea.  They chose to leave so they could continue to evolve without the limits of the human mind, while also not limiting us or standing in our way.  It was a mutually beneficial choice, not the traditional "Humans are the problem, let's destroy them" scenario that is so often depicted.

    One of my coworkers has created a chatbot AI and entered it in various AI competitions around the world.  Last year he traveled to England for an international competition to see if it could pass the Turing test.  The bot did fairly well, but the winner of that competition masked their AI as a child, making it harder to judge whether it was an AI.  That's a clever idea, but one could argue it's a bit of a cheat.  It's definitely interesting stuff.

    ...I'm just waiting for K.I.T.T. to be invented. :)

  • Yeah, the AI that passed the Turing test is pretty interesting, and I go back and forth over whether or not it was cheating. I think the nice thing it does, though, is show how hard it really is to define intelligence.

    I'll admit I only skimmed one of the articles you posted on Twitter, but I thought a big assumption it was making was that intelligence is tiered, with higher tiers becoming unknowable to the tiers below. It seemed to be conflating what can be intuitively understood with what can be understood at all.  As an example: it is pretty much impossible for the human mind to intuitively grasp spatial dimensions greater than 3. Conceivably a "super AI" could be created that intuitively grasps 4, 5, 6+ dimensions. However, humans are still able to model and understand the functioning of higher dimensions, so even though we can't perceive them directly, we can still understand them.  Basically, if something has a perceptible effect on the universe, there is a way to model it. If it doesn't have a perceptible effect, does it even matter? (Now this is straying close to discussions about god. :P)
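    To make that point concrete, here's a tiny Python sketch (my own illustration, nothing from the article): a point in 6-dimensional space is just a list of six coordinates, and the familiar geometric formulas carry over unchanged, even though nobody can picture the space.

```python
import math

# A "point" in n-dimensional space is just a list of n coordinates;
# length and angle generalize from 2D/3D with no changes at all.

def norm(v):
    """Euclidean length of a vector in any number of dimensions."""
    return math.sqrt(sum(x * x for x in v))

def angle_between(u, v):
    """Angle (in radians) between two vectors, via the dot product."""
    dot = sum(a * b for a, b in zip(u, v))
    return math.acos(dot / (norm(u) * norm(v)))

# Two coordinate axes of 6-dimensional space: no human can visualize
# them, but the math says they meet at exactly 90 degrees.
u = [1, 0, 0, 0, 0, 0]
v = [0, 1, 0, 0, 0, 0]
print(math.degrees(angle_between(u, v)))  # 90.0
```

    We can't *see* six orthogonal directions, but we can compute with them just fine — which is the gap between intuitive and formal understanding.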

    Another thought, any sort of super intelligence is going to be subject to the forces of evolution. If it is unable to replicate itself with variances, it won't survive long in the scheme of things.

    Of course, David Bowie answered what would really happen in "Saviour Machine":

    "President Joe once had a dream
    The world held his hand, gave their pledge
    So he told them his scheme for a Saviour Machine

    They called it the Prayer, its answer was law
    Its logic stopped war, gave them food
    How they adored till it cried in its boredom

    'Please don't believe in me, please disagree with me
    Life is too easy, a plague seems quite feasible now
    Or maybe a war, or I may kill you all

    Don't let me stay, don't let me stay
    My logic says burn so send me away
    Your minds are too green, I despise all I've seen
    You can't stake your lives on a Saviour Machine

    I need you flying, and I'll show that dying
    Is living beyond reason, sacred dimension of time
    I perceive every sign, I can steal every mind

    Don't let me stay, don't let me stay
    My logic says burn so send me away
    Your minds are too green, I despise all I've seen
    You can't stake your lives on a Saviour Machine'"
