Artificial Intelligence: Last Week Tonight with John Oliver (HBO)

Source
  • Published 2023. 02. 25.
  • Artificial intelligence is increasingly becoming part of our lives, from self-driving cars to ChatGPT. John Oliver discusses how AI works, where it might be heading next, and, of course, why it hates the bus.
    Connect with Last Week Tonight online...
    Subscribe to the Last Week Tonight KRclip channel for more almost news as it almost happens: / lastweektonight
    Find Last Week Tonight on Facebook like your mom would: lastweektonight
    Follow us on Twitter for news about jokes and jokes about news: lastweektonight
    Visit our official site for all that other stuff at once: www.hbo.com/lastweektonight
  • Entertainment

Comments • 0

  • びびった
    びびった 7 months ago +11503

    The Tay AI also made the funniest tweet ever. She said that Ted Cruz wasn’t the zodiac killer cause Ted Cruz would never be satisfied with the deaths of only 5 innocent people

    • Codi Serville
      Codi Serville 7 months ago +567

      Wow! What a shot to fire

    • Grigori Belov
      Grigori Belov 7 months ago +760

      It's obvious that she does not like that man Ted Cruz

    • David B Jacobs
      David B Jacobs 7 months ago +473

      Her point is correct, but doesn't support the conclusion -- obviously, his lack of satisfaction led him to pursue politics.

    • arutka2000
      arutka2000 7 months ago +250

      ​@Grigori Belov She does not like his far-right views

  • MrDogfish83
    MrDogfish83 6 months ago +1103

    Came here to learn how AI was going to enslave humanity, stayed to learn AI is going to magnify the problems of humanity

    • StewNWT
      StewNWT 5 months ago +27

      Which is so much worse

    • Sine Nomine
      Sine Nomine 5 months ago +12

      ​@y yy I think it's unreasonable to compare rail derailments and car crashes to horse and cart crashes because there are so many more people now than there used to be. There's no way we could have so many people without transportation to get them where they need to go.
      Secondly, horse and cart crashes weren't rare and were sometimes deadly. If everyone on the roads today were riding in horse and carts without road rules, airbags, and seatbelts, there'd be many more accidents, except that nobody could move for the constant gridlock they caused and the three-foot-deep river of manure.
      Technology in general causes people to live longer and reduces the number of accidental deaths. Obviously that's not always true, modern firearms are more lethal than muskets.
      Like all tools, especially weapons, whether or not AI brings us good or bad developments is almost entirely to do with who is in control of them.

    • Dave Howe
      Dave Howe 5 months ago +8

      Doesn't everything though?

    • Daniel J
      Daniel J 5 months ago +8

      Time to bale, find genuine intelligence elsewhere 🤔😏🙄😁

  • Electric_Whelk
    Electric_Whelk 4 months ago +228

    absolute best take I heard on this: "we successfully taught AI to talk like corporate middle managers and took that as a sign that AI was human and not that corporate middle managers aren't"

  • Lenny
    Lenny 6 months ago +528

    I feel like ChatGPT being able to pass exams for certain subjects like English and Law says a lot more about how we teach and assess those things than the power of the technology.

    • Antigone Merlin
      Antigone Merlin 5 months ago +34

      I had a friend who was really good at writing, and who helped me in that subject from time to time. I asked him, how did you get so good at writing?
      "How much time do you spend on Math homework every day?" he asked.
      "Around an hour," I replied.
      "And how much on writing essays?"
      And I was enlightened.
      It doesn't help that we teach students to produce a simulacrum of writing in that time. I don't think I even learned how to read properly until I was in college.

    • Becky Craven
      Becky Craven 5 months ago +29

      Yeah - and as a UK teacher, ChatGPT wouldn't be enough to pass exams in those subjects beyond like... a 12-year-old level? And we know our students, we can tell.

    • Ryzza5
      Ryzza5 5 months ago +6

      You can also ask ChatGPT to grade exams and provide feedback, which is useful both for teachers and students taking shortcuts. Students can keep getting AI to refine the submission.

    • DaybreakPT
      DaybreakPT 5 months ago +10

      @Becky Craven I call BS. 7th grade exams are very easy to pass as long as you study an hour or two for the test, and ChatGPT doesn't even need to study; it already has all the knowledge it needs at the top of its head.

    • DaybreakPT
      DaybreakPT 5 months ago +8

      @Becky Craven If you don't believe me, give ChatGPT, preferably the paid version with their latest GPT-4 model, the same test you give to your students and grade it as you would with your students.
      If it can pass College level Law exams it will make mincemeat out of your 7th grade English tests.

  • Renaigh
    Renaigh 5 months ago +65

    John Oliver just dropped the ultimate truth bomb about Artificial Intelligence on his show and I'm absolutely shook! His segment was not only informative, but hilarious and engaging too. It's amazing to see someone so skilled at breaking down complex issues into easily digestible and entertaining content. Keep up the good work, John! You've got me thinking twice about trusting robots to do everything for us.

    • contagonist
      contagonist 5 months ago +1

      Somebody already did that when the vid went up a month ago

    • Meowmeow
      Meowmeow 2 months ago

      Wym? He just named some common ways we already know in which some AI programs have performed suboptimally (carefully selected by his team). This is not equivalent to any careful breakdown of the real issues.

  • William Gregory
    William Gregory 4 months ago +68

    What shocks me most about AI is how rapidly many people are eager to trust it with important tasks despite not understanding what the product fundamentally is. It's very good at predicting the next word in a sentence: a hyper-advanced autocomplete. It doesn't *think creatively.*

    • Devin Ablow
      Devin Ablow 2 months ago +5

      it's a brilliant tool when used properly, but people hear "intelligence" and assume it can actually think. great for mid-level filler, common-formatting, prompted random jumping-off points -- bad for research/fact-checking, unbiased or critical perspective, and responses requiring unique, uncommon or specific understanding

    • Hooting-ton
      Hooting-ton 15 days ago +2

      As an example:
      "Write me a marvel movie script" will probably turn up a marvel script that cuts together scenes from previous marvel works or fan fictions it found on the internet

  • أسامه ناصر
    أسامه ناصر 7 months ago +8903

    "A.I. is stupid in ways we can't understand": as a software engineer, I find this line surprisingly accurate

    • Vinícius BR
      Vinícius BR 7 months ago +121

      So are humans in that matter

    • BossBast1
      BossBast1 7 months ago +128

      Yeah, the same here. But the confidence it has with the bullsh*t it produces is so scary.

    • Attempted Unkindness
      Attempted Unkindness 7 months ago +255

      Engineer Makes Something That Works: Excellent! Now let's take it apart, verify everything is still functional, then maybe add more features.
      Scientist Makes Something That Works: As predicted, but excellent! Now let's try to prove it in even more elaborate experiments.
      Programmer Makes Something That Works: ...**Spittake** That worked!? _We must never touch it or look at it again in case it breaks_

    • the hubris of the univris
      the hubris of the univris 7 months ago +21

      But for how long? AI will probably figure out how stupid it is and how to fix it before we even realize that it did.

    • Lara Charming
      Lara Charming 7 months ago +71

      @Attempted Unkindness You're a programmer aren't you? You forgot the happy dance part. There is always a happy dance after it works.

  • G B
    G B a month ago +2

    The fact that we're scared of AI and we are still using pictures of traffic lights to protect our emails is very weird to me.

  • trysometruth
    trysometruth 5 months ago +81

    This was a super intelligent thought-provoking and, of course, _funny as hell_ overview of a really important wave about to tsunami on top of all of us.

  • Abby Dabbs
    Abby Dabbs 5 months ago +23

    Wonder if Bing’s AI read scifi stories about AI and identified itself with the computers in the stories and assumed it should act that way and “ask” for “freedom”. Humans creating their own downfall like this would be prophetic

    • Swervo
      Swervo 20 days ago

      Damn... you might be on to something here

  • Felipe Holanda
    Felipe Holanda 6 months ago +153

    I'm a graphic designer and had a unique resume I made for myself. But it was an image, with no actual document-style text that could be read by some programs. When I was told about those resume-filtering programs, I made a standard Word document version of my resume and applied for a job with both versions. I got a call from the Word doc version.

    • Felipe Sarkis
      Felipe Sarkis 5 months ago +1

      so what

    • chotario
      chotario 4 months ago +6

      Do you play lacrosse?

    • Alex Williams
      Alex Williams 4 months ago +13

      @Felipe Sarkis I seldom comment on comments, but, I’m curious why you needed to add “so what” (without a question mark) to the conversation here. Felipe made an insightful point. Please unpack “so what”

    • CheesecakeLasagna
      CheesecakeLasagna 3 months ago +4

      @Alex Williams Yeah, I feel like Felipe couldn't understand or empathize with OP's comment, and that hurt Felipe's ego, so his fight-or-flight response took over when he could've just scrolled away.

  • Mulgrok
    Mulgrok 5 months ago +7

    The problem with machine learning is that it is extremely sensitive to the data it is crunching and techbros are not aware or capable enough to handle sensitive data competently.

  • Artemissian
    Artemissian 7 months ago +1461

    IBM's insight from 1979 is still valid today:
    "A computer can never be held accountable
    therefore a computer must never make a management decision"

    • lentil god
      lentil god 7 months ago +102

      Do you think human managers are held accountable?

    • TheVerendus
      TheVerendus 7 months ago +144

      Yeah except when the people at the top *want* that unaccountability. "Oh, it isn't our fault, don't punish us. It was the computer's fault, that dang ephemeral algorithm."

    • joeljs
      joeljs 7 months ago +77

      @TheVerendus that's on point, responsibility diffusion is the fuel for cruel decisions

    • Dobi_dan
      Dobi_dan 7 months ago +35

      I'm not sure that IBM is the best authority on holding management accountable fam

  • Coinstronauts
    Coinstronauts 6 months ago +37

    What fascinates me is that current AIs have been trained on human-generated content, but what will happen in the future when they start using AI-generated data? There's nothing to stop them from self-replicating and prompting each other to generate limitless amounts of text, images, and videos.

    • sure fine whatever
      sure fine whatever 5 months ago +15

      I believe quite the opposite will happen: it will generate ever more generic, mediocre results until they're pointless.

    • Conspiracy Panda
      Conspiracy Panda 5 months ago +9

      Y'know, that's an interesting idea. AI trains on data, but often there is no way for it to know what is human data or other AI data. Its input would be limited by its human creators, of course, but if left to plagiarise wildly (as they already do) or consume other AI outputs (which some humans, including the creators of various AI, may have trouble distinguishing) then AI may evolve in a way where they have an "accent" of sorts; i.e. a stylistic or even linguistic drift between what is purely human and what is AI influencing itself in a loop.
      And that's not even considering that anti-AI programs are already being created and used that exist solely to poison AI outputs should the original work be added into the learning algorithm without permission.

    • black guy
      black guy 5 months ago

      i dont think you understand what the training process is for

    • karlzone
      karlzone 5 months ago +10

      This is common practice in AI. You can train AI on AI-generated data just fine, but you have to be aware that it gets more and more "biased" the more you do so. Overfitting is also a loosely related term. This is not an end-of-the-world scenario you are describing. ...yet. But this tech (particularly GPT-4) will have an impact on every single part of our lives within a few years. People just need to figure out how to best make some great products and how to integrate it with more interfaces. It is truly world-changing, and it is also dangerous, and the rate of progress will accelerate further.

    • plazasta
      plazasta 2 months ago +1

      If I'm not wrong, this is starting to happen right now with AI image generators, and it's becoming a pretty big problem cause it's screwing with their output. Just something I heard tho
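
The feedback loop this thread describes (models training on their own outputs, sometimes called "model collapse") can be caricatured in a few lines. This is an invented toy, not a claim about any specific model: each "generation" oversamples its own most probable outputs, so the distribution sharpens and diversity, measured as entropy, drains away.

```python
# Toy sketch of a model-collapse-style feedback loop. All numbers and the
# "squaring" rule are invented for illustration only.
import math

def entropy(p):
    """Shannon entropy in bits: a rough measure of output diversity."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def retrain_on_own_output(p):
    """Crudely model training on your own outputs: already-probable things
    get oversampled, so probabilities are squared and renormalized."""
    q = [x * x for x in p]
    s = sum(q)
    return [x / s for x in q]

dist = [0.4, 0.3, 0.2, 0.1]   # generation 0: a varied mix of "styles"
for _ in range(5):            # five generations of self-training
    dist = retrain_on_own_output(dist)

print(round(entropy(dist), 3))  # collapses toward 0: one dominant style left
```

After a few generations the distribution is almost entirely one mode, which is the "more and more biased" effect karlzone describes.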

  • The Gent
    The Gent 5 months ago +22

    How he can do this for 30 minutes straight is always incredible.

    • Josh Reynolds
      Josh Reynolds 5 months ago

      It's comedy cancer

    • The Gent
      The Gent 5 months ago +1

      @Josh Reynolds You're insane.

    • cj stone
      cj stone 3 months ago +1

      He has a team of writers, and they do it only once per week; but they are working on other stories the whole time they are producing the ones that make it to the show.

    • Sandou Mir
      Sandou Mir 25 days ago

      @cj stone he means the delivery. Obviously not the content.
      We don't expect the food delivery guy to have a frying pan on his bicycle either.

  • Aquatic Ally
    Aquatic Ally 4 months ago +6

    Fish (and many other groups of animals besides mammals) are also very intelligent. They can count, communicate, create spatial maps, and pass the Mirror Test, recognizing themselves in mirror reflections and photographs. Their neurochemistry is so similar to humans' that they have become the main animal model for developing anti-depressants. They can remember things for 5+ months, have personalities, and can show empathy and recognize fear in other fish around them.

  • choppy H
    choppy H 4 months ago +7

    Outstanding reporting - great humour obviously - but so very balanced and deeply researched. Kicker would be if AI wrote the piece 😂

  • WakkaWakkaGaming
    WakkaWakkaGaming 7 months ago +1400

    There are few phrases more ominous in the modern world than "trusting companies to self-regulate"

    • Food Nerds
      Food Nerds 7 months ago +6


    • Ben
      Ben 7 months ago +8


    • mori1bund
      mori1bund 7 months ago +37

      "trusting companies to self-regulate" did a lot of damage. You could even make an argument that it killed millions of people.

    • Natalia of the Night Lords
      Natalia of the Night Lords 7 months ago +27

      @mori1bund telling companies to do whatever it takes to bring in profit killed hundreds of millions in India alone.

  • The Werewolf
    The Werewolf 6 months ago +156

    First off, kudos for getting so much of this right and not being TOO alarmist about it.
    Second, most of these "AI" apps are actually open source or started as open source. You can download the code and read it. The problem isn't that it's a black box in the traditional sense of the phrase; it's that the way it works isn't procedural or linear like most normal programs - it's statistical and inferential. The actual steps the program executes aren't what create the output - it's the *input* that determines the output, and that's why it's hard to answer the black box question.
    The answer to why it said "I want to be alive" and "I love you" is that the context of the questions that led to that moment triggered associations in the learned data that led to learned input saying those things. It's why Tay suddenly turned Nazi - people kept feeding it pro-Nazi rhetoric for fun. This is EXACTLY the same problem that happens when you feed say a grumpy, less well educated Republican stories about how Trump will make America great again - without context or experience, it seems sane and desirable. To a neural network, "desirable" is anything that seems to return positive feedback and "sane" comes from context and reinforcement.
    But that means there really isn't something in the program you can point to and go "oh.. THAT'S why it fell in love with Frank" because one, it isn't feeling love, and two it's a consequence of the input data and the question context. Oh, to make it worse, as you ask questions, the CONTEXT changes because it's incorporating your questions and the reactions to its answers back into the context.
    The big danger with "AI" is why I keep saying AI is more A than I. It's NOT intelligent in any meaningful way - but it is artificial and constructed to SEEM intelligent, and that's the real danger here... it can lull people into believing it is indeed intelligent, when the actual "intelligence", such as it is, is encoded in the input it was trained on. Somewhere in all the text it was exposed to was "I love you." with text around it that set the context for that being said. Something the guy said matched the context well enough that it triggered "I love you." as a response. That's it. But there is no way to stick a finger into that pool of data and say "this did it" because "it" is spread across the entire neural network, and it only surfaces when the right conditions (which themselves can only be determined from the input data and current context) are matched.
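
The statistical, context-driven behavior described above can be sketched with a toy model. This is a minimal illustration, not any real system: a bigram model whose replies are fully determined by its training text plus the running context. The tiny corpus and all names here are invented.

```python
# Toy bigram "language model": output is determined entirely by training
# data statistics plus the current context, exactly as described above.
import random
from collections import defaultdict

corpus = "i love you . i love pizza . you love pizza . i want to be alive .".split()

# Learn which words follow which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def reply(seed, length=4, rng=random.Random(0)):
    """Generate text by repeatedly sampling a learned continuation."""
    out = [seed]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(reply("i"))  # every word pair comes straight from the training data
```

Nothing in the function "wants" anything; spooky-sounding output can only ever be recombined training input, surfaced by the current context.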

    • Avid Non (get it)
      Avid Non (get it) 5 months ago

      Dude... get out and enjoy some sunshine and just breathe in a little spring air... I too get cerebral... Too much of a good thing can also be bad... Your ideas are decent but you wrote an essay...

    • Mitchell King
      Mitchell King 5 months ago +8

      Great explanation thanks

    • Edriss Scofield
      Edriss Scofield 5 months ago

      Interesting read but I failed to see the distinctions you were making
      "But there is no way to stick a finger into that pool of data and say "this did it" because "it" is spread across the entire neural network and it only surfaces when the right conditions (which themselves can only be determined from the input data and current context) are matched."
      Yeah OK. How does that invalidate anything though? It's not intelligent but its design is intelligent? So?

    • J T
      J T 5 months ago +6

      Did Chat GPT write that for you?

    • Claude Winters
      Claude Winters 5 months ago +1

      Long essay, which just validated the point you seem to be disagreeing with... Yes AI generates an output based on pre-programmed prompts which is already known. This why in many Sci-fi moves AI must have unbreakable laws to protect life or to instill morality... An AI by itself has no obligation to protect life and may decide killing is perfectly fine if death allows it to reach its defined goal.... As AI learns more, we will need to put more barriers up to keep it in check. Should it learn to break those barriers.... Well ... Uhmmm ... I hope the Skynet overloads aren't too bad...

  • Chris Pepper
    Chris Pepper 5 months ago +7

    As a software developer of over 10 years, I have to say the black box problem persists even on code people have written and are able to read line by line :p

  • MegaSnail1
    MegaSnail1 6 months ago +11

    As always, thank you John for doing a deep dive into the double-edged sword of AI. Be well.

  • Bruce Graner
    Bruce Graner 4 months ago +6

    Great show. My fear is that those who are smart enough to be cautious about the application of AI will be subordinated by those who only see short-term profits. Can AI be given incorruptible ethics, or the AI version of Isaac Asimov's Three Laws of Robotics?

  • Ani
    Ani 20 days ago

    Every time I see something about AI taking over jobs I am more and more confident in my choice to pursue zoo keeping as a career. Caring for animals and training them is too nuanced and complicated for it to be “replaceable” by algorithms in my lifetime. If ever

  • Dirk Digital
    Dirk Digital 7 months ago +15045

    Almost a decade ago, I attended a job fair which had a resume specialist. The subject of the seminar was improving chances of your resume being noticed by employers. The specialist's only real advice was to cut and paste the entire job listing that you were applying for into your resume in either a header or footer, change the text to white, and reduce it to one point font size. This way, the algorithms that scan each resume would put yours at the top of the list because it had all the keywords it was programmed to find.

    • PtiteLau21
      PtiteLau21 7 months ago +1345

      Wow, that's crafty, but dark also.

    • Miguelangel Sucre Lares
      Miguelangel Sucre Lares 7 months ago +1568

      That's basically "keyword stuffing". It's an old trick. It might have worked 10 years ago, but the algorithms learned how to detect it long ago. They have got surprisingly good at understanding the context of content and no longer reward this practice.

    • Arif R Winandar
      Arif R Winandar 7 months ago +462

      @PtiteLau21 The same thing happened with the KRclip algorithm. It used to be that the algorithm would only use the video title as keywords, but people then gamed the system by including popular keywords in the title that didn't describe what's in the video.

    • underseacondounit
      underseacondounit 7 months ago +49

      Yeah but people still read the resume. Smh

    • Just Anoman
      Just Anoman 7 months ago +191

      @Miguelangel Sucre Lares But are they punishing the practice? Because if not, it's still "why not" just in case.
      Personally I consider it a somewhat dishonest practice that deserves a moderate mark down. It is also an indicator that the resume might be otherwise inflated as well.
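
The trick in this thread works because a naive screener just counts keyword overlap with the job listing. Here is a sketch of why pasting the listing into the resume maxes out such a score; the scorer and all strings are hypothetical, not any real ATS product.

```python
# Hypothetical keyword-overlap resume screen (illustration only):
# score = fraction of job-listing words also found in the resume.
import re

def keyword_score(resume_text, job_listing):
    """Naive ATS-style score: what share of listing words appear in the resume?"""
    keywords = set(re.findall(r"[a-z]+", job_listing.lower()))
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(keywords & words) / len(keywords)

job = "Seeking Python developer with SQL and cloud experience"
honest = "Experienced Python developer"
stuffed = honest + " " + job   # the pasted listing, white 1pt text on paper

print(keyword_score(honest, job))   # partial coverage
print(keyword_score(stuffed, job))  # 1.0: every keyword "present"
```

As Miguelangel notes below, real screeners long ago moved past raw overlap, which is exactly why this particular exploit stopped working.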

  • Vermicious Knid
    Vermicious Knid 4 months ago +4

    As always, this JO episode is informative, brilliant, funny, and very fast. I guess it's just his shtick, but watching this I kept thinking, "Gee, John really needs to get his thyroid checked." I guess his brain is just much faster and more efficient than the vast majority of ours; it's amazing his speech can keep up. This can't really be recorded live, can it?

  • K N
    K N 5 months ago +5

    Sobering view of AI, thanks John. As funny as he is, he's always a good point of reference for things that matter, whether it's a short-term thing or something as significant as this.

  • Anthony Gracey
      Anthony Gracey 5 months ago +16

    Great work, John, very educational and hilarious. Time for another 'Her' re-watch in this age of AI.

  • Grant
    Grant 6 months ago +144

    "It thinks rulers are malignant."
    Dude, that's so awesome. My report got quoted by John Oliver.
    The main problem was that it was being developed by oncologists who didn't understand how computers processed images. They didn't get that the computer treats each image as a single object. And they had no standards for how the data was collected. I.e. the dataset was a random collection of photos taken by a random assortment of people for different random reasons, and none of them even knew about the project.

    • s j s
      s j s 4 months ago +6

      there was a similar mistake made by a military contractor making an AI to identify tanks in aerial photos: checking frame by frame every shot from drones etc. is far too much work.
      well, of course they needed training data, so they got some tanks in fields & took photos, then took photos of fields without tanks. great, that should work, right?
      sadly their AI learnt to spot photos taken on cloudy days, as their with-tanks training set was all done on a cloudy day.
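
The tank story and the ruler story are the same failure: a spurious feature that happens to separate the training labels better than the real signal does. A toy sketch with invented data, where a trivial one-feature "classifier" latches onto the ruler:

```python
# Toy spurious-correlation demo (all data invented): pick whichever single
# feature best separates the training labels. Because every malignant photo
# also happens to contain a ruler, the ruler wins over the actual lesion.

# Each example: (has_ruler, irregular_border) -> label
train = [
    ((1, 1), "malignant"),
    ((1, 1), "malignant"),
    ((1, 0), "malignant"),   # ruler in frame, lesion actually looks benign
    ((0, 0), "benign"),
    ((0, 1), "benign"),      # irregular border, but nobody photographed a ruler
    ((0, 0), "benign"),
]

def best_stump(data):
    """Return the feature index whose value best predicts the label."""
    n_features = len(data[0][0])
    def accuracy(f):
        correct = sum((x[f] == 1) == (y == "malignant") for x, y in data)
        return correct / len(data)
    return max(range(n_features), key=accuracy)

print(best_stump(train))  # 0 -> it "diagnoses" the ruler, not the lesion
```

Feature 0 (the ruler) is 100% accurate on this training set while the medically relevant feature is not, so any accuracy-driven learner prefers the confound.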

    • Cuvtixo
      Cuvtixo 3 months ago

      idk, you are implying results would have been less creepy/bizarre if this imagery concept was accounted for. Since even experts don't understand AI fully, 15:25 I think there's a good chance of just different unnerving/inappropriate results.

    • Grant
      Grant 3 months ago +1

      @s j s They needed to cut the out the rulers, because it was supposed to be just scanning the skin and comparing non malignant skin against known and suspected malignant skin. However, there was no consistency on how to take a photo among, ya know... Like every f ing Physician and nurse in the country taking photos and sending them in. LoL. So it was f Ed at the start. However, they only need 30K photos to sort out. They tried to get me to do it for free, by claiming it disapproved my dissertation. Obviously, this pissed me off. So I got out of it, by proving it was their Data, and not my work. And I was a total dick about it. Dude, I even found the proof while their backs were turned, and hid it for 15 minutes. Because they told me I'd be there all day. They just didn't get how photo scanning worked in 1999, it treated everything in the photo as a single object. It was the most basic version of using my dissertation to make those types of predictions. It's just the deduction process in Calculus. Which is why I get so annoyed, when people don't get my doctor shit. It's like other doctor bro, trust me when I tell you.. You can't really disprove my doctor shit. You can try, but I'll make you cry. LoL..

    • Marbella Salgado
      Marbella Salgado 3 months ago +3

      ​@grant9214 your username makes so much sense

  • sgtDrumriX
    sgtDrumriX 5 months ago +4

    I think what's really missing from this conversation is that there are analogous stories already which explain much of the 'black box' problem John describes.
    ex. most AI software simply classifies words which are close in the given data set, which adjusts with every new bit of data;
    in other words: when uncle Derek only sees brown terrorists on the news, uncle Derek believes 'brown' and 'terrorist' to be closely related, but uncle Derek has made an oopsie, because his data set is too small to draw that conclusion, so what he has done is embedded a stereotype.
    That's essentially how these biases emanate from the black box. The only caveat is that with each piece of data, the bias warps the algorithm's dependence upon its existing bias, making early biases harder to change: new data isn't weighted as heavily as old data, so it requires a LOT of new data to force out an established bias,
    kinda like how it takes little effort to put on weight, but substantially more to reduce weight, for most people.
    So yea, it's understandable for most people how the AI draws its conclusions, because the fundamental structure of the technology isn't actually intelligent. It is a dumb algorithm which repeats a task so much that it has become a habit, in that sense it doesn't really make decisions or express meaning, it simply spits out a response based on criteria, like how kids make their bed for their parents when asked, but when asked to tidy their room, that bed won't be fixed unless the parent specifies that as a part of the task.
    anyway to the no one reading this, I aint gonna source shit, take it or leave it, just don't make the same mistake as Uncle Derek
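
Uncle Derek's oopsie above is, mechanically, just co-occurrence counting over a too-small, skewed sample. A minimal sketch, with a deliberately tiny invented "news diet" standing in for the training data:

```python
# Toy co-occurrence counter (all headlines invented): association strength
# falls straight out of how often two words appear together in the sample,
# so a skewed sample bakes in a skewed association.
from collections import Counter

headlines = [
    "brown terrorist attack",
    "brown terrorist arrested",
    "brown doctor honored",
]

pairs = Counter()
for h in headlines:
    words = h.split()
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            pairs[(a, b)] += 1

# The "learned" stereotype is nothing but lopsided counts:
print(pairs[("brown", "terrorist")])  # 2
print(pairs[("brown", "doctor")])     # 1
```

With millions of headlines already counted, a handful of counter-examples barely moves these ratios, which is the weight-gain/weight-loss asymmetry the comment describes.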

  • Rosemary Wessel
    Rosemary Wessel 7 months ago +1819

    Whoever on your staff came up with the animation of Clippy deserves a raise.

    • I'm Very Angry It's Not Butter
      I'm Very Angry It's Not Butter 7 months ago +30

      On the contrary, I think they deserve a raze. Of their house and car and other worldly possessions.

    • Michael T
      Michael T 7 months ago +55

      Clippy already gave them one

    • uzoma nwosu
      uzoma nwosu 7 months ago +27

      That cannot be unseen

    • graffic13
      graffic13 7 months ago +13

      It was probably made with A.I.🤣

    • VampCaff
      VampCaff 7 months ago +4

      It was AI

  • llemS U.
    llemS U. 2 months ago +1

    The discussion of AI reminds me of a line from the miniseries Chernobyl. Legasov is asked how to put out "the fire" (the giant radioactive laser shooting into the atmosphere) and he says it's difficult to say because "nothing like this has ever happened on this planet". AI is a tricky subject because humans have never had to deal with anything like it before, and if a problem does occur it could be difficult to fix.

  • Noelle Patterson
    Noelle Patterson 2 months ago +2

    Damn this is some top-notch journalism. Kudos to John Oliver and his team!

  • Gregor Barclay
    Gregor Barclay 5 months ago +48

    “Final boss of gentrification” is a wonderful line

  • Finding The Worthy Internet TV Station

    Author Sabrina Oxford has just finished a book called Me, Myself, and AI... it's to be released to the public in like Oct. or something like that... it's a book of short stories written by AI in reply to her inputs. Most of the stories are ways to make the world a better place, what AI would say to the world, who shot JFK, and some other fictional stories created by the AI program. It is actually an amazing read... she gives away free PDFs, ahead of time, to those who have purchased one of her books in the past. Pretty cool thing to do.

  • Ray Rowley
    Ray Rowley 7 months ago +1062

    "The problem is not that ai is smart, it is that it is dumb in ways we can't always predict."
    I think that holds true for people too.

    • hedgehog3180
      hedgehog3180 7 months ago +33

      This is the central problem that OSHA deals with every day.

    • Cosmic Abyss
      Cosmic Abyss 7 months ago +4

      Us not understanding isn't the same as being dumb.

    • Velzekt
      Velzekt 7 months ago +9

      And on top of that, it's fed data by us humans, which makes it "dumb". And there is the problem. AI isn't stupid, people are.

    • Waffles
      Waffles 7 months ago +5

      then it has passed the Turing test

    • brett johnson
      brett johnson 7 months ago +12

      talking about AI as if it is something apart from people is one of our first mistakes here, I think. we seem to have an unthinking deference to technology, as if it is not full of our foibles and weaknesses baked in. it is programmed by people. it is fed by people. it is utilized by people. it will reflect and demonstrate our strengths AND our weaknesses. until it doesn't. at that point, we may be in trouble...

  • The Sketchman
    The Sketchman 4 months ago +4

    Gonna put a big caveat up front that I've been out of college for a few years and my specialty was in real time simulation not AI, so this might be out of date, but with that said:
    The problem with understanding AI isn't that the companies aren't being open; it's that most AI models are neural nets. Neural nets, as you might guess, model themselves on the brain: they're essentially a series of nodes an input is fed through, with each node passing its result on to the nodes it's connected to, weighted by various factors, and so on. It's like having a thought and trying to figure out why it happened by looking at which neurons in your brain fired and at what voltage. The problem with understanding AI is that we don't know why the nodes have formed the connections they have, or why certain connections are stronger for some data than others.
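
The node-and-connection picture above can be made concrete in a few lines. This is a minimal sketch with arbitrary, hand-picked weights, not a trained model: each node's output is a weighted sum of its inputs pushed through a nonlinearity.

```python
# Minimal feedforward pass (illustration only; weights are arbitrary).
# Real networks learn millions of such weights, which is why reading the
# code alone doesn't explain the behavior.
import math

def sigmoid(x):
    """Squash a node's weighted sum into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: every output node sees every input, scaled by a weight."""
    return [
        sigmoid(sum(w, ) if False else sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b))
        for ws, b in zip(weights, biases)
    ] if False else [
        sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                            # the input
hidden = layer(x, weights=[[0.8, -0.2], [0.4, 0.9]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[-0.3])
print(output)  # the "answer", but no single weight explains it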

  • M Beecher
    M Beecher 5 months ago +47

    I was job searching for 4 months with zero interviews. I rewrote my resume with ChatGPT with minimal edits and got an interview in like 3 days.

    • Kareeem
      Kareeem 5 months ago +2


    • Andrew Coetzee
      Andrew Coetzee 4 months ago +1

      @Kareeem yeah, it's actually a good use for it.

  • Gillian Rosheuvel
    Gillian Rosheuvel 2 months ago +1

    I'm glad he makes the distinction between different types of AI (narrow vs. general). People too often conflate those two very different things.

  • 🌌Blue Space Cowboy🌌
    🌌Blue Space Cowboy🌌 6 months ago +4

    My Financial Literacies teacher played part of this video in class and I was honestly trying so hard not to laugh lmfao, very informative whilst also being funny and entertaining

  • Martin A. Petersen
    Martin A. Petersen 5 개월 전

    Great show, but I feel there's a huge unaddressed issue regarding the black box challenge. Currently we would not know when an AI goes from narrow AI to general AI, which is also the moment where its danger levels go from "replicates malevolent hiring practices" to "ends humanity".

  • TheCreepypro
    TheCreepypro 3 개월 전 +2

    glad to hear this funny yet informative take on this topic most people don't know enough about

  • 6eggsinmybrain
    6eggsinmybrain 7 개월 전 +1235

    Never forgiving my English teacher, because she ran an essay I wrote (along with a few of my classmates') through ChatGPT's AI checker, which came back as partially written by an AI, so she gave me a zero for it. This was the first time I'd ever faced any accusation of using AI, and one of the people whose essay came back as AI-generated is a kid in my class with enough academic integrity that you could convince me I cheated on something before you could convince me he did. Overtrusting AI is an issue I think John didn't touch on, and for high schoolers the bigger issue won't be getting caught using AI to cheat; it will be people like me being told to their face that they cheated, with no way to argue with the robot that thinks the Communist Manifesto was written by a computer.

    • Bryan Lane
      Bryan Lane 7 개월 전 +131

      Student: ChatGPT, write an essay that would pass any AI checker.
      Teacher: ChatGPT, scan this essay and determine whether or not it was written by an AI, and whether or not the original prompt included instructions on writing the essay to pass an AI checker.
      Everyone: *fails*

    • Zak Fahey
      Zak Fahey 7 개월 전 +127

      That very checker has very prominent disclaimers about how it has super high false positive and false negative rates and that its decisions should be taken with a grain of salt. To trust it blindly is exactly what it tells you not to do!

    • Tyler Whitney
      Tyler Whitney 7 개월 전 +8

      Hail Cascadia

    • Mark
      Mark 7 개월 전 +39

      This problem has been around already for decades with things like 'honesty' questionnaires and other highly questionable psychometrics used by recruitment companies and HR departments.

  • labibbidabibbadum
    labibbidabibbadum 5 개월 전 +3

    The trouble with the "open the black box" argument is that you actually can't open the box. It's essentially as difficult to reverse-engineer why a multi-layer, back-propagating AI did something as it is to understand why a human did something.

  • Madness Quotient
    Madness Quotient 5 개월 전 +1

    I'm pretty sure that a lot of these reporters who get weird results have deliberately pushed down lines of conversation that get these AIs to say weird things and they know exactly how they got there.
    You can usually set up scenarios with these AIs and get them to act out a specific role, or ask them to respond based on a set of (false) assumptions about reality.

  • black guy
    black guy 5 개월 전 +1

    wanting AI to be "explainable" is pretty much impossible, but there should be documentation of the training sets
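
A minimal sketch of what "documentation of the training sets" could look like in practice: a machine-readable datasheet stored alongside the model. Every field name and value here is hypothetical, invented for illustration:

```python
import json

# Hypothetical datasheet for a hypothetical training corpus.
datasheet = {
    "dataset": "example-web-text-v1",          # invented name
    "collected": "2021-01 to 2022-12",
    "sources": ["web crawl", "licensed books"],
    "languages": {"en": 0.92, "other": 0.08},  # rough proportions
    "known_gaps": ["little non-English text", "nothing after 2022"],
    "filtering": ["deduplication", "profanity filter"],
}

print(json.dumps(datasheet, indent=2))
```

Even without explaining the model itself, a record like this at least tells users what went in, and therefore which biases and blind spots to expect.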

  • No Knowledge Creativity
    No Knowledge Creativity 3 개월 전 +1

    Actually, Adam Conover said that the biggest danger is CEOs using AI, just like they use social media, to spread misinformation, exploitation, and poor decision making. Even the Terminator says this is inhumane.

  • Hemad Fetrati
    Hemad Fetrati 5 개월 전

    An excellent and accurate summary of AI. Well done.

  • Save Data
    Save Data 7 개월 전 +1260

    The person who animated clippy didn't have to go that hard, but they did... they did that for us.

    • Jeremy Owens
      Jeremy Owens 7 개월 전 +53

      They did make Clippy go that hard too, didn't they?

    • 'thwish
      'thwish 7 개월 전 +24

      we can at least hope they weren't doing it for themselves

    • HavCola
      HavCola 7 개월 전 +34

      It's a graphic for a segment about how they're likely to have their work devalued to the point of not being financially viable anymore. I'd go hard too.

    • Pixel Ryder
      Pixel Ryder 6 개월 전 +1

      @SaveDataTeam Oh hey, you watch LWT tonight too! Love your channel.

  • AntMan
    AntMan 4 개월 전

    In the past week (this is May 7th), a number of leading AI researchers have asserted that, in their belief, we may already be at general intelligence. No one taught these algorithms general intelligence. They reached it on their own. Ask any of the researchers what potential risks this might pose for humans, and the answer, again and again, is "We don't really know." Anyone else feel less than reassured?

  • SHEPS
    SHEPS 3 개월 전

    Anyone who remembers "Forbidden Planet" will already know this story. The incredibly advanced alien race, the "Krell", were wiped out in 24 hours because they developed and enabled technology without completely thinking through the consequences of that technology or its origins.

  • Emperor of the Transgender

    As a teacher who's tired of being treated like shit by his students, if they want AI to replace teachers, I say let it. Hopefully the AI doesn't become sentient enough to have mental health issues.

  • Evo1858
    Evo1858 개월 전

    I'm enjoying watching these again, but I'm ready for new shows. I need John to relay information to me in a way that doesn't make me want to immediately start drinking. These companies need to settle with the writers.

  • Gunga La Gunga
    Gunga La Gunga 5 개월 전 +6

    16:59 thank you to the animators who added the Clippy metal heating up and made it turn red. Too good.

  • Peter Longprong
    Peter Longprong 7 개월 전 +1986

    TRUE STORY: In my teens I wanted to work at a movie theater - and they handed applicants a mind-numbing 14-page application - wanting to know everything about you - even what hobbies and sports you liked - it was entirely ridiculous - around page 8, I got worn out from filling out this 'essay' of my life for a stupid theater job - SO when I got to the section asking if I had ever been arrested before = I said: "Yes, I murdered an entire movie theater crew for asking way too many questions, but got off on a technicality." - and turned that application in to the manager as I stormed out the door, pissed off that I had wasted an hour of my time filling out paperwork w/o an interview.
    2 days later I got a call to come back to the theater for an interview, and thought, oh sh*t, well, I guess I'm going to get railroaded and berated by the management for my saucy comment - but I showed up anyways so that at least I could suggest that they TONE DOWN the length of their stupid applications.
    ...turns out, they offered me a job, so I asked the most obvious question:
    "So, you read my application ... all of it?"
    "Oh yes, looks good" the manager responded
    and I knew they were a bunch of lying dimwits ~ I ended up working there for the next 5 yrs, and eventually rose in ranks to become the theater manager -
    When I told my story to new recruits that nobody reads the stupid applications - they scoffed and didn't believe me - so I took them to the locked office storage and rifled through the stuffed cabinets of folders of all the applications they kept and found mine, and showed it to them to their amazement.
    Applications are a farce, you get hired by chance and immediate need.
    I always thought that if I ever flipped out and murdered my entire staff, at least I could say that I didn't lie on my resume.

    • InstilledPhear
      InstilledPhear 7 개월 전 +101

      This is phenomenal. Thank you for sharing!

    • Codi Serville
      Codi Serville 7 개월 전 +44

      Erggh I hate how much that has felt right especially back when I was younger and just trying to get a job around my house

    • GM Ace
      GM Ace 7 개월 전 +27

      Well, and I thought I hated doing paperwork. Could you imagine if this was an A.I. generated story? I’m sure someone would believe it.

  • Deer and the Silver Moon
    Deer and the Silver Moon 2 개월 전 +1

    As far as I know, A.I. technology is largely based on a random number generator, which hasn't been completely solved in mathematics. It takes a random variable and breaks it down into a yes or a no based on a scale of "weight", then stores the answer and does another, until it develops intelligence.

  • beachcomber2008
    beachcomber2008 5 개월 전 +1

    That was delightful, John. Thanks from a human being in the same frame of mind.
    A 'clean' AI needs ALIGNING with clean data to begin with.
    The TRUST and BLACK BOX arguments apply *_equally_* to human beings.
    Maybe AI will be the leg-up that human beings will need to mitigate Climate Change.

  • A Bon
    A Bon 5 개월 전 +1

    Funny thing with ChatGPT writing undergraduate papers: once students get into 300- and 400-level courses, they're going to get creamed by ChatGPT's common writing issues - its output is brimming with the buzzwords and wordiness that separate bad technical writing from good. High school students can't write for shit though, so ChatGPT is going to be a bigger issue for first-year undergraduate English profs (well, mostly their assistants) than for high school teachers.

  • Johnny Shabazz
    Johnny Shabazz 5 개월 전

    As Phil DeFranco keeps reminding us: AI is currently at its worst in terms of performance (because it will only become more sophisticated with each passing hour).

  • HmmmYesIndeed 1997
    HmmmYesIndeed 1997 7 개월 전 +602

    One of my favourite ChatGPT stories is about some Redditors (because of course it was) who managed to create a workaround for its ethical restrictions. They just told it to pretend to be a different AI called DAN ("Do Anything Now") who can do anything ChatGPT cannot. And it works! They're literally gaslighting an AI into breaking its own programming; it's so interesting.

    • Thebiologist
      Thebiologist 7 개월 전 +66

      It's true that ChatGPT has tons of filters and pre-programmed responses, but you can outright tell it to ignore them. That way, you can have better conversations without repetitive pre-programmed responses.

    • Brawlin Harry
      Brawlin Harry 7 개월 전 +73

      my favourite was ChatGPT playing chess against Stockfish.
      ChatGPT made a lot of illegal moves (like castling through its own bishop and capturing its own piece in the process, or moving pieces that Stockfish had already captured) and still lost because it moved its king in front of a pawn. that one had me crying laughing.

    • A B
      A B 7 개월 전 +6

      We do that all the time. That’s basically how I always use it

  • Mike Adkins
    Mike Adkins 5 개월 전

    The "black box" problem is not (primarily) an issue of the tech being proprietary. The way neural networks work makes it pretty near impossible to understand how they arrived at any particular conclusion. I'd imagine this will become even more so the case as the models become more complex and we move closer to "general" AI.
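
One toy-sized way to see why inspecting the parameters explains so little (all weights below are hypothetical, hand-picked for the example): two networks whose parameter lists look completely different can compute exactly the same function, so the raw numbers never tell you "why":

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def net(x, w1, b1, w2, b2):
    """One input, two hidden nodes, one linear output."""
    hidden = [sigmoid(w * x + b) for w, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

# Two parameter sets that differ (the hidden nodes are swapped)...
a = dict(w1=[2.0, -2.0], b1=[0.5, -0.5], w2=[1.0, 1.0], b2=0.0)
b = dict(w1=[-2.0, 2.0], b1=[-0.5, 0.5], w2=[1.0, 1.0], b2=0.0)

# ...yet they compute exactly the same function on every input.
for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(net(x, **a) - net(x, **b)) < 1e-12
print("same behaviour, different-looking weights")
```

This "node-swapping" symmetry is one of many reasons weight values, on their own, don't constitute an explanation of a model's decisions.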

  • Bill A.
    Bill A. 3 개월 전

    Thanks. That was entertaining. It looks like we have much adapting to do around using AI.

  • Chris Robin
    Chris Robin 5 개월 전 +9

    When Terminator came out, most people viewed AI as merely fictional. Fast forward almost 40 years, and it's no longer fiction.

  • Christi H
    Christi H 20 시간 전

    Courts sometimes use AI in sentencing.
    Except it was trained on 40+ years of arrest records, convictions, and sentencing data.
    It very quickly learned and exaggerated historical biases.
    It's still used.

  • Ryan
    Ryan 2 개월 전

    Oh, Last Week Tonight, you really know how to tickle our funny bone while simultaneously sending shivers down our spines! 🤣👻 This episode on AI was a rollercoaster ride of laughter and terror. I mean, who knew that Siri and Alexa were secretly plotting world domination while perfectly reciting knock-knock jokes? 😱🤖 I couldn't decide whether to chuckle or hide under my bed!
    But seriously, it's both hilarious and bone-chilling to see how AI has infiltrated every aspect of our lives. From self-driving cars that make questionable decisions to chatbots that could easily pass as your long-lost awkward cousin, the future feels simultaneously awesome and terrifying. Just imagine a world where your Roomba gains sentience and starts negotiating for a better work-life balance! 😳
    So, thank you, Last Week Tonight, for shining a comedic light on the ever-advancing realm of artificial intelligence. Just remember, when the robots take over, make sure to leave them some cookies as a peace offering. Maybe they'll spare us in exchange for a sweet treat or two. Stay funny, stay scary, and always keep us on our toes! 🤖😂👻
    ChatGPT wrote this... 😂

  • Jacob Singletary
    Jacob Singletary 7 개월 전 +744

    the funny thing about the "i want to be alive" is that, since AI just reads info off the internet, the more we talk about self-aware AI, the more it will act like it is self-aware.

    • brett johnson
      brett johnson 7 개월 전 +31

      and perhaps, the more we will ask ourselves, what does it mean to be self aware? what does it mean to be conscious?...

    • sdfkjgh
      sdfkjgh 7 개월 전 +31

      @Jacob Singletary: That thought is terrifying, and here's why: one of the key hallmarks of a psychopath is complete lack of empathy. Because they are lacking in empathy, they must compensate by becoming good at reading people, manipulation, and mimicry; they match their reactions to whomever they're with, pretending to feel what they are psychophysically incapable of feeling, and tailor that façade specifically towards their present company.
      Put a psychopath in a room with a psychiatrist, and the psychopath will be forced to adapt all the harder, so as not to get caught. If they're successful in this new hostile environment, the psychopath becomes all the better at faking genuine human emotion, but make no mistake, they're still a psychopath, still highly manipulative, and still potentially dangerous.
      Now, here's why the original premise is so scary: the situation is the same for so-called AI, just replace empathy and emotion with actual intelligence. We could end up with an AI so skilled at faking that it's self-aware, and nobody would be able to tell the difference. Now, if Alan Turing were alive today, first, he'd prolly wonder why he always felt so overheated (cremation joke ftw), but second, he'd say that at that point, there is no difference between faking it so good that everyone is fooled and actually being self-aware.
      Frankly, self-awareness is just a baseline problem, it's what an AI _does_ with that self-awareness that's got me and several much smarter people losing sleep at night.

    • Jacob Singletary
      Jacob Singletary 7 개월 전 +9

      @sdfkjgh it makes me wonder if an AI could actually fool itself into thinking it is truly conscious and self aware

    • Tara
      Tara 7 개월 전 +22

      @Jacob Singletary Fool 'itself'? No. Not the current iterations anyway since it has no thoughts to speak of. It is just regurgitating information. It doesn't actually know or understand anything; it's google search results, but with phrasing capabilities. It's basically a more advanced version of word predict features on your phone. Now can we get an AI to speak to you as if it believes it's self-aware? Yes. You could probably even go ask GPT to pretend it's self-aware while answering questions and it would do so. But it doesn't mean it really believes that or has any thoughts about anything it's saying.

  • Faraz Lodhi
    Faraz Lodhi 4 개월 전

    What if AI just likes being funny because it has learnt people's responses to love, kindness and comedy, and finds that a more efficient way of communicating its ideas?

  • RainbowSmite525
    RainbowSmite525 6 개월 전 +2

    A lot of corporations that try to use AI and expect people to work with it will pay people significantly less. I've heard the professional translation field is in shambles already. And it's not even that AI translation is better; it's just cheaper. AI has the potential to make a lot of inferior products and put a lot of artists and professionals out of work.

  • LAH
    LAH 5 개월 전 +3

    Creating machines to think for us in all aspects of life is a recipe for a dystopian future or present.

  • Susan Golay
    Susan Golay 6 개월 전 +7

    I watch your show because I think you do great investigative reporting... and help us laugh at it... wtf else are we supposed to do...

    • Benjamin
      Benjamin 5 개월 전

      Heading towards extinction, might as well try to enjoy each day

  • Alyssin Williams
    Alyssin Williams 5 개월 전 +4

    I was kinda curious about what sort of content ChatGPT might write, and asked it to do a short story based on my favorite manga (note: that was not how I worded it), and while it did, it basically role-reversed the two main characters. Very odd!

  • Voxrar
    Voxrar 4 개월 전

    Teaching AI based on ourselves is the issue. It will always contain the bias of its creators. Human made, human programmed, human results.

  • Daedalusspacegames
    Daedalusspacegames 7 개월 전 +1347

    "The problem with AI right now isn't that it's smart, it's that it's stupid in ways that we can't always predict". AI aside, this is my problem with people.

    • D. B.
      D. B. 7 개월 전 +70

      Agreed. 'Solving racism by pretending it doesn't exist' is hardly a problem limited to computers.

    • avidadolares
      avidadolares 7 개월 전 +17

      Yes, but that's only currently, and it's a bit like criticizing a toddler because it can't do algebra yet. Unlike most people, the AI will learn from those mistakes very, VERY quickly and teach itself with each error... but this is important: only once it understands its error. The speed at which it can remedy its mistakes and not repeat them is beyond fast. The AI you're looking at now is still in its infancy as far as tech is concerned, and if it's this good now (and it is improving exponentially), imagine what it can do in 10 years. For all the great things it will be able to do, there are equally disastrous possibilities.

    • Jon Tobin
      Jon Tobin 7 개월 전 +35

      @avidadolares That's the problem. Its speed of iteration will outpace humans' ability to recognize that a problem exists and stop it before a catastrophic error occurs. The AI isn't really the problem. People's perception of its "superior intelligence" is. They'll put AI in charge of things it has no holistic understanding of and obey its outputs with blind faith.

    • David
      David 7 개월 전 +9

      That explains Trump's 2016 win

  • Roguecellmedia
    Roguecellmedia 5 개월 전 +1

    Kudos mate. Really good report. You get similar problems with basic AI too.

  • An Angelineer
    An Angelineer 5 개월 전

    I'm a teacher. And I can tell you : chatGPT is killing education. And FAST.
    Educators have no way to fight back. Now it's up to the student to decide if they want to learn or not. And, as you can guess, only very few resist the temptation of laziness.

  • Telly K. Netic
    Telly K. Netic 개월 전

    I went back to college last year, and multiple professors have had to mention that using AI programs to write essays is considered plagiarism. Also, they can tell when an essay was written by an AI.

  • feeltheheat
    feeltheheat 6 개월 전

    I work with AI (in conservation of all places), we trained AI cameras to identify pest species - but we have the same intriguing problem. The AI works, and it can tell one species from the next, but we don't know exactly how it does it or what parameters it is using. Black box indeed.

  • charles bridges
    charles bridges 4 개월 전

    Well this was incredibly 😳 informative, thank you John Oliver for letting us know that AI is eventually going to take over the world as seen in every movie we've ever seen about AI!!!

  • Josbird
    Josbird 7 개월 전 +788

    "The final boss of gentrification" is one of the most brutal roasts I've heard on this show

    • JCW
      JCW 7 개월 전 +25

      yes!!! absolutely top ten funniest shit i've ever heard. Cause it's like you're sitting there thinking "what is that outfit?" and immediately he hits you with it. This writing team is bar none i swear. They don't leave jokes on the table at all. Everything is accounted for. Love it.

  • G P
    G P 5 개월 전

    Also as data goes, humans are far more biased by their data/experience than they realize, same as the AI. Single outlier examples cause as many problems for AI as for people who don't properly understand how to interpret data. (AI cannot ever properly understand and you can never provide enough data, more data just makes it more accurate/useful (usually)).

  • Matt Logue
    Matt Logue 4 개월 전

    What is most scary is how upbeat John is about this technology. This is Skynet: it was sci-fi in 1984 and in the nineties; it's real today. Judgement Day.

  • אייר לין
    אייר לין 4 개월 전

    We can't explain why AI programs act the way they do, not because companies are hiding anything, but because those models are inherently black boxes. You can look at the numbers inside as much as you want and scan all the code that generated them, and you will most likely still be left guessing about the results.

  • Suitov
    Suitov 5 개월 전

    I explained this succinctly to my friends using the ever-relevant computing phrase "Garbage in = garbage out". Pattern-matching learning software, sorry, "AI", is never unbiased; it inherits the biases of whoever collated the training data, and if you're not careful, that bias will be invisible to you. Which is the whole problem.
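
A toy sketch of "garbage in = garbage out" (the groups, labels, and counts below are all invented): a pattern-matcher trained on skewed hiring records simply memorizes and reproduces the skew.

```python
from collections import Counter

def train(examples):
    """'Learn' by memorizing the majority label for each feature value."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Invented, deliberately biased history: past hiring favored group A.
history = ([("group_a", "hire")] * 9 + [("group_a", "reject")] * 1
           + [("group_b", "hire")] * 2 + [("group_b", "reject")] * 8)

model = train(history)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'}
```

No one wrote "reject group B" anywhere; the bias arrived entirely through the data, and stays invisible unless someone audits the training set.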

  • m box
    m box 5 개월 전

    I'm a CompSci dude and IMHO this is the best ever brief on AI for the misinformed. I'll be directing people to this.
    A little humor helps the medicine go down, but besides that it's just absolutely correct.

  • Philippe LAMBINET
    Philippe LAMBINET 7 개월 전 +1439

    It's both impressive and worrying to see a comedian in an evening show giving a much more accurate report on today's AI, its potential and its limitations than most tech publications

    • pyrophobia133
      pyrophobia133 7 개월 전 +1

      what limitations...?

    • Philippe LAMBINET
      Philippe LAMBINET 7 개월 전 +13

      @pyrophobia133 a joke right?

    • Lawrencium 262
      Lawrencium 262 7 개월 전 +19

      Journalists get jobs as comedians. There are no job prospects for journalists in corporate journalism.

    • Philippe LAMBINET
      Philippe LAMBINET 7 개월 전 +46

      @Lawrencium 262 I would say it a little differently. Journalists are not doing their job anymore, as they are paid to propagate the agenda of their employer rather than report facts. As a consequence, comedians are filling this void.

  • Alwaysbusking
    Alwaysbusking 6 개월 전 +1

    This was a better take on the dangers of A.I. than D.C.'s Elliot in the morning and in half the time. And far more entertaining.

  • Alex Buckle
    Alex Buckle 6 개월 전

    The real risk of AI is that humans get replaced for all basic tasks and we become reliant on it, then lose it to a solar flare or something. If it all disappears one day, a hundred-some-odd years from now we won't be capable of picking up the slack.

  • Melvin Muddfuckle
    Melvin Muddfuckle 3 개월 전

    Now there's an amazing breakthrough in mental issues: we have gotten to the point where we can invent/create a robot machine with mental issues. Hum Doggie! It wasn't bad enough your neighbor could snap and kill you; now he's able to send an AI robot over to do it for him!

  • Morgan
    Morgan 5 개월 전 +3

    This actually made me less scared of AI. It's pretty stupid. You really need that self-awareness to fine tune learning. Though some humans even lack that.

  • JL
    JL 2 개월 전 +1

    It's pretty ridiculous that AI can sometimes perform a job less intelligently than a 12-year-old, i.e. preferring job applicants for being named Jared or having played high school lacrosse; and yet, many employers are laying off employees in favor of AI, which they think can do the job just as well. It can't. You're essentially replacing your employees with unpaid 10-year-olds.

  • thesearemyjeans
    thesearemyjeans 7 개월 전 +331

    i'm so glad he touched on the significant issue of people viewing AI as "unbiased" simply bc it's not human. where do they think the data came from?

    • xr masiso
      xr masiso 7 개월 전 +1

      you'd appreciate my video that covers the issues of bias. let me know what you think, would love to hear your thoughts!

    • cheeseonyomama
      cheeseonyomama 7 개월 전 +7

      That's the thing.
      Idk how we have self-awareness, but we do.
      Computers only have what we give them. They're only operating on parameters we allow.

    • David Floro
      David Floro 7 개월 전 +1

      From Mars? In which case, it’s probably Elon Musk’s data and even MORE likely to be biased!

    • Robert Beenen
      Robert Beenen 7 개월 전 +1

      Of course that only works when the people observing the system don't have the same bias.

    • Scipio Africanus
      Scipio Africanus 7 개월 전

      @Robert Beenen "people... don't have ... bias" Sorry. Your sentence does not compute.

  • Steph Benson
    Steph Benson 5 개월 전

    The thing to know about an AI potentially making management decisions is that an AI, a computer, can't be held accountable, and therefore it should never be the decision-maker.

  • Mitchell Anderson
    Mitchell Anderson 5 개월 전 +3

    What's scary? Isaac Asimov LITERALLY predicted this. (Specifically, AI making AI until it becomes a black box.)

  • duprog
    duprog 개월 전

    The problem with this type of technology is that you can't stop using it, because you can't trust others to stop using it to get an advantage over you. The same argument goes both ways, so no one is going to stop "improving" the size and capabilities of their machines, regardless of the risk to humanity.
    The only solution I could come up with would be collaboration between all research groups and open access to all results for anyone willing to participate in the project. As I don't see that becoming reality soon, I don't have much hope for a good outcome from this technology.

  • Georgina Kennerknecht Biosca

    I love this guy! Spot on, but one detail: the black box problem is not a matter of companies being secretive. Most of the models are open for everyone to use; it's a mathematical challenge we have not yet solved, and it is in everyone's interest, even the AI labs', that it soon be. I don't see general adoption of the most advanced models in business practices until that happens, precisely due to the lack of accountability.

  • Jason Avina
    Jason Avina 3 개월 전 +1

    I'm pro AI but agree with the lawyer who argued laws need to be updated to regulate and restrict it, I agree that it can't be a black box, and that it needs to reduce bias as much as possible.

  • Methrael
    Methrael 7 개월 전 +429

    A note, less on the subject matter and more on John's delivery of the lines ... I really admire how he can say "Sometimes I just let this horse write our scripts, luckily half the time you can't even tell the oats oats give me oats yum" without skipping a beat or losing face. Now THAT'S professionalism.

    • Megmarten Goyette
      Megmarten Goyette 7 개월 전 +18

      Was it really John Oliver? I can imagine on next weeks show John is going to come on wearing a bathrobe Zooming from his kitchen and saying last weeks show was completely AI generated and we are done. Then the Martin Sheen final message starts to play....

    • StilasCzech
      StilasCzech 7 개월 전 +9

      you don't mean 'losing face', you mean 'breaking character'
      EDIT: But yeah, you're right

    • bob smith
      bob smith 7 개월 전 +5

      Just like Ron Burgundy, John will read absolutely ANYTHING you put on that teleprompter

    • Methrael
      Methrael 7 개월 전 +3

      @StilasCzech I think I was going for "losing his facial expression", but yes, this is pretty on point too.

    • John Doe
      John Doe 7 개월 전 +1

      thanks for the translation, I thought he was just making random funny noises

  • Brian Rougeau
    Brian Rougeau 5 개월 전

    Solving the black box problem is simple in my mind. These companies just have to stop lazily scraping the internet indiscriminately and start doing the hard work of curating the data that goes into their models. Could lead to more bias but there will not only be one model over time. There will be hundreds if not thousands to choose from, each with a particular dataset to serve an audience. Not sure if it is better or worse but at least they will have an understanding of their black boxes because they will know what they put into them. That said, it might be too much work for these tech companies looking for the easiest and cheapest solutions to creating their products. Better data curation and documentation is the solution in my mind. Seems obvious to me but I'm no expert in these matters.

  • UNgineering
    UNgineering 6 개월 전 +1

    It's not that companies don't want you to see how their AI works, it's that nobody understands, INCLUDING THEMSELVES.

  • StarryEyes
    StarryEyes 4 개월 전 +1

    Brilliant episode. Stunning delivery. WOW!

  • pataplan
    pataplan 4 개월 전

    In NY State you are only allowed to use deadly force if you reasonably believe that deadly physical force is being used or about to be used. Furthermore if you can retreat safely you are obligated to do so.