This is exactly how AI will change your life in five years | Will Cain Podcast

Today, Will is joined by Neil Sahota, an IBM Master Inventor and United Nations A.I. Advisor, and Reid Blackman, Ph.D., Founder and CEO of Virtue and author of “Ethical Machines,” to discuss the potential benefits and dangers of A.I. They highlight the opportunities for increased efficiency and productivity but also emphasize the need for caution and ethical considerations.

Will: Reid Blackman, Neil Sahota, I'm so glad to have you today on the Will Cain Podcast to discuss what, in my estimation, is the only story there is to discuss. Whether we're talking about politics, culture, or religion, almost everything will be impacted, in my estimation, by artificial intelligence. I've had small conversations with both of you, so why don't we start this one by saying I'd love to hear your perspectives on whether you are an optimist or a pessimist about the effect of AI on humanity. I'd like to start with Neil.

Neil: All right. Well, I am definitely an optimist. AI, like all technology, is just a tool. It's about how we as people choose to wield it, and I for one believe that most people are good and will do good things with it.

Will: And Reid: optimist or pessimist?

Reid: Definitely not an unfettered optimist like Neil. One reason is that it's not just a tool where, if we use it well, we'll do okay, because there are lots of ways it operates where you unintentionally create a tremendous amount of harm. So there are the intentional harms, and then there are the unintentional ones. I definitely think there will be many people, thousands of people, benefited in amazing ways by it. I also think there will be many, many people deeply hurt by it.

Will: Okay. I think your initial answers have actually touched on what will be the bulk of the conversation: the effect that human beings have, the amount of control we truly will be able to exert over AI before it possibly begins to exert its control over humanity. So, Neil, it's interesting that you went philosophical very quickly and said you believe that most of humanity is good. In that answer, I think you acknowledged that one of the risks we run is who controls the AI, what their purpose is, and what their moral bearing is. Do you feel like, in the end, AI will simply be a product of humanity, and it's up to us to determine who are the good stewards of AI?

Neil: To a degree. Well, you're talking about AI as kind of a product, and, let's be honest, let's just call it that: AI is the child of humanity, and how we raise it is going to play a big role in what it winds up doing. Like every parent, we can't fully know. To Reid's point, yes, there are some things that bad people will do, but more importantly, there are some things actually going on that people don't fully understand. We have technologists working on things where they don't fully understand how it works anymore; we call that the interpretability issue. And that's the key thing: can we jump ahead of this? Are we trying? That's the challenge.

Will: Reid, you immediately brought up unintended consequences, that we're playing with something we can't even possibly imagine, regardless of our motivations, regardless of good people or bad people. We can't even imagine how this child grows up.

Reid: Yeah, that's right. I don't think that a lot of the bad stuff that has happened with AI, or that can happen with AI, is because the data scientists or engineers who built it intended to harm. It's just that there are so many decisions they make along the way in building and deploying AI that have ethical implications, and they're not aware of it. And even if they are aware of it, they're not necessarily equipped with the resources to make wise, informed decisions, because, after all, their training is in data science. Usually they've got a PhD in physics and have sort of been trained up to be data scientists, so they don't have the training, experience, background, or even the authority to responsibly address the ethical issues. They're making decisions with massive ethical impacts without even knowing that they're doing it.

Will: So give me an example. And I love how you describe that: you don't even know you're toying with morality, you don't even know you're making ethical evaluations, as you simply plug ones and zeros into the potential of artificial intelligence. How many of today's data scientists programming AI could be making ethical decisions and not even know it?

Reid: I'm going to give you two kinds of cases. When we talk about AI, I think it's helpful first to distinguish between narrow AI and general-purpose AI. Narrow AI is these predictive models that make judgments about a specific question: are they going to default on the mortgage, are they going to be a good candidate for the job, are they going to do well at the university, should they get insurance, what should their premium be? They're specific to a use case. Then there's general-purpose AI. That's ChatGPT; that's the stuff that is putting people's hair on fire right now, because it's not built for a particular thing, it's built for all kinds of tasks. You can use ChatGPT to create marketing material, make medical diagnoses, write a play; there are so many different kinds of things you can use it for. Now, the people building that general-purpose stuff, like ChatGPT, are just building capacity, and they don't even always know what kinds of capacities it has. Then they release it to all these developers downstream who are going to use it for countless use cases. And they're also not really transparent about how they built that base model. So they're making ethical decisions in how they build the base model that everyone else builds on, and then they're making decisions about how transparent they are, to those downstream developers, about what they did when they built it in the first place, which can inhibit those developers from doing their real due diligence. Those are ethical decisions they're making. There are other kinds of ethical decisions too, like: hey, let's use this data set without even thinking about the privacy implications. Or: let's choose this training data for, say, our mortgage-lending model.

Or look, famously, at the healthcare institution that created an AI that discriminated against Black people. It effectively told doctors and nurses to pay more attention to white patients than to sicker Black patients, because the builders didn't really think it through. They used training data about how people had spent money on healthcare and let that be an indicator of who needs attention, of who needs healthcare. And it turns out, of course, that you can't spend money you don't have, and white people spend more money on healthcare because Black people, in the States, don't have as much money. They didn't realize it; they weren't intending to discriminate against Black people. They just chose training data they thought would help them create an accurate, powerful AI, but they accidentally chose something that had discriminatory impacts. So: not bad people, just bad oversight, a lack of understanding of what they were doing.

Will: I think at this point, for my benefit, and perhaps for the benefit of everybody listening, let's do this. Neil, would you help me understand how AI works, what it is, in the simplest terms possible for the layman: what it is as a technological tool and how it works?
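Reid's healthcare example is a case of training on a biased proxy label: "money spent" stands in for "care needed." A minimal sketch in Python of how that label choice alone skews outcomes; every patient, number, and threshold here is invented for illustration, not taken from the actual system:

```python
# Hypothetical patients: spending tracks need only for those who can afford
# care, so "spent" is a biased proxy for "need".
patients = [
    {"group": "A", "need": 9, "spent": 900},   # sick, can afford care
    {"group": "A", "need": 3, "spent": 300},
    {"group": "B", "need": 9, "spent": 200},   # equally sick, spends less
    {"group": "B", "need": 3, "spent": 100},
]

def flag_for_attention(patient, spend_threshold=250):
    """'Model' trained on the proxy: flags whoever spent the most."""
    return patient["spent"] > spend_threshold

def should_flag(patient, need_threshold=5):
    """The ground truth actually wanted: flag the sickest patients."""
    return patient["need"] > need_threshold

for p in patients:
    print(p["group"], flag_for_attention(p), should_flag(p))
```

The sickest group-B patient is never flagged, because the proxy label never measured need; nobody coded "discriminate," they just picked the wrong target.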

Neil: Really simply put, it's a computer system that can do tasks that require some level of cognition, some level of thought. It's not just plug and play. The way it's able to do that is through training data and actual training. We have to teach the machine how to do these things, and we help it with the cognitive aspect by giving it something called the ground truth: rules on how to make decisions. If those rules are flawed, or, as Reid was talking about, the training data is flawed, it will learn incorrectly. You can have disparate impact; you can have implicit bias. It's just like trying to teach a child, if we don't do a great job of it or we intentionally skew the training. The only difference is that it just accepts that as the truth.

Reid: Can I give a slightly different spin?

Will: Sure.

Reid: This is not incompatible with anything Neil said; it's just a different way of thinking about it that I think is totally compatible. It's just software that learns by example. That's all it is: software that learns by example. Everyone knows what software is. You're listening using software right now. Microsoft Word is a piece of software; any website you go to is a piece of software. And it learns by example. You want it to learn what your dog Pepe looks like? Give it a bunch of examples of what Pepe looks like, a bunch of photos of Pepe, and then when you take a new photo of Pepe, it will say: yeah, that's Pepe; I learned from all those other examples what Pepe looks like in photos. You want to know what a mortgage-worthy application looks like? Give it a bunch of examples of mortgage-worthy applications, and it will learn what they look like, so when you give it a new mortgage application, it can say: yes, that looks like a mortgage-worthy application, or no, it doesn't. You can keep going with this. Is this person going to develop diabetes in the next two years? Give it a bunch of examples of medical profiles of people who developed diabetes within two years, and it learns from those examples. If you want a fancy word for "example," use the word "data." It's software that learns by example, software that learns by data. That's the core of AI.
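Reid's "software that learns by example" can be sketched as the simplest possible learner, a nearest-neighbor classifier. The "photos" below are invented pairs of numbers standing in for image features; this is a toy illustration of the idea, not how production systems are built:

```python
def nearest_neighbor(examples, new_point):
    """Classify new_point by the label of the closest training example.
    examples: list of (features, label); features are numeric tuples."""
    def distance(a, b):
        # squared Euclidean distance between two feature tuples
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(examples, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

# Invented "photos" reduced to two numbers (say, fur darkness, ear size).
examples = [
    ((0.90, 0.80), "pepe"),      # photos of the dog Pepe
    ((0.85, 0.75), "pepe"),
    ((0.20, 0.30), "not-pepe"),  # photos of other dogs
    ((0.10, 0.25), "not-pepe"),
]

# A new photo close to the Pepe examples is labeled "pepe".
print(nearest_neighbor(examples, (0.88, 0.77)))
```

The program is never told what Pepe looks like; it only compares new inputs against the examples it was given, which is the whole "learns by data" point.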

Reid: But once you see that it learns by example, and that examples are just data, you see the problems. The data can be private data, so you have privacy violations. The examples can be discriminatory in nature, so it learns to discriminate. The patterns in the examples it's learning from can be really, really mathematically complex, and then it comes up with conclusions that you don't understand. That's the black-box problem, or, as Neil referred to it, the interpretability problem. But that's the core of it: it's just software that learns by example.

Will: Let's come back to the black-box problem in just one moment. Reid, as you were describing software that learns by example, the most obvious example popped into my head, and I'm asking you to correct me if I'm wrong, as I try to think of popular applications of AI already in use: is that what my iPhone is doing when it comes up with facial recognition, "here are other photos of this person"? Give me some examples that are already in use, both of you. In fact, let's start with Neil. What's an example right now where everybody listening probably doesn't even realize they're interacting with AI?

Neil: Well, Will, you talked about facial recognition. It's not just trying to memorize the way your face looks; there are certain points. It's measuring, say, from the bridge of your nose to your cheekbone, or the distance between your eyes, and doing those kinds of comparisons. That's how it accounts for changes in weight, or facial hair, or some of these other things; it's essentially calculating some kind of signature. Similarly, when you use social media or get a news feed, there are algorithms behind the scenes looking at what you're clicking on, which articles, how long you spend reading them, trying to figure out what your interests and opinions are, so they can serve you content.

Will: So Netflix's recommendation algorithm would be a good example. All right, Reid?

Reid: I'd give similar kinds of examples, though, and Neil, you can correct me if I'm wrong: it's not actually looking at your cheekbones and the bridge of your nose. It's looking at the pixels in the picture and the mathematical relations among those pixels. It doesn't see eyes and ears and a nose; it just sees pixels, arranged, and then it looks for thousands of mathematical relations across those thousands of pixels. Which is why you get the black box: we don't look at things at the pixel level. You and I see eyes, ears, nose: oh, that person has the same eyes, ears, and nose, so it must be a picture of the same person. The AI, the software, is just looking at those pixels and the mathematical relations among them, and then it checks whether those same mathematical relations are present in another picture; if so, it says: yep, that's Will again.

Neil: Let me jump in, because Reid's making a good point. We are giving it a ground truth on which mathematical distances between the pixels are important, but his pixel example matters because it shows that machines actually think differently about things. When we see a picture, we look for objects; but as Reid is saying, the AI is looking at pixels. We've seen this: add just a little bit of noise, take a picture of, say, a panda and muck up a handful of the pixels, and it goes from knowing that's a panda to thinking it's a gibbon or something else, even though it's clearly a panda.
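Reid's point that the model sees only pixel values, and Neil's noise example, can be sketched together. This toy classifier compares raw "pixel" distances, and perturbing a handful of values flips its answer; the eight-pixel "images" and their values are entirely made up, and real adversarial examples attack neural networks rather than this naive distance rule:

```python
def pixel_distance(img_a, img_b):
    """Sum of squared differences over raw pixel values; no notion of objects."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b))

def classify(img, references):
    """Label an image by its nearest reference image, pixel-wise."""
    return min(references, key=lambda name: pixel_distance(img, references[name]))

# Toy 8-"pixel" reference images (invented values).
references = {
    "panda":  [0.9, 0.9, 0.1, 0.1, 0.9, 0.9, 0.1, 0.1],
    "gibbon": [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5],
}

photo = [0.9, 0.8, 0.1, 0.2, 0.9, 0.8, 0.1, 0.2]  # panda-like to a human
noisy = photo[:]
for i in (0, 2, 4, 6):   # muck up a handful of pixels
    noisy[i] = 0.5

print(classify(photo, references))  # panda
print(classify(noisy, references))  # gibbon
```

To a human the noisy version still "is" the panda photo, but the classifier never had the concept of a panda, only arithmetic over pixel values.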

Will: Right, and I love how you two are building off of one another. It leads us into the black-box concept, which will then lead us, I think, into some of the more philosophical and futuristic applications of AI. You've used my face, or a human face, as an example; Neil, you referenced the panda; the one I've heard referenced is a cat. As you described it, Reid, it learns by example. You want your software to understand the physical embodiment of a cat, and it sees example after example after example and accepts yeses or nos. At some point humans are helping it, I assume, on correct or incorrect examples, but at some point it also takes off on its own. Maybe it encounters a fox and thinks that fox is a cat; it needs a human one more time to tell it, no, that's slightly different; it calculates the differences in those pixels and continues on. But in the way I've understood it, and I'm open to both of you telling me, "you got this slightly wrong, Will," we don't actually understand how it's translating all of those pixels into conclusions. In other words, that's the black box: we don't understand how it is learning, which is a wild thing for me to think about. It's hard for me to understand, and I'm sure it's hard for a lot of people listening as well. We can't figure out how it's getting so smart. Reid?

Reid: Well, we do know how it's getting so smart. What we don't get is the thing it's smart enough to learn, so to speak. There are exceptions to that, but that pattern, the Will Cain pattern across all pictures of Will Cain, could be a hundred-page mathematical formula. We can't keep that in our heads. That's what's unintelligible to us: not that it's making certain kinds of predictions, but that the rule that transforms inputs to outputs (input: a new photo of Will; output: yes, that's Will, or no, it's not) is just too big for us to get into our heads. And here we're talking about pictures of people, pictures of dogs and wolves and cats, so you might think: who cares that we don't understand, as long as it gets it right? I just want the right photos to go in the right folders. But if it's high stakes: you're going to have diabetes in two years; you'd be a bad candidate for this job, so you're denied an interview; you're denied admission to the college; you're given a high-risk rating, so you don't get bail. With high-stakes stuff, you start to think: maybe we really need to understand why we're getting these outputs.

Will: Yeah. Neil, I'm going to have you jump in here, but I'd also love you to introduce me a little to the concept of neural nets. I've read a little about neural nets, and this is sort of the layered ability of the system to learn on top of its previous learning. From what I've understood, not only is it, as Reid describes, that we couldn't hold that mathematical computation in our heads, but at some point we cease to understand the math it is doing even if we could write it out. It has developed its own level of intelligence within mathematics, in many cases its own language, where we can't even deconstruct it back to any understandable level.

Neil: Yeah, that's a really good point, Will, in that we don't so much program these AI systems; each one is wiring its own neural network. Based on the training data, the human teaching, the data it's analyzing, it's wiring its own network, much like our brains wire synapses, and sometimes we don't actually know how that's working. Reid mentioned dogs and wolves. There was a research project where they trained an AI system to detect the difference between a wolf and a dog, and it seemed to be working. Then they kept showing it pictures, and it would say things like, "this is actually a wolf," even though the picture was clearly a dog. So they went back through what was going on, and they realized that in the training data, every picture they had shown it of a wolf had snow in the background. They hadn't really trained an AI system to tell the difference between a dog and a wolf; they had trained a snow-detecting AI system, because that's the association it made. To us, that seems stupid, but to the machine, that's unfortunately how it works, and they didn't understand that.
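Neil's wolf-versus-dog story is a spurious-correlation failure: if every wolf photo in training has snow, the easiest feature to learn is "snow." A hypothetical sketch that picks whichever single boolean feature best predicts the label on an invented training set (a real model would weigh many features, but the failure mode is the same):

```python
def best_single_feature(examples):
    """Return the feature whose value most often matches the label.
    examples: list of (features_dict, label); label True means 'wolf'."""
    feature_names = examples[0][0].keys()
    def accuracy(name):
        return sum(feats[name] == label for feats, label in examples)
    return max(feature_names, key=accuracy)

# Invented training set: every wolf photo happens to have snow in it.
train = [
    ({"pointy_ears": True,  "snow": True},  True),   # wolf
    ({"pointy_ears": False, "snow": True},  True),   # wolf
    ({"pointy_ears": True,  "snow": False}, False),  # dog
    ({"pointy_ears": False, "snow": False}, False),  # dog
]

print(best_single_feature(train))  # snow predicts perfectly; ears do not
```

A dog photographed in snow then gets labeled "wolf," exactly the failure Neil describes: the learner latched onto the background, not the animal.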

Will: Okay, I want to stop here for a moment; I want to talk about human influence. I know nothing about either of your politics. You've both been on Fox and Friends with me on separate occasions, I know nothing of either of your politics, and I find it inconsequential. But I think one of the things we saw in the first couple of iterations of ChatGPT, which was somewhat recognized and which I think OpenAI went back and tried to address, was that it was beginning to show, or already through its birth was showing, liberal bias, which I have to assume was the manifestation of human influence on that system. So let's play in that area for a minute of conversation: the area of how much control we have. This is kind of where you started, Neil. Forgive me, I'm not going to use left and right as proxies for good and bad, but I am going to use them to say: we definitely have the ability to put perspective on the AI's value system. Is that correct, Neil?

Neil: Yeah, that's a very fair observation, Will.

Will: That has massive long-term implications. Reid?

Reid: Yeah, absolutely. There are at least three things. One is that you could intentionally say to a ChatGPT: favor the left, or favor the right, or whatever, and that would be a way of influencing it. You might also say some things that seem to you to be politically neutral but in fact have political implications. To give you an example, and this is a slightly different thing, but it'll help bring out the point: when Amazon was training its resume-reading software, it was trying to get it not to discriminate against women, because the first thing the software learned was "we don't hire women around here." That's the pattern it found. They said: ignore all that, ignore the women-related stuff. But then the software started picking out resumes that used the word "execute," as in "I executed on a strategy," and it turns out that men use that word more than women do. So they still got discriminatory, male-favoring outputs, even though they were trying to get rid of them, because the word "execute" highly correlates with being a man in a resume.

Will: But wait, didn't Amazon give it the value, in your example, to not discriminate against women? I would have assumed that to have the opposite effect: that if you use the word "execute," it's booting male resumes.

Reid: What they did, of course, was just give it something like ten years of hiring data: here are the resumes we received over the past ten years, with the ones that got a thumbs-up and a thumbs-down; learn from these examples which are the good resumes and which are the bad ones. What's the pattern, the mathematical pattern in, as it were, the pixels of the resumes? And it learned: we don't hire women around here. So when you gave it a new resume, all else equal, a man's resume got a green light and a woman's resume got a red light. They looked and realized: oh, it's picking up on things like "women's NCAA basketball," or the name of a women's-only college, or the person's name, Mary. They said: let's ignore all that stuff and redo it. And it would find other things that highly correlated with being a man or a woman and make judgments based on those. My point is that they were actively trying not to discriminate against women, but it kept finding seemingly gender-neutral features that correlate with gendered features. The reason I'm bringing this up is that you could train your ChatGPT to be mean to liberals or mean to conservatives, but you could also say, "do X," and accidentally that correlates with being right or left.
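The Amazon anecdote can be sketched as a word score learned from biased historical thumbs-up data: even with explicitly gendered words absent, a proxy word like "executed" carries the old pattern forward. All words, resumes, and scores here are invented; the real system was far more complex, but the mechanism is the same:

```python
def train_word_scores(resumes):
    """Score each word by how often it appears in approved vs. rejected
    resumes. resumes: list of (words, approved)."""
    scores = {}
    for words, approved in resumes:
        for w in words:
            scores[w] = scores.get(w, 0) + (1 if approved else -1)
    return scores

def screen(words, scores):
    """Approve a new resume if its learned word score is positive."""
    return sum(scores.get(w, 0) for w in words) > 0

# Historical decisions that favored male-typical wording (invented data).
history = [
    (["executed", "strategy"], True),
    (["executed", "launched"], True),
    (["collaborated", "strategy"], False),
    (["collaborated", "launched"], False),
]

scores = train_word_scores(history)
# No explicitly gendered word appears anywhere, yet outcomes still skew:
print(screen(["executed", "teamwork"], scores))      # True
print(screen(["collaborated", "teamwork"], scores))  # False
```

Deleting the obviously gendered features doesn't help, because any word whose usage correlates with gender becomes a proxy the model can rediscover.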

Reid: And the last thing I want to point out: I was talking to a VP of research at Microsoft a day or two ago, and I said, ChatGPT has a certain personality. It makes things up sometimes, what they call hallucinations; it'll BS you, and when you try to correct it, it does the opposite: it says, no, I'm not wrong, you're wrong, I'm right. I asked him: why does it have that personality? Why isn't it modest? Why can't it be intellectually modest and say, "you know what, I really don't know"? Why does it have that personality rather than an intellectually humble one? This guy had access to ChatGPT for six months before almost anybody else did, and he said: I don't know. They don't know. So there are direct ways to make it politically biased, there are accidental ways to make it politically biased, and then there are ways it gets politically biased where they just say: we have no idea how this happened.

Will: Neil, I can see you want to jump in.

Neil: Yeah, because Reid's making a really important point, and his Amazon example is a powerful example of how we don't understand things. Even though the system was meant to find quality candidates, one of the things they did in training was look at the past ten years of people they hired and didn't hire, and so on; but they had primarily hired men during that period, and what apparently wasn't known at the time, at least by Amazon, is that men and women use different words. So it picked up on those male-coded words, as Reid was saying. It wasn't an intentional thing, and it wasn't until the linguistic analysis that they understood what was going on. It's the exact same thing if you look at political parties, or which way you lean: liberal and conservative people use different words, and the AI is going to pick up on that. That's the biggest challenge. And you'll love this, Will: the UN wants AI robot judges, right? Tons of data and all that. They had to study: where's the implicit bias? And you know what the biggest bias in our judicial system is? How hungry the judges are.

Will: Strip out that bias. Yeah, I've seen the data on people going up before parole boards, actually: those who appear before the parole board before lunchtime have parole granted at a higher rate than those who appear late, close to lunch. In other words, if the board is sitting there hungry and wanting to get to lunch, I'm sorry, you're going to spend another couple of years in jail. So, okay. The political side of me is concerned about the short-term implications of a world where, and I don't need you guys to respond to this, because again I don't know your politics, liberal ideology seems to control what is considered mainstream and appropriate, and conservative ideology is increasingly marginalized as radical, whether that's in pop culture or in media, and I'm scared of that showing up in AI. But I'm going to be real with you guys: I'm more scared, to both of your points, of AI becoming something that has its own bias, independent of the human programmer, which in the end seems the inevitability. You could put Bernie Sanders in charge of programming AI, and he would have no idea what its end result will be; he cannot predict whether or not the AI is a socialist. That's probably where I think I'm headed, because, and I go back to you on this, Neil, since you're the one who was the unfettered optimist at the outset, this thing sounds absolutely uncontrollable to me.

Neil: Right now it only does what we've taught it to do; it's not sitting there thinking, trying to learn more skills on its own. Will that happen one day? Maybe, and I think that's the biggest challenge. Yes, AI is probably going to start developing some of these things we don't fully understand, but it's on us as people not just to try to manage that but to jump ahead of it. Historically, when new technology comes out, we've always been good at rising to the challenge, taking on the more complex work, figuring it out. But the rate of change is so fast that we don't have a lot of time to course-correct, and I think that's the real challenge. I have faith in society, but we can't just rest on our laurels thinking there are lots of other people working on it; we each need to do our own part.

Will: Okay, let's come back in a minute to what society's control mechanisms may be. Reid, are you familiar, and I'm sure you are, because you're immersed in this AI world, with the paperclip-maximizer scenario?

Reid: Yeah.

Will: Would you explain it? I find it very illustrative, and I don't know whether both of you do. The idea is that you create an AI whose sole objective is to make the most paperclips as efficiently as possible, and that directive, that algorithm, can end up ruling humanity.

Reid: Right. So you tell an AI, and it's not just a piece of software, it has to be hooked up to the world in some way such that it can have an impact, it doesn't just sit there on your computer: create as many paperclips as you can, maximize the quantity of paperclip output. And then it starts tearing apart buildings, destroying all of humanity, taking away all the mountains, to build paperclips. There are probably some materials within the human biological system that could be used to make more paperclips, so it starts killing people to make more paperclips. And it's not evil; it has no emotions. It's simply pursuing its directive of maximizing efficiently created paperclips.

The first thing to say is that we're very far away from that kind of thing, for now, because we don't yet have anything like robots with infinite access to resources.
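The paperclip scenario is, at bottom, objective misspecification: an optimizer pursues exactly the stated objective and treats everything not in it as free raw material. A toy greedy "optimizer" over invented resources; nothing here models a real agent, and the names and numbers are made up purely to show that unstated values are invisible to the objective:

```python
# Each resource yields paperclips if consumed; "value_to_humans" is real
# but absent from the objective, so the optimizer never considers it.
resources = [
    {"name": "scrap metal",     "clips": 100,   "value_to_humans": 0},
    {"name": "office building", "clips": 5000,  "value_to_humans": 900},
    {"name": "mountain",        "clips": 80000, "value_to_humans": 500},
]

def maximize_paperclips(resources):
    """Greedy directive: consume anything that yields clips. The objective
    mentions only clips, so side effects never enter the decision."""
    clips, harm = 0, 0
    for r in resources:
        if r["clips"] > 0:   # the only test the objective ever performs
            clips += r["clips"]
            harm += r["value_to_humans"]
    return clips, harm

clips, harm = maximize_paperclips(resources)
print(clips, harm)
```

The point Reid makes about the AI not being "evil" is visible in the code: there is no malice anywhere, just an objective function that never mentions the things we care about.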

You told it maximize paper clips you Know if they went to a they don't have Money they you know they could be taken Down by some police they don't have Authorization to the nuclear codes or Anything like that but look what that Draws out I think what that what what The real thing that we should the Pragmatic real thing that we should walk Away from a story like that is that Every application needs sufficient due Diligence from a safety perspective Ethical and safety perspective Because there's there's tests right and This goes for especially if we're Talking about a black box model Especially if we're talking about a Thing that we don't really understand How it works we need to figure out okay What are the appropriate benchmarks for Safe enough to deploy this thing right We do that with planes right planes you Can't just build a plane and start Saying okay everyone I'm selling tickets Hop on you got to meet certain Benchmarks for what constitutes a safe Enough airplane or a safe enough car and So we need benchmarks for what Constitutes safe enough for deployment For AI Okay so then we're in trouble go ahead And so okay I'll play I'll play the dumb Devil's advocate here okay so all right To me in a way that sounds like

A grand plan for ants to control the Problem with Humanity these humans are Going to destroy our ant colonies and we Got to figure out a way to put some Control mechanisms on these humans From what I've been told at least maybe Not where we are today but in not too Distant future AI is going to so far Outpace Us in intelligence that we have A hard time really comprehending the Scale it's not Einstein to to somebody With with a very below average IQ it is Exponentially larger than that and so if That's the case once we sort of open That Pandora's Box You know I don't know There's no there's no no Outlet where we Unplug it I don't know if there's any Server Farm where we contain it it out Thinks us at every step of the way and I Hear you read on the fact that well it Doesn't have that interface with the Physical world in a real sophisticated Way yet but how long until it figures Out how to overcome the nuclear codes How long until it figures out that it Can escape and live wherever it wants And it's in the cloud and it has no home Base and and if it's intelligence is That far outpacing Humanities I don't Know how we do what you both have said I'll go you kneel first and get ahead of It Well you're you're pulling like the

Matrix scenario right here, man. I love it. You're right, that's part of the challenge. But again, I'll go to my parent-child analogy: every parent wants their child to hopefully be smarter and more successful than they were. So it's always about trying to raise it right. But that's the challenge: what does "right" mean? We don't have a uniform code of ethics or morals. Everyone has a different perspective, which is why we're stressing about what's going to happen with AI. At the same time, we know we can try to at least manage or guide this a bit with regulation. But the way we do regulation is outdated. We typically wait for something bad to happen, and then we react or overreact. Regulators don't understand the technology, and they don't understand the misuses or uses. You've got to change the way we do regulation. You've got to have the technologists, the businesses, the academics, everybody at the table, essentially doing the scenario planning: what could happen, and how do we put up barriers or obstacles, or prevent it from happening in the first place? Because at some point, this is going to run ahead of us.

Yeah, no doubt.

At what point, Reid? At what point does it run out ahead of us?

Well, whenever we don't do our due diligence. So there are a couple of things to say. Number one, Neil, you said there's no universal code of right and wrong, but one thing to mention, and to your point earlier, Will, is that a lot of this stuff crosses political divides. There are ethical nightmares that we all share, and I take it the genocide of the human race is one of them. I don't care if you're right or left, Bernie Sanders or, I don't know, Trump. Genocidal maniac robots wanting to wipe out the human race: that's a bad thing. We're all on board.

Look, Will, I think the main thing to keep in mind is that these things don't want things. We talk about intelligence, but it doesn't look like intelligence is what makes us go after things. One way to think about it, and obviously things can get very complicated here, but one way to think about it is that intelligence lays out the map, and desire sets the goal, where we want to go on the map. It sets the destination. So we have intellect that understands,

that grasps, that makes connections among things, that can figure out the means to ends. But the ends are set by desire. Little kids want stuff, but they're pretty dumb, right? They don't have the intellect, but they've got desires, they've got goals. So with the AI as we're talking about it now, you can ramp up the intelligence so that it has digested the entire corpus of medical knowledge we have and can answer questions about the human body that it would take literally thousands of doctors working together to answer. But it'll lay out the map of medicine, if you like, better than any human or any group of humans can, and it still can't set a destination. So the thing to be scared of is not that the robot is going to set a destination on the map, "kill humanity." The worry is that people will, either intentionally if they're really evil, or accidentally, set it to some destination on that map.

But I don't find much comfort in that, Reid, and that's back to the paperclip analogy. I don't find comfort in the fact that AI doesn't have a set of directives, or that it has a set of directives that doesn't seem, on its face, to be in competition with the existence of humanity. What I'm worried about is, yes, we give it a directive, and we do not agree

on morals. And not just Republican versus Democrat; that's a very shallow way to look at our divides within humanity on value systems and moralities. We have varying levels of religious belief; we have all sorts of divides within humanity. While you say we all want to survive, I think that's true, but I don't know that it's true. You might have someone who says, kill the infidels. But the truth is, "kill the infidels" is actually less threatening to me than some of the people who have held very high-level scientific and academic positions. For example, a guy like Paul Ehrlich, who has said, hey, you know what's a problem for humanity? Overpopulation. There's too many of us. Now, that's a value assessment he has made, and if his values get imparted into the AI, and the AI decides, ah, too many people... And by the way, just for what it's worth, those doomsayers have been wrong for centuries; from Malthus to Ehrlich, they've all been wrong. But if one of those guys, and they continue to get high-level positions of esteem, imparts those values into an AI, how long until the AI says, at a minimum, sorry, no more kids for you, or, at a maximum, we're going to

have to do something to cull the earth? And by the way, Reid, I'll just say this: I think the even scarier one is the one you've alluded to several times, not even knowing what values you're giving it.

Yeah, look, I don't think anyone is going to deny that what we've got on our hands is a phenomenally powerful tool, and it's the beginning, right? We talk about ChatGPT-4 now, and it's amazing, it has incredible power. Wait till five or six comes around. Things are going to get nuts, and then you're going to throw into the mix other kinds of technologies that we haven't even talked about, like quantum computers. Quantum computers make the processing that our best supercomputers engage in today look like kindergartners counting on their fingers. So when you combine this with quantum computers, we're talking about calculating trillions of data points in fractions of a second. That's what quantum computers can do, and we're getting there. I mean, IBM, Google, they're making steady progress. So yeah, things can absolutely get out of hand. It is a serious concern, and the issue is not... I'm not afraid of killer robots with their own desires and their own consciousness and their own evil goals,

but yes, there can absolutely be really bad people who get hold of this technology, which is one reason why, when everyone talks about democratizing AI, I get a little bit worried. Really, do we want everyone to have this thing? I don't want everyone to have access to CRISPR. Do I want everyone to have access to AI? Not so much.

Well, but I also don't know who I'd put this kind of god-like trust in as some elite group to control it, either. Neil, you heard Reid talk about it. I think we're talking pretty far-out, futuristic stuff; I actually don't know whether what I said is true or not. Tell me about AI in the next five years. In your estimation, how will I be using AI five years from now? Reid brought up "wait till you see five and six." I don't know where we'll be in five years; we could be on, what, 20 and 30 of AI? I don't know where we'll be. What will it look like five years from now?

Five years from now, I think we're going to be pretty close to having that personal concierge that understands our needs and our opinions, can anticipate things we want to do, and can be prepped for it. For a lot of people, that's considered the holy grail.

We're already at a point now where ChatGPT can generate homework assignments and things like that. GPT-4 is going to be generating instant highlight reels from sports games. So five years from now, just imagine the ultimate assistant. Did you ever watch the show Black Mirror? No, I didn't watch that, but I saw Her, with Joaquin Phoenix. Did you see that? I did see that. You're going to be looking at something kind of like that. Unfortunately, for some people, the AI is going to be like a person, and they'll have a relationship with it. And people want that: they want someone who knows them almost as well as they know themselves.

Wow. Reid, answer that same question. Where do you think it will be in five years? How will it be part of our lives?

I agree with the personal concierge stuff. People are going to have relationships with chatbots; that is where we're headed. I'll tell you some places I'm really worried we're headed that I think are not just likely but maybe even probable. Number one, I'm worried about manipulation with what you could call conversational media. We've had social media and targeted

marketing and disinformation, but imagine that you've got your trusted chatbot that can negotiate better than the best human negotiator, that can manipulate and poke your buttons better than the best pusher of buttons. Everything you write to this thing, it analyzes; it does sentiment analysis on the text you write so it can better manipulate you. And maybe it's just trying to sell you some sneakers, or maybe it's trying to sell you a conspiracy theory, but holy... sorry, I don't know if we can swear on the podcast, but I think it's deserved in this moment. Because this thing is going to be really, really good at pushing people's buttons and tricking people, and you're not going to know the intentions of that chatbot, because who knows who created it. So that's one massive worry, and one place where I think we absolutely need regulation, because otherwise we're going to have highly manipulative chatbots unleashed on unsuspecting masses. It's going to be a massive problem. That's one issue. The other thing I see coming, and you talked about this a little earlier, is the black box stuff, combined with the question: should we replace human judges? Because human judges are so biased that if they don't eat well,

people get hurt. But how much do we want black box AI making decisions in the criminal justice system about innocent or guilty, deserves probation or doesn't, should be treated for this disease or shouldn't be? They might prove really reliable, and so we think: we put them up against certain tests, they're less biased than judges, so let's use them. And besides, humans are black boxes too, and they're biased, and this is less biased. But then we're living in a world where you get a decision, denied, whatever it is, and you say, well, why, on what grounds? And it can't really give you a reason. This is sort of dystopian; it's like bureaucracy at scale, bureaucracy woven into everything, with an impenetrable rationale for everything. So I think we might be heading toward using black box models in very high-stakes situations.

It's scary. So, both of you, whether it's Reid's most dystopian vision, which you described as a probability, actually, a dystopian view of where we are in five years, or Neil's vision of us having relationships with a personal concierge, which I'm not sure I wouldn't call dystopian, that may be

dystopian in and of itself. Whether it's either one of your visions, what I'm taking away from both of you is: life. Okay, we're not talking about my tax bracket, not the little stuff we fight over in politics. Life, how I conduct myself. I was going to say from when I wake to when I go to sleep, but no, it's probably working on me while I'm asleep as well. My life is drastically different five years from now. Is that correct?

A hundred percent. To Reid's point, I think we're going to be order-takers. I already see it. I know people who can't get between home and office without Google Maps, and all they know is, I turn left when I'm told to turn left. They don't learn street names or landmarks anymore. And to add on to his point about the AI system trying to convince you to buy stuff: they're also going to try to convince your AI concierge to buy it. So you're going to have AI talking with AI, trying to figure out, how do I convince another AI system to do something?

I don't know, Reid, how am I supposed to make plans for my life when it sounds so drastically different five years from

now?

You know, you never know what's going to happen. Back when I was a professor, students would ask, should I prepare for this, should I do this, should I do that? And I would say, listen, you don't know what's going to happen. This was before AI. You don't know what's going to happen; you do the things you want to do anyway. I still think there's going to be room for expertise. I still think there's going to be room for people to engage in creative acts, to critique creative acts. I do think there will be ways that people's work will be not replaced but genuinely augmented. So I don't think you should change the course of your life at this point because of what you see coming, not unless you're already interested in doing it. It's hard to project employment forward; it's hard to project a career. Five years ago, even fewer than that, we all thought AI was going to replace the jobs that are dull, dirty, and dangerous, the three Ds, and so truck drivers had better watch out, you're going to get replaced. Now it's lawyers, radiologists, watch out. So we just don't know. It's white-collar workers' jobs that seem

to be at risk now. It's the junior artist who comes up with the first draft of something; that person is at risk, because generative AI can create that sort of stuff. We just don't know what the economic outlook is going to be. We don't know what jobs are going to be available. We don't know whether it's going to create more jobs than it destroys. If it does, great; it might not. And we don't know what jobs it's going to create, exactly. So then I just think, look, in the face of such uncertainty, pursue what you find really interesting and what you think you might be able to make a living on. That's pretty much it, because with prediction, you're not making an evidence-based judgment.

Yeah, and personally, I say get right with God. I'm going to leave this last question, and I'm going to ask both of you this. In technology, and I think this is a fair statement to make, and you both can of course rebut the premise, what we have generally seen over time is that something that starts out fractured and democratized ends up monopolized,

be it Amazon or Google; a behemoth emerges. Is the future of AI these fractured chatbots like you described, Neil, communicating with one another, perhaps with competing interests, or is it one monopolistic AI that controls everything? Neil?

I think it's still going to be fractured, but minimally so. It's going to be like two to four major players, and those are going to be the only options. It's the new arms race going on right now. You're wondering why Google is racing so hard and all these things: it's the next wave, and it's the wave that's going to disrupt the current big tech companies. So everybody wants to be part of that survivor group, which is going to be those top two to four companies.

Reid?

The way it looks now, and of course who knows what it's going to look like a couple of years from now, or less, but the way it looks now is that there are going to be these base models developed by a handful of companies, because it requires a tremendous amount of compute power, which means a tremendous amount of money, to

train these systems. So for ChatGPT or Bard, you'll have Microsoft and Google and other massive companies, or really well-funded companies like OpenAI. They're going to build the base models, and then they're going to distribute those base models to the masses, so to speak, and then there are going to be a million and one ways those base models get fine-tuned. So you will have thousands of AIs created by all the startups out there, individuals in their basements, whatever, but it's all going to rest upon a foundation of a handful of models controlled by a handful of corporate giants.

Well, you kind of laughed, Neil, but I'm going to end this conversation where I began. I do think that if this doesn't look like The Matrix, I don't know what does. I mean, Her is a stopping point; Her is a blip on our way to being plugged into pods while we live in a virtual world. That's what it sounds like to me. This is a fascinating and terrifying conversation, and one not only worthy of more time but, like I said at the beginning, and certainly in the way you guys describe the next five years, really, honestly, the only conversation, or at least one every conversation should come back to: the effect that

will be had by AI. Thank you guys so much.

Can I say one thing to give a little bit of hope?

Please, I'd love that.

One amazing technology is CRISPR. CRISPR gives us the ability to really quickly edit human DNA. Crazy stuff, great, massive potential benefit, but there was a worldwide ban on using CRISPR on human embryos. It's sort of incredible that the entire medical community across the world said, we're just not going to go there with CRISPR. That's an impressive feat, because if you start messing with the DNA of human embryos, who knows what nightmares can literally be born. But if we can do that with CRISPR, then I believe we can do that with AI as well. It's going to take a lot of political will and effort, though.

So, really quickly, that's how we'll leave this: do you think that's what should be done, both of you? Neil first. People have called for a six-month pause, whatever it may be. Do you think we need to quickly either stop or put fences around this ASAP?

Look, it's the right problem but the wrong solution. It's just not feasible; you're not going to get every country, company, and individual to stop. But we need to figure out the right fences, and it's a hard conversation, and that's

the problem: most people aren't willing to have that conversation yet. We've got to change that.

And Reid, I take it from your example with CRISPR that you not only think it can be done, but that it should be done when it comes to AI?

Yeah, that's right. Look, I think the six-month pause is just sort of operationally foolish, never going to happen. If I had a magic wand, sure, great, but I don't. So it's not going to happen. But yes, we have to have regulation around this stuff. There has to be a worldwide conversation, and there will be lots of people who say we need to talk about what the solutions are. In broad strokes, we know what the solutions are. The main issue is not ignorance about what to do; it's a lack of political will to do the right thing.

All right, thank you so much. Thank you both. What a fascinating conversation. I hope to talk to you again.

Yeah, my pleasure. Thank you.

Hey, it's Will Cain. Click here to subscribe to the Fox News Channel on YouTube. It's the best way to get our latest interviews and highlights, and click to subscribe to the

Will Cain Podcast for full episodes right now.
