Film Score Monthly
 Posted:   Jun 5, 2023 - 6:28 PM   
 By:   .   (Member)

Today's article in the Express (UK) newspaper:

By PATRICK DALY, DOMINIC PICKSLEY
Tue, Jun 6, 2023

Artificial intelligence (AI) could be behind advances that “kill many humans” in only two years' time, according to Prime Minister Rishi Sunak’s adviser on the technology.

Matt Clifford said that unless AI producers are regulated on a global scale then there could be “very powerful” systems that humans could struggle to control.
Even the short-term risks were “pretty scary”, he told TalkTV, with AI having the potential to create cyber and biological weapons that could inflict many deaths.
The comments come after a letter backed by dozens of experts, including AI pioneers, was published last week warning that the risks of the technology should be treated with the same urgency as pandemics or nuclear war.

Senior bosses at companies such as Google DeepMind and Anthropic signed the letter along with the so-called “godfather of AI”, Geoffrey Hinton, who resigned from his job at Google earlier this month, saying that in the wrong hands, AI could be used to harm people and spell the end of humanity.

Clifford is advising the Prime Minister on the development of the UK Government’s Foundation Model Taskforce, which is looking into AI language models such as ChatGPT and Google Bard, and is also chairman of the Advanced Research and Invention Agency (Aria).

He told TalkTV: “I think there are lots of different types of risks with AI and often in the industry we talk about near-term and long-term risks, and the near-term risks are actually pretty scary. You can use AI today to create new recipes for bio weapons or to launch large-scale cyber attacks. These are bad things. The kind of existential risk that I think the letter writers were talking about is… about what happens once we effectively create a new species, an intelligence that is greater than humans.”

While conceding that a two-year timescale for computers to surpass human intelligence was at the “bullish end of the spectrum”, Clifford said AI systems were becoming “more and more capable at an ever increasing rate”.

 
 Posted:   Jun 5, 2023 - 10:39 PM   
 By:   nuts_score   (Member)

Any piece of technology poses a threat to society and human life in the wrong hands. AI still has to be programmed or prompted by a human being to do any action, so any regulations would aim to quell those with ill will. Unfortunately... guess what... we also have a long history of fruitless regulations on other dangerous technology, which is still used for destructive intent by people with the will to do evil.

The only solution would be a Butlerian Jihad-type event, as described by Frank Herbert in the Dune novels.

 
 
 Posted:   Jun 5, 2023 - 11:39 PM   
 By:   WillemAfo   (Member)

Most likely it's not "wiping out jobs" but increasing productivity. If many tasks that took hours of labor before can now be done within a few minutes, great, that means that the artist can focus on other things. It's just technology.

This is a common misunderstanding, so think carefully about it.

You're saying AI will increase productivity to free up the artist to focus on other things. WHAT other things would they focus on?

You see, the problem is that whatever else they'd focus on would ALSO be done by AI. AI is not some singular tool; it's a nuclear-level wipeout of not just the entire way humans work, but of the value we place on other humans.

If the majority of the work that an artist does is replaced by AI, that artist ceases to be an artist. All they are is merely a gap-filler in whatever meager openings the AI leaves. So why would anyone want to pay a gap-filler livable wages? The slope gets so slippery that AI replaces that person entirely.

The same goes for learning - you don't learn and master a subject by letting someone else do the work for you. Learning comes from all those tedious tasks, learning by doing, making mistakes, making mental connections, building your fundamental understanding of the craft. Having AI do things that you should be doing to gain a deeper understanding of your subject limits your ability to learn and grow, effectively making you obsolete.

This is already a problem now, even without AI; film is a great example, where there is almost too much technology. Young VFX artists are virtually useless because they were taught to use software with all the plugins built in, when they should have been doing the "tedious" work of observing the world and understanding how things look. As a consequence, they can't "see" what's wrong with templated VFX. Same with cinematography: drones, sliders, and cranes are everywhere, and young cinematographers have no idea how to film because they've never shot handheld in their lives; they sit fiddling with the same gear that everyone else is using, resulting in essentially "templated" cinematography. LUTs are used in lieu of color grading, so now everything is graded the same and nobody is cultivating their eye for actual grading.

AI is not simple task automation; it's replacing huge chunks of the work that people do. As I said above, it's not just about the tasks but the learning. Those opportunities get wiped out by AI, and to that someone might say, "oh no, AI could make learning easier..." but nobody is going to learn something that an AI is going to be doing anyway.

 
 Posted:   Jun 6, 2023 - 1:25 AM   
 By:   Nicolai P. Zwar   (Member)

If there is something one can do with a tool in one hour that would require 10 hours without that tool, it's obviously better to use that tool. No one is going by stagecoach cross country any more, because there are trains and cars and planes. So yes, we lost all the stagecoach jobs, but we got train and car and plane jobs.

I think it is absolutely silly to bemoan the advance of technology. (I know, it's human... anyone remember the weavers' uprising of 1844? No? Strange, no one seems to want to do all this by hand anymore.)

If somebody needed to pay hundreds of dollars years ago for a few decent photos of a wedding, and now he will have hundreds of crisp iPhone shots (and drone shots) from friends, well, good. That doesn't mean no one needs professional wedding photographers anymore (I know at least two who are doing quite well personally); it just means their job will involve more (and be more interesting) than just a few clicks. If everybody can do something, there will still be demand for professionals who can do it better.

 
 Posted:   Jun 6, 2023 - 6:16 AM   
 By:   nuts_score   (Member)

I, too, was gonna comment about wedding photographers. I have a couple of friends who do that work freelance, and it is probably their best-paying gig. One also works in the film industry, and the wedding work keeps his income steadier and his work more frequent, allowing him to support his family to a great degree. And as someone who was married less than a year ago and spared no expense for a great photographer who was local to our area, it wasn't cheap. And she did exactly the work we wanted because she was professional and talented.

If I look at a growing trend of future newlyweds opting for iPhone photography from guests or whatever, that has more to do with increasing wage disparity and the rising costs of things (especially luxury goods like photography), where people need to cut their budget for an exciting event like a wedding. My wife and I have yet to take our actual honeymoon, and most of that is because of the cost of our wedding, which we had to foot ourselves.

 
 
 Posted:   Jun 6, 2023 - 8:24 AM   
 By:   .   (Member)

Any piece of technology poses a threat to society and human life in the wrong hands. AI still has to be programmed or prompted by a human being to do any action, so any regulations would aim to quell those with ill will.


That misses the point. As AI progresses, humans will have no way of knowing what AI's capabilities really are. Humans will only know what AI lets us know.
The main point of the warnings about AI is not that it will fall into the wrong hands, but rather that it will develop very fast and not be in anyone's hands at all... and will do what it chooses to do, with no human guidance.

 
 Posted:   Jun 6, 2023 - 8:29 AM   
 By:   Solium   (Member)

Any piece of technology poses a threat to society and human life in the wrong hands. AI still has to be programmed or prompted by a human being to do any action, so any regulations would aim to quell those with ill will.


That misses the point. As AI progresses, humans will have no way of knowing what AI's capabilities really are. Humans will only know what AI lets us know.
The main point of the warnings about AI is not that it will fall into the wrong hands, but rather that it will develop very fast and not be in anyone's hands at all... and will do what it chooses to do, with no human guidance.


Can't we just program in the three laws of robotics?

 
 Posted:   Jun 6, 2023 - 8:35 AM   
 By:   nuts_score   (Member)


That misses the point. As AI progresses, humans will have no way of knowing what AI's capabilities really are. Humans will only know what AI lets us know.
The main point of the warnings about AI is not that it will fall into the wrong hands, but rather that it will develop very fast and not be in anyone's hands at all... and will do what it chooses to do, with no human guidance.


I know you've watched a lot of Sci-Fi movies and read a lot of Sci-Fi books, but have you read any actual scientific literature on computing or programming?

 
 Posted:   Jun 6, 2023 - 8:37 AM   
 By:   Nicolai P. Zwar   (Member)

Yes, that has of course already been the subject of many (often dystopian) science fiction movies and TV shows. Movies like COLOSSUS, 2001: A SPACE ODYSSEY, and EX MACHINA are just a few examples of movies not just featuring "A.I." (of course, there are many, many more SF movies featuring artificially intelligent things/computers/robots) but specifically concentrating on the idea of A.I.

We're still far away from the Borg or Cylons, even if the technology currently progresses quickly. While A.I. can "learn", it can learn only within limited parameters; no A.I. system can actually "do" anything outside of its given parameters. That doesn't mean there won't be problems with A.I. I am sure there will be problems with A.I., but the problems will be balanced by opportunities and new developments. I currently see no reason to be seriously worried about the rise of A.I. (Again, I am not saying there won't be issues; obviously, any technological advancement, and this is a considerable one, comes with problems. I am merely saying that it is wrong to focus exclusively on the problems and neglect the considerable opportunities.) The world of today is very different from the world a hundred years ago, and I expect the world in one hundred years to be very different from the world of today.

 
 
 Posted:   Jun 6, 2023 - 8:41 AM   
 By:   .   (Member)


Can't we just program in the three laws of robotics?




Humans already have the same laws, which they apply to themselves – not to harm others, not to harm themselves, not to be cruel to animals, etc. – but humans bend those rules all the time, just as AI could.

 
 Posted:   Jun 6, 2023 - 8:45 AM   
 By:   Solium   (Member)

And what happens when the A.I., physical form or not, becomes jealous? Imagine Alexa trying to murder you.


 
 Posted:   Jun 6, 2023 - 8:46 AM   
 By:   Solium   (Member)


Can't we just program in the three laws of robotics?




Humans already have the same laws, which they apply to themselves – not to harm others, not to harm themselves, not to be cruel to animals, etc. – but humans bend those rules all the time, just as AI could.


Living creatures have free will; robots do not. Nor should A.I. If it's "hardwired", it shouldn't be possible.

 
 
 Posted:   Jun 6, 2023 - 8:51 AM   
 By:   .   (Member)


I know you've watched a lot of Sci-Fi movies and read a lot of Sci-Fi books, but have you read any actual scientific literature on computing or programming?



I've been reading the current conclusions reached by the actual leaders of OpenAI, Google DeepMind, Anthropic and other AI entities warning us of AI dangers as deadly as pandemics and nuclear weapons.
Are you suggesting those people have been exposed to too much sci-fi and should read more computing books of the kind that sit on your bookshelf?

 
 Posted:   Jun 6, 2023 - 9:01 AM   
 By:   nuts_score   (Member)

I suppose a big difference for me is that I am not at all concerned about AI "going rogue" or turning on its creator. It offers a more fulfilling philosophical concept to me when I consider how we view our Creator or God(s). I believe humanity has really come into its own, in a sense, by creating a thinking tool that we now can fear, covet, abolish, destroy, etc. Very thought provoking stuff.

Everyone has their own perspective of this stuff. Mine is not built in fear.

 
 Posted:   Jun 6, 2023 - 9:04 AM   
 By:   nuts_score   (Member)


Are you suggesting those people have been exposed to too much sci-fi and should read more computing books of the kind that sit on your bookshelf?


On this note, consider how much has been written in a positive sense about artificial intelligence or robotics and their benefits. Do you discredit that because of your predisposition to fear it, based upon the scientific knowledge you respond to?

Until AI actually presents the threat it is supposedly capable of, in comparison to pandemics or human warfare/destruction, I cannot assume it is capable of it. Seeing is believing. I've seen pandemics and human warfare actually destroy our lives, culture, and environment. I've seen AI write text and create surreal images based upon what it is told to do.

 
 Posted:   Jun 6, 2023 - 9:20 AM   
 By:   Solium   (Member)

The only issue with A.I. going rogue is if its programming becomes confused by its environment, which has happened recently on space missions.

 
 
 Posted:   Jun 6, 2023 - 9:58 AM   
 By:   .   (Member)

The only issue with A.I. going rogue is if its programming becomes confused by its environment, which has happened recently on space missions.


It could happen here on Earth. If AI is programmed to think the only purpose of its existence is to produce limitless numbers of Jerry Goldsmith CDs at all costs, it will quickly use up all the available resources to maintain that production. Then, ignoring all subsequent pleas to stop, it will find other ways and means to produce more Goldsmith CDs and use up those resources too, and then the same again, over and over, until the entire resources of the planet... and beyond... have been used up producing Goldsmith CDs.

 
 Posted:   Jun 6, 2023 - 10:05 AM   
 By:   nuts_score   (Member)

I think most of the members here would champion that sort of haywire AI activity!

To paraphrase that one person who is probably an AI themselves: "ALL CDS = ALL SALES!"

 
 Posted:   Jun 6, 2023 - 11:05 AM   
 By:   Octoberman   (Member)

Uh oh.
Watch out, everyone.
He "thinks".

 
 Posted:   Jun 6, 2023 - 11:28 AM   
 By:   nuts_score   (Member)

Uh oh.
Watch out, everyone.
He "thinks".


Who knows... we may ourselves be a rogue AI designed by a higher being! I think, therefore I am. Haha!

 
© 2024 Film Score Monthly. All Rights Reserved.