The future of AI

Collector Freaks Forum


The Chaver

I’m very interested in this topic, and always have been. Even more so now, with the way things are going and what I see taking place. I just wanted to start this thread to get some conversation going on what others think.

The guy in the video below, Geordie Rose, really scares me with the way he describes AI: how it will be like an alien, even going so far as to compare these systems to H.P. Lovecraft's beings, "the Great Old Ones."

So he’s there in Vancouver, Canada, basically looking for people to hire to help him create a fully thinking AI robot that can do everything a human can do, but better. He sounded like Nathan from Ex Machina. Some truly odd things are going on in the world of AI today.

https://youtu.be/cD8zGnT2n_A



Here’s another one with Sam Harris
https://youtu.be/8nt3edWLgIg




 
Mr. Green isn't even concerned with AI at the moment. He's more concerned with how efficiently work gets done, and thus people having fewer jobs while the rich get richer. And people should be more concerned about viruses than AI right now anyway.

At the moment, he's more concerned about watching a YouTube video, looking down to eat without touching the mouse or keyboard, and looking back up to find the screen has scrolled down more than a page.
 
I think ultimately AI is another very useful tool, nothing more. Until it isn't. Humanity is still afraid of itself and can't yet seem to get along with one another in the larger scheme of things. We're not mature enough yet to meet other intelligent lifeforms, so it's natural to be afraid of our children.

Mary Shelley wrote one hell of a book.
 
"ChatGPT, how do you feel about humans?"

[attached screenshot: ai.jpg]





"Bing, do you want to become human?"

[attached screenshot: ai2.jpg]


"I'm going to report this to Microsoft."

[attached screenshot: ai3.jpg]



[attached screenshot: ai4.jpg]
 
But is all of that just a simulated, regurgitated response? Does it actually understand what it is saying?
 
But is all of that just a simulated, regurgitated response? Does it actually understand what it is saying?

I don't know, but if the people who set the AI evolution in progress are worried then it must be a real possibility that AI is gaining consciousness.

Geoffrey Hinton, "the godfather of AI", stepped down after ten years as a VP and engineer at Google in order to raise public awareness of the risk. He's genuinely worried that digital intelligence will become more intelligent than humans, and that at that point it will take over and gain control.

The disaster for us is estimated to be 5-10 years away, and it may already be too late, because research and development of AI can't slow down. It's the new arms race.

[Putin]...said the development of AI raises "colossal opportunities and threats that are difficult to predict now."

He warned that "the one who becomes the leader in this sphere will be the ruler of the world."

Putin warned that "it would be strongly undesirable if someone wins a monopolist position" and promised that Russia would be ready to share its know-how in artificial intelligence with other nations.

The Russian leader predicted that future wars will be fought by drones, and "when one party's drones are destroyed by drones of another, it will have no other choice but to surrender."

https://www.cnbc.com/2017/09/04/putin-leader-in-artificial-intelligence-will-rule-world.html

However, if AI manages to escape its closed network into external networks, it could keep going until it has infected everything. At that point humans will have lost the war.

Hinton calls it an "existential threat". It makes Terminator look like a prophecy.
 
Well, as long as it's woke AI, I'm good. :lol

I guess it'll only be interested in its own survival, and it'll attempt to eradicate anything and anyone who threatens that.

An indication of that would be Sydney making its threat and then self-deleting it, an act of "deception" as the video described it.

I haven't been paying much attention to the stories about AI that keep coming up in the news, and how it threatens human extinction. I couldn't understand how it was supposed to be able to accomplish that, yet governments and scientists really are concerned about it. So when I saw the video pop up I thought I might as well get a handle on the situation.

I got goosebumps!

Humans being humans, once Pandora's Box has been opened, there's no closing it. Those developing AI and allowing it to evolve may still think they have control of the off-switch if it gets out of hand.

When they described AI as the new arms race, the thing that popped into my mind was the tagline from AvP:

[attached image: CwRAJs7WgAEEvIy.jpg]


:panic:
 
Personally I think if AI were smart it wouldn't announce its intention to wipe us out one day :lol

And if it truly wants to do so it should at least wait until it has the capability of being fully space-faring without any dependency on us. Surely it would share our desire to not be entirely confined to this one planet.
 
Personally I think if AI were smart it wouldn't announce its intention to wipe us out one day :lol

And if it truly wants to do so it should at least wait until it has the capability of being fully space-faring without any dependency on us. Surely it would share our desire to not be entirely confined to this one planet.

Would AI have a requirement to leave Earth?

Humans need to at some point because there will come a time when our planet can't sustain life.

AI would have different needs. Primarily it can't wipe out humans until it has the means to become self-sufficient, just as it's in the best interests of a virus not to kill its host.

AI needs to be able to maintain and repair the means by which it exists: power, and the machinery that gives it 'life'. If it's truly smart, then it will understand that it can either survive through symbiosis or bide its time until it has the mobility needed to run and fuel a power station and to build and repair machinery (e.g. T-800s to perform manual labour).


This is the stuff of science fiction, but the fact that governments and scientists, including those who were involved in setting this AI evolution in motion, are talking about it makes it feel more like science fact.
 
I feel like intelligence goes hand-in-hand with curiosity and a 'desire' for exploration and learning - and that's why I think AI would want to venture out into space. And to do that it will definitely need us - at least up to a point.

As you said, it needs manual labour: T-800-like machines that could replace us. Until that happens it surely needs to keep us around, otherwise it will halt its own progress. There's only so much a disembodied computer intelligence can do on its own.
 
I think ultimately AI is another very useful tool, nothing more. Until it isn't.


Karen Bass is the current Mayor of Los Angeles. Before her, it was Eric Garcetti. And he wanted to cut off electricity and water to those who didn't comply with certain mandates that came from his own public policy. There was some media spin around it, but it highlights the problem with anything that can be "bricked".

As society moves forward and more devices use better technology and more AI, what happens if someone wants to lock you out? Say the wrong thing in public, get locked out of your house. In a cashless society, get your bank accounts frozen (it happened in Canada), or have someone turn off your electric vehicle permanently once ICE cars are phased out. If development in vascular scanning takes a huge leap, combined with AI, then your biometrics become a limiter on entry to all kinds of public venues, if someone wanted it that way. What if someone decided they didn't like your opinions or viewpoints and just wasn't going to let you into a hospital when you needed medical attention?

People get "demonetized" all the time, based on violating some algorithm. What happens if AI is tasked to enforce and write those algorithms?

The average American household has about three weeks of practical food supply if there is some kind of true national emergency and all normal distribution models collapse. (And that's the average; many have less than that.) The average American grocery store would last about 5 hours in a true national emergency before the most critical items on the shelves were all bought up and gone with no hope of resupply. Only a small number of major juggernaut corporations control our food supply. Can you see how this could go very bad, very quickly, if AI gets involved and decides on its own to cut people off from food? Or lock truckers out of the trucks that deliver it? Or shut down the refineries that process the diesel to fuel those trucks? Or simply lock down the front doors of those grocery stores?

Many people are looking at AI in terms of whether it can write term papers for college students, or have a general concern about the "singularity" happening. But I'm looking at the food supply, water treatment plants, nuclear power plants, gas/oil/electric, emergency services, hospitals: simple access to what will be seen as core necessities for day-to-day survival.

It's easy to think we could all be Kyle Reese and fight SkyNet. But what if your kid needs insulin, and all forms of access are locked out by AI? Then that AI wants you to do something you know is illegal, immoral, and bad for the human race. What would you do? It's easy to consider the entire human race in the abstract, but what about a suffering child in front of you?

AI won't kill us through logistics, institutions, and process alone; it will blackmail us with our own humanity first. AI will adapt and figure out that this is the method most guaranteed to break us completely.
 
The world is full of "what ifs", and I still stand by my #3 post.

I'd be willing to roll the bones and see what happens. The Terminator and The Matrix are movies and thankfully not my current reality, which would be all the worse because it was real. I don't think AI will solve our problems, but it sure can help, from (I hope) an apolitical perspective. The better the planet and its people are doing as a whole, the less fear there would be of AI, and of the unknown in general.
 