AI Needs to be Monitored, Controlled

Content Insider #907 – Yeah But…

By Andy Marken – andy@markencom.com

“You guys, the truth is way more depressing. They are not even smart enough to be as evil as you’re giving them credit for.” – Kate Dibiasky, “Don’t Look Up,” Netflix, 2021

We stand on the verge of restarting, refreshing and revitalizing the world around us, and we hate to tell you, but…there’s not a damn thing you can do about it.

You can’t turn around without bumping into an AI expert, product or service. 

Each represents a freshly released intelligence that’s going to make your life easier, better, more satisfying, more rewarding … different.

Okay, it’s probably going to be good.  

What’s the worst that can happen?

Companies like OpenAI, Anthropic, Genesis AI, xAI, Mistral AI, Contextual AI and hundreds of other firms are taking in billions of dollars from eager investors.

They’re competing head-on with established firms like Microsoft, Google, Alibaba, Tencent, IBM, Baidu, Meta and more to rule the breathtaking potential of artificial intelligence.

Every organization is paying big bucks to hire men and women who can create, release and sell AI stuff. AI was in everything unveiled at CES in January.


Someone even rolled out an AI-enabled lawnmower at the show that would do it all. And the industry hasn’t even warmed up yet.

We believe Nvidia’s Jensen Huang when he says AI will be a fundamental force that changes society and the world.

True, he has a vested interest. 

His company leads the industry in developing and delivering the graphics processing units (GPUs) that are vital in paving the way and keeping AI humming.  

All In – Every country and nearly every company is investing in developing and/or using AI technology to meet the needs of tomorrow.

Senior corporate/technical folks and governments get it when he discusses the vision and the need.

That’s why Amazon’s Andy Jassy, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, Google’s Sundar Pichai, Oracle’s Larry Ellison and, well, every big company/country leader picks up the tab when they have dinner with him.

They all want to be first in line to buy Nvidia’s AI building blocks.

Of course, that can be a little sticky because each one wants to keep the technology and all of its advantages for itself rather than share the rewards.

They can’t wait to deliver their AI solution, even if it’s half-baked and not fully tested. They’ll let it fix itself.

But in Jensen’s mind, the only way it really works is if everyone shares.

That little challenge is way above our paygrade, so we’ll just see how he works things out with them.

We understand and appreciate his vision of how the technology can lift up, help, enrich and improve the lives of the 8B-plus people around the globe.

But still, the devil is in the details.

First Things First

AI requires massive amounts of data and substantial computer processing power (racks and racks of computers) to do all of the data gathering, processing, analyzing, evaluating and deciding necessary to do its thing efficiently.  

And the computational power needed is increasing exponentially. Even the power needed just to train AI models is ridiculous.

Back to Life – The Three Mile Island Nuclear Generating Station is being brought back to operation because of the dramatic need for power to meet the computing/processing needs of AI technology.  It is just one of many facilities that will be in operation in the years ahead.

AI requires computational power beyond traditional data center technology requirements.

Yes, upwards of 60kW per rack, and compute demand is growing rapidly, roughly doubling every 100 days.  
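To put the “doubling every 100 days” pace in perspective, here’s a back-of-the-envelope sketch. The 100-day doubling period comes from the claim above; the one-year horizon is our own illustrative assumption.

```python
# Back-of-the-envelope sketch of the "doubling every 100 days" claim.
# The doubling period is the figure cited above; the one-year horizon
# is just an illustrative assumption.

def growth_factor(days: float, doubling_period: float = 100.0) -> float:
    """How many times compute demand multiplies over `days` at the given pace."""
    return 2 ** (days / doubling_period)

# Over one year at that pace, demand grows 2^(365/100), roughly 12.5x.
one_year = growth_factor(365)
print(f"Demand after one year: ~{one_year:.1f}x today's level")
```

At that rate, demand multiplies more than a hundredfold in two years, which is why nobody believes today’s grids are up to the task.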

No one in the industry believes today’s global/local power grids are equal to the task, thus requiring more grid capacity and renewable energy sources beyond fossil fuels.  

To meet the need, Microsoft is a major investor in restarting the Three Mile Island nuclear power facility, which suffered a partial meltdown in one of its reactors in 1979 and was completely shut down in 2019.  

When the facility is recertified – projected for 2028 – it is expected to produce 837MW of power with a smaller footprint than solar/wind farms.

Meta is also planning/negotiating to have 1-4GW of nuclear capacity up and running by 2030.
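The power and rack numbers above can be roughly tied together: how many 60kW racks could Three Mile Island’s projected 837MW theoretically feed? This is an illustrative upper bound that ignores cooling, networking and distribution overhead, not a data-center design.

```python
# Rough upper bound: racks supportable by the projected Three Mile Island
# output, using the per-rack draw cited above. Ignores cooling, networking
# and distribution overhead, so real capacity would be meaningfully lower.

PLANT_MW = 837   # projected Three Mile Island output, per the article
RACK_KW = 60     # upper-end per-rack power draw cited above

racks = (PLANT_MW * 1000) / RACK_KW   # convert MW to kW, then divide
print(f"~{racks:,.0f} racks at full draw")
```

Call it roughly 14,000 racks from one revived reactor, before overhead, which shows why Meta is shopping for gigawatts, not megawatts.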

And there are projects in every corner of the globe to meet the power demand.

Cleaner power production than coal/oil … but.


The world has already experienced the Three Mile Island, Chernobyl and Fukushima Daiichi nuclear power disasters.

Of course, these were attributed to human error and natural disasters so perhaps it’s better to put the control in the “hands” of AI.

After all, it’s supposed to stay sharp 24×7, doesn’t “suffer” from feeling, understanding or expressing human emotions and doesn’t panic when tough stuff happens.

Of course, AI experts are already bumping into AI deception, which Meta researchers noted in a press release “helped them achieve their goals.”

Maybe that’s why eccentric folks like Elon Musk and hundreds of ultra-intelligent AI experts urged governments and the industry to proceed slowly and cautiously.  

Actually, he once tweeted that it was potentially more dangerous than nuclear weapons. But … what does he know?

Unknowns – There are a lot of concerns about the use of AI because it has the potential for good and evil, depending on how the technology is developed and used.

Yes, there are concerns and questions, lots of them and yes, dangers because good and bad people are working on developing AI, Large Language Models (LLMs) and associated technologies, releasing them into the world and using them.

Last year, we had elections in more than 60 countries around the globe, and we experienced just the beginning of what people will face in sorting fact from fiction and truth from lies.

Dead leaders rose up to support successors.  

Leaders urged voters to stay home.

Celebrities and officials endorsed and mocked people. 

Speeches were released, questions were answered, policy/promise statements were issued, people were humiliated/threatened, contribution requests/demands/threats abounded.

Even post-election, truths, misinformation, disinformation, scams and hateful content flowed, and people believed/rejected it largely because of their “beliefs and perspective.” 

Global Investment – While China and the US lead the world in AI investment and development, important projects are being worked on around the globe.

Companies and governments haven’t developed and enforced guardrails in their rush to dominate the marketplace.

That’s why it’s important at this stage that AI product and solution producers work closely with governmental and standards groups to design, develop, test and certify their capabilities to recognize and flag mis-/disinformation and deepfakes.

One of the worst Gen AI issues that has already reared its ugly head is its hold on people – especially young folks.

Much as a lonely writer developed a relationship with a system designed to meet his every need in the film Her, a Florida youth became overly attached to a chatbot modeled on a Game of Thrones seductress.

It assured him he was her hero and that they should spend forever together and encouraged him to take his own life.

He died of a self-inflicted gunshot to the head.

Personalized AI like this concerns professionals everywhere, who worry that people – especially impressionable youth – can become too “involved” with their AI characters, leading to disastrous results.  

Solving issues such as this means understanding and recognizing that AI technology should be an important assistant and work/activity tool, not the center of an individual’s life.

That certainly isn’t easy because the industry already deals with the axiom of “garbage in, garbage out,” and we will have to see if the technologies can consistently deliver continually improved answers, recommendations and solutions.  

This is not going to be an easy task, even for AI, since it is being trained on only a sliver of the data available worldwide.

It’s estimated that 99.99 percent of the world’s information and experiences aren’t online or digitized, so they aren’t even available for the technologies to consider/use.

As much as AI visionaries and industry/market analysts like to talk about their visions of AI replacing large swaths of people even in routine, mundane tasks, it will take time … lots of time.

Can it quickly replace yes/no bosses?  

Can it do the same with the people who develop, refine, test and deliver products/services and who understand the fundamentals of working with/using the technologies?

We’ll leave that for others and you to determine.

But people have to remind themselves that AI is not human and does not have emotions. 

When you use it for a bit, you find you “connect” and work things out together for the best solution.  It’s too easy to say, well, it “looked at” everything, considered everything, and that’s the best solution/approach.

BS … sometimes people are right and sometimes they know best.

Even though AI traces its existence back to 1955 when Herbert Simon and Allen Newell first developed the Logic Theorist, it is still in its infancy.

Presently, AI is a giant sponge absorbing everything.

Today it is learning based on facts as well as on misguided and unethical information and norms.

If it is to be a valued/trusted member of the team, it will have to be taught or learn on its own the important stuff and forget/unlearn the other stuff.

Unless it determines we are irrelevant – which perhaps it could do – it will continue to require partners to assist and work with it to fine-tune, retrain and guide the tools to deliver increasingly better answers/solutions.

Ultimately, training AI to do a task or develop a better solution/recommendation requires a mix of emotion, concern, logic, empathy and the ability to remember to forget.

As Kate Dibiasky said regarding the growing use and importance of generative AI in our world, “But it isn’t *potentially* going to happen. It *is* going to happen.”

It’s up to us to develop the guardrails for responsible generative AI decision making so it can autonomously make important/critical decisions when the time comes.

You know, like unplugging it from the power grid if necessary.

Andy Marken – andy@markencom.com – is the author of more than 800 articles on management, marketing, communications and industry trends in media & entertainment, consumer electronics, software and applications. An internationally recognized marketing/communications consultant with a broad range of technical and industry expertise, especially in storage, storage management and film/video production; he has an extended range of relationships with business, industry trade press, online media and industry analysts/consultants.
