Microsoft boss says AI ‘is not equipped to take over’

The head of Microsoft’s artificial intelligence platform said the company will continue to press ahead with the development of large-scale AI models, despite concerns from some in the field that the technology is moving too fast and is too unpredictable to be safe.

Eric Boyd, corporate vice-president of Microsoft’s artificial intelligence platform, told Sky News: “The potential for this technology to truly increase human productivity … to lead to global economic growth is so strong that we’d be stupid to put it aside.”

In 2019, the US software giant invested $1 billion in artificial intelligence startup OpenAI.

The cash and computing power provided by Microsoft through its Azure cloud computing platform enabled OpenAI to create GPT-4, the world’s most powerful “large language model”. It launched to the public as a chatbot, ChatGPT.

Microsoft soon built GPT-4 and its conversational capabilities into its Bing search engine. The company has also incorporated the technology into a product called Copilot, a virtual digital assistant, and into many of its existing software products, including its word processor and spreadsheets.

Boyd explained that Microsoft’s AI vision is not to take over the world, but to change the relationship between humans and computers.

“It’s going to really redefine the interfaces we’re used to, the way you’re used to talking to machines — keyboards and mice and so on. I think it becomes more language-based.”

But what of AI leaders’ claims that large “generative AI” models (those that can create text, images, or other output) are developing too quickly and are not yet fully understood?

“Experts in the field have gotten there on the strength of their credentials,” Boyd said.

“Of course, we’re going to listen and take all their feedback seriously. But I think if you look at what these models are doing, what they’re capable of, you know, it seems clear that those concerns are far from what we’re actually doing.”

Video: Sky News trials artificial intelligence reporter

Boyd believes that the current capabilities of language models such as ChatGPT are overstated.

“People talk about how AI is taking over, but it doesn’t have the capability to take over. These are models that produce text as output,” he said.

Boyd said he was more concerned that AI might exacerbate existing societal problems.

“How do we make sure these models work safely in the use cases they’re in?” he mused.

“How do we try to minimize the biases inherent in society and the biases that emerge in our models?”

Video: “AI to boost UK economy by £400bn”

Read more from Sky News:
China could fall further behind U.S. in AI race due to ‘tough’ regulation
Here’s how artificial intelligence is changing the future of journalism

But some of the biggest recent concerns about AI aren’t about the safety of the technology itself. Instead, they focus on how much damage it could do if applied to the wrong tasks, whether diagnosing cancer or managing air traffic control, or if it were intentionally abused by rogue actors.

Boyd acknowledged that some of those decisions rest with companies like Microsoft. He cited Microsoft’s decision not to sell facial recognition software it had developed to law enforcement agencies. But the rest, he said, is up to regulators.


“I think as a society we have to think about where this technology fits in and where we are concerned about its use. But we definitely think there is room for regulation in this industry.”

The partnership with OpenAI gives Microsoft a major boost in the race to dominate the artificial intelligence market. But the competition is fierce: Google has the world’s leading artificial intelligence research division and is pushing to bring AI products to consumers.

Big tech companies appear to have no intention of slowing down the race to develop bigger and better artificial intelligence. This means that society and our regulators must accelerate their thinking about what safe AI looks like.
