If I finish a session without saying a word, I consider it a huge success.
Unfortunately, sometimes I can no longer stay under the radar. The microphone has to be unmuted, the camera probably has to go on, and all the attention is on me.
At least, until this week.
But thanks to a new iPhone feature that allows anyone to clone their own voice, with no technical know-how and not much time required, that anxiety could become a thing of the past.
Announced back in May and now available as part of the public beta of iOS 17, Apple's next major software update coming to iPhones in September, the "Personal Voice" tool allows a copy of my voice to read any text aloud without me having to speak it myself.
How does it work?
The feature is located in the Accessibility section of the iPhone Settings app, under the Speech heading.
To make your own digital voice-on-demand, your phone asks you to read 150 fairly random phrases aloud, which takes about 15 or 20 minutes, depending on your patience.
“A German-born author won a prize for writing,” “During the Middle Ages in Europe, people bathed less often,” and “The ancient Greeks laid the foundations of Western culture” were among the sentences I was given. They also prompted some odd questions from people in the next room who could hear me.
The phone needs plenty of time to process the speech because it’s all done on the device itself, rather than being uploaded to a powerful computer somewhere at Apple’s headquarters.
It needs to be locked and kept charging, so it’s best to leave it working overnight.
Once your voice is ready, you can enable the Live Speech feature in the settings and choose your personal voice. Triple-clicking the phone’s side button will open a text box, and anything you type will be read aloud.
Is it convincing?
Without wishing to expose the lack of technical knowledge of some of my relatives, it depends a lot on the situation.
My sister seemed none the wiser when I sent a WhatsApp voice note asking whether she had checked on the status of Taylor Swift tickets. My mum responded to an invitation to the cinema with no qualms, until I asked whether she had seen the news.
Tech-savvy friends and loved ones were even more immediately skeptical.
“Who are you? What did you do to Tom?” asked one.
Another said: “Sounds a bit like you, but also like someone made a robot version of you.” The power of it is a little unnerving.
As for the meeting (admittedly my most ambitious attempt at deception), the longer the audio went on, the more my colleagues realised I was pulling a prank.
But overall, it’s impressive for something that only takes 15 minutes of work and a good night’s sleep.
Like the rise of generative artificial intelligence tools such as ChatGPT and the growing realism of deepfake videos, it has drawn attention not just to the power of such technology, but also to its accessibility.
When Sky News created a digital news anchor, whose voice reads this article via the play button at the top of the page, it required a dedicated text-to-speech company, a long and professional recording process, and constant tweaking to make sure she didn’t trip up on certain words and phrases.
What I did will soon be possible on anyone’s iPhone, with no such effort or expertise required.
Isn’t this just asking for fraud?
Apple says it’s an accessibility feature designed for people who have difficulty speaking or are at risk of losing their voice.
The company says the randomised nature of the phrases used in the Personal Voice process, and the fact it all happens on the device, guarantee the privacy and security of users’ information.
Voices cannot be shared, can be deleted, and all 150 recorded phrases can be downloaded and backed up.
Computer security firm McAfee has warned that voice cloning technology in general will contribute to a rise in fraud, but said Apple’s protections appear adequate and are unlikely to exacerbate the problem.
McAfee researcher Oliver Devane told Sky News: “If you use an online service and there is a data breach, snippets of your voice could be stolen.
“With this, it exists only on the device, and your ability to delete the file removes that risk.
“If people want to use this technology for malicious purposes, there are already some services available.”
McAfee recently surveyed 1,009 adults in the UK and found that nearly a quarter have experienced or know someone who has experienced some kind of AI voice scam.
The study also found that 65 percent of adults don’t believe they can tell a clone from the real thing.
Earlier this year, voice tech company ElevenLabs offered a cautionary tale when it released a public voice cloning kit that allowed users to upload any audio to generate an artificial voice.
This resulted in fake clips of Emma Watson reading Mein Kampf and Joe Biden announcing that the US military would enter Ukraine.
How do you tell a real voice from a fake one?
No matter how it’s made, there are steps you can take to protect yourself from voice scams.
• Doubt the source – you can verify callers by asking them things only the real person would know.
• Listen for what makes them different – is their accent or speaking speed off? Have they stopped stuttering? Listen out for key vocal characteristics.
• Call them back – if the voice sounds right but the number is wrong, call the person back on a number you already know is theirs.
• Use an identity theft protection service – these will notify you if your data is compromised and ends up on the dark web.
• Agree a verbal codeword – a word or phrase shared with friends and family to use when making an emergency call, for example when they are not using their usual device.