Hackers might use artificial intelligence to acquire user passwords with near-perfect accuracy by “listening” to an unsuspecting person’s keystrokes, according to a worrying study published earlier this month.
A group of computer scientists from the United Kingdom created an artificial intelligence model to recognise keyboard sounds on the 2021 MacBook Pro, which they described as a “popular off-the-shelf laptop.”
According to the study results, published on Cornell University’s arXiv preprint server, when the AI programme was run on a nearby smartphone, it could reproduce the typed password with a stunning 95% accuracy.
During a Zoom video call, the hacker-friendly AI tool was also strikingly accurate when “listening” to typing through the laptop’s microphone: according to the researchers, it reproduced keystrokes with 93% accuracy, a record for that medium.
The researchers warned that many users are unaware that malicious actors could monitor their typing to breach accounts – a hack known as an “acoustic side-channel attack.”
“The ubiquity of keyboard acoustic emanations not only makes them a readily available attack vector but also prompts victims to underestimate (and thus not try to hide) their output,” according to the report.
“For example, when typing a password, people will frequently hide their screen but do little to mask the sound of their keyboard.” To gauge accuracy, the researchers pressed each of 36 keys on the laptop 25 times, with every press “varying in pressure and finger.”
The programme could “listen” for distinguishing features of each key press, such as sound wavelengths.
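The core idea can be sketched in a few lines of code. The toy example below is an assumption-laden illustration, not the deep-learning model from the study: it fakes two keys as short “clicks” with different dominant frequencies, averages their magnitude spectra over repeated presses (echoing the study’s 25 presses per key), and then identifies an unknown press by nearest-centroid matching. The function names, frequencies, and synthetic audio are all invented for the sketch.

```python
import numpy as np

SAMPLE_RATE = 16_000
rng = np.random.default_rng(0)

def synth_keystroke(dominant_hz: float) -> np.ndarray:
    """Fake a 50 ms key-press 'click': a decaying tone plus background noise."""
    t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * dominant_hz * t) * np.exp(-60 * t)
    return tone + 0.05 * rng.normal(size=t.size)

def spectrum_feature(audio: np.ndarray) -> np.ndarray:
    """Magnitude spectrum: the 'sound signature' a classifier learns from."""
    return np.abs(np.fft.rfft(audio))

# Pretend the 'A' and 'B' keys resonate at different frequencies.
key_freqs = {"A": 900.0, "B": 2100.0}

# "Training": average spectrum per key over 25 simulated presses.
centroids = {
    key: np.mean([spectrum_feature(synth_keystroke(f)) for _ in range(25)], axis=0)
    for key, f in key_freqs.items()
}

def classify(audio: np.ndarray) -> str:
    """Guess which key was pressed: nearest centroid by Euclidean distance."""
    feat = spectrum_feature(audio)
    return min(centroids, key=lambda k: np.linalg.norm(feat - centroids[k]))

print(classify(synth_keystroke(900.0)))   # classifies as "A"
```

A real attack replaces the synthetic clicks with microphone recordings and the nearest-centroid step with a trained neural network, but the principle is the same: distinct keys leave distinct acoustic fingerprints.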
The iPhone 13 mini was positioned 17 centimetres from the keyboard. The study was conducted by Joshua Harrison of Durham University, Ehsan Toreini of the University of Surrey, and Maryam Mehrnezhad of Royal Holloway, University of London.
The prospect of AI tools assisting hackers is yet another risk factor attached to the emerging technology.
Several prominent tech figures, including OpenAI chief executive Sam Altman and entrepreneur Elon Musk, have cautioned that AI could pose a substantial risk to humanity if sufficient safeguards are not put in place.
According to the authors, these types of attacks are understudied despite their lengthy history: a partially declassified NSA document from 1982 already identified “acoustic emanations” as a vulnerability.
The study adds to recent concerns about how artificial intelligence capabilities could be used to threaten security and privacy.
According to Insider, AI can also make online scams harder to identify, because it makes it simpler to personalise each scam for its target.
Source: vtt.edu.vn