On July 22, Apple released iOS 12.4, an update that closed three serious vulnerabilities discovered by Natalie Silvanovich of the Google Project Zero team. The most dangerous of them (CVE-2019-8646) allows an attacker to steal data from a remote device without any user interaction. In theory it can exfiltrate arbitrary information, but the real extent of the potential damage to owners of vulnerable smartphones and tablets has not yet been fully studied.
There is still little information about the other vulnerabilities; it is only known that one of them (CVE-2019-8647) can crash SpringBoard, the module responsible for rendering the home screen of iOS devices.
Silvanovich published a proof of concept for CVE-2019-8646 that implements a typical attack scenario: a specially crafted iMessage is processed on the device, and in the course of that processing, data that should never leave the phone is sent to the attacker's server. Let's look at this vulnerability more closely, and also talk about the human factor in the processing of voice recordings for Siri and other voice assistants.
The disclosure of these vulnerabilities is an interesting case where media representatives have to interpret the technical description of the problem themselves. All publications are based on three vulnerability reports in the Google Project Zero bug tracker: one covers the most dangerous vulnerability (the data leak), another describes the SpringBoard crash, and a third concerns a memory corruption error with unknown consequences (or possibly none, since practical exploitation of it is difficult). Information about yet another bug (CVE-2019-8641) has not been disclosed, because the patch released by Apple turned out to be ineffective.
Only phones running the current version of iOS 12 are affected by the vulnerabilities. Media publications quote, with varying degrees of detail, the rather cautious wording about technical specifics from the original bug report, so for the average user the situation looks strange: the level of drama is unclear. Most likely, more details will be revealed in the Google Project Zero team's presentation scheduled for this week at the Black Hat conference. In the meantime, another researcher took Silvanovich's PoC and recorded a video of it in action:
Simplifying somewhat, this is what happens: a crafted iMessage containing a URL is sent to the user. The iMessage client on the phone tries to process this message automatically, and in some cases it requests the address. An error in the handler leads to an incorrect calculation of the URL's length. As a result, when the request does occur, the contents of an adjacent memory block are appended to the original address as a parameter. The attacker's server only needs to log the incoming request and decode the data. The full range of information that can be stolen this way has not yet been precisely determined: the iMessage archive and binary data are mentioned. The video above shows an example of an attack on the SpringBoard module that leaks an image previously viewed by the device owner.
Even though the full consequences of exploiting this problem have not yet been studied, the comments of some industry representatives boil down to the idea that attitudes toward iPhone security should be reconsidered. We don't think there is good reason for that: judging the security of a system by the number and complexity of patched bugs is in most cases a bad idea. Still, it won't hurt to install the iOS update if you haven't already: any bug exploited without the user's knowledge is by definition dangerous.

Voice Recognition Suffering

Image source: original artwork by the artist Vasya Lozhkin.
Another piece of Apple-related news concerns the Siri voice recognition system. On July 26, the British publication The Guardian ran an article describing the work of company contractors hired to improve the automated algorithms. In general, this is logical: if something in a user's speech cannot be transcribed, you should try to improve the system. But this approach carries a privacy risk, especially considering that the voice assistant sometimes activates and starts recording audio not on the user's command, but accidentally, on its own. This is what an anonymous representative of an Apple contractor told The Guardian.
Apple is not the first company to be criticized for employing teams of "live listeners" for its voice recognition system. On July 10, the Belgian publication VRT reported not only similar practices, but also a leak of recording archives from one of Google's contractors. In April, a similar report was published about Amazon's Alexa speech recognition system.
When a device that constantly records sound, and sometimes sends recordings to the manufacturer, appears in your home, privacy questions arise by definition. The reason the three media reports mentioned above were so widely discussed was the "unexpected" discovery that your voice commands are heard not only by a soulless machine but also by a living person. While Amazon, Google, Apple and even Yandex are trying to capitalize on a convenient new feature, they need to earn users' trust, even when there is no direct threat of a data leak. And in most cases there isn't: at most one hundredth of all recordings is sent to contractors, without the possibility of identifying specific users.
Nevertheless, some reaction was needed, and so far it looks like this. Apple has suspended Siri's quality control program. Amazon has added a privacy setting that disables human review of voice recordings for your account. Google has suspended the review of recordings in the European Union, since regulators have questions about whether such practices comply with the GDPR. This seems like good news: consumer privacy in this case may end up even better protected than in other situations. The question is how much abandoning the review will affect recognition quality. This is a moment when society has a choice: faster progress or less intrusion into private life.

Disclaimer: The opinions expressed in this digest may not coincide with the official position of Kaspersky Lab. The editors generally recommend treating any opinions with healthy skepticism.