2021-08-14


Science & Technology
www.thehindu.com


On August 5, Apple announced a new feature to limit the spread of sexually explicit images involving children. It will soon be introduced in the iMessage app, iOS and iPadOS, and Siri. The tech giant says the feature will protect “children from predators”, and that it was developed in collaboration with child safety experts.



The Cupertino-based company noted that its sensitive-image-limiting feature will help law enforcement agencies in criminal investigations. For a company that famously stood up to the FBI’s demand in 2016 to unlock the San Bernardino shooter’s iPhone, this is a big move.

Several experts and advocacy groups say that Apple’s new feature could become a backdoor for government surveillance.

Apple’s child protection feature is an on-device tool that will warn children and their parents whenever a child receives or sends sexually explicit images. The machine learning (ML)-based tool will be deployed in the iMessage app to scan photos and determine whether they are sexually explicit. The company noted that other private communication in the app will not be read by its algorithm.


Once a picture is identified as sensitive, the tool will blur it and warn the child about the content. As an additional layer of precaution, the child will also be told that their parents will get a text if they view the image. This feature can be switched on or off by parents.
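The flow described above can be summarised as a simple decision: deliver the image unchanged if the feature is off or the image is not flagged, otherwise blur it, warn the child, and notify the parent only if the child still chooses to view it. The sketch below is only an illustration with hypothetical stand-in functions (is_explicit, blur, notify_parent); Apple has not published its implementation.

```python
# A minimal, hypothetical sketch of the warn-and-notify flow described above.
# None of these names come from Apple's software; the classifier and blur are stubs.

def is_explicit(image: bytes) -> bool:
    # Placeholder for Apple's on-device ML classifier.
    return False

def blur(image: bytes) -> bytes:
    # Placeholder for producing a blurred preview of the image.
    return image

def notify_parent(parent_contact: str) -> None:
    print(f"Notifying {parent_contact} that a flagged image was viewed.")

def handle_incoming_image(image: bytes, feature_enabled: bool,
                          child_chooses_to_view: bool, parent_contact: str) -> bytes:
    """Blur flagged images, warn the child, and notify the parent only if the
    child chooses to view the image and the parent has switched the feature on."""
    if not feature_enabled or not is_explicit(image):
        return image                      # delivered unchanged
    if child_chooses_to_view:
        notify_parent(parent_contact)     # the on-screen warning told the child this would happen
        return image
    return blur(image)
```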

In the U.S., child pornographic content is classified as Child Sexual Abuse Material (CSAM) and is reported to the National Center for Missing and Exploited Children (NCMEC), which acts as the country’s reporting centre for such images. NCMEC works with law enforcement agencies in the U.S., and notes that sexually explicit images are shared on Internet platforms people use every day.

To limit CSAM content on its platform, Apple says it will scan photos on a user’s device and cross-reference them with NCMEC’s database. The tech giant will use a hashing technology in iOS and iPadOS to transform each image into a unique number. This process ensures that identical images produce the same hash even when they are cropped, resized or colour converted.
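Apple’s own hashing algorithm (NeuralHash) is proprietary, but the general idea behind perceptual image hashing can be illustrated with a much simpler “average hash”: shrink the image to a tiny greyscale grid and set one bit per pixel depending on whether it is brighter than the average, so visually similar images yield nearly identical fingerprints. The sketch below is only an analogy, assuming the Pillow library; it is not Apple’s algorithm.

```python
# A toy "average hash" illustrating perceptual hashing (assumes the Pillow library).
# Apple's NeuralHash is proprietary and far more robust; this is only an analogy.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size greyscale grid and set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits  # a 64-bit fingerprint for the default 8x8 grid

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances indicate visually similar images."""
    return bin(a ^ b).count("1")

# Two resized or recoloured copies of the same photo should produce (nearly)
# the same hash, while unrelated photos should differ in many bits.
```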

Then, a cryptographic technique called private set intersection (PSI) powers the matching process without allowing Apple to view the content of the images. But once a particular threshold for the number of CSAM images on a phone is breached, Apple will manually review the pictures, disable the user’s account and send a report to NCMEC. The threshold is maintained to ensure that accounts are not incorrectly flagged.
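In plain terms, an account is escalated for human review only after the number of on-device matches against the known-CSAM hash list crosses a threshold. The sketch below uses an ordinary set intersection and a made-up threshold purely for illustration; in Apple’s actual design the matching runs under PSI and threshold secret sharing, so neither the device nor Apple learns the match count until the threshold is crossed.

```python
# A simplified sketch of threshold-based flagging using an ordinary set
# intersection. The hash sets and the threshold are hypothetical; Apple's real
# system hides the match count cryptographically until the threshold is exceeded.
MATCH_THRESHOLD = 30  # illustrative value only, not a figure published in this article

def count_matches(device_photo_hashes: set[int], known_csam_hashes: set[int]) -> int:
    """Number of on-device photo hashes that appear in the known-CSAM database."""
    return len(device_photo_hashes & known_csam_hashes)

def should_escalate(device_photo_hashes: set[int], known_csam_hashes: set[int]) -> bool:
    """Escalate for manual review and reporting only past the threshold,
    which keeps isolated false matches from flagging an account."""
    return count_matches(device_photo_hashes, known_csam_hashes) >= MATCH_THRESHOLD
```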

Even though Apple’s intention to combat child pornography is laudable, the feature has come under strong criticism because it could compromise the iPhone maker’s end-to-end encryption system.

Digital rights group the Electronic Frontier Foundation (EFF) notes that “even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.”

EFF points out that it will be difficult to audit how Apple’s ML algorithm tags an image as sexually explicit, since machine-learning classifiers without human oversight routinely misclassify content. Another area of concern is the client-side scanning used in this process, which checks a message against a database of hashes before it is sent. In effect, every message is inspected by a third party before it leaves the device.


“But even if you believe Apple won’t allow these tools to be misused, there’s still a lot to be concerned about,” Matthew Green, a cryptography professor at Johns Hopkins University, tweeted. “These systems rely on a database of ‘problematic media hashes’ that you, as a consumer, can’t review.”

Green also raises the question of hash “collisions”, where two different files produce the same hash: “Imagine someone sends you a perfectly harmless political media file that you share with a friend. But that file shares a hash with some known child porn file.”

