2018-03-03

Science & Technology
www.thehindu.com

There are three standard counter-responses to the claim that technology can be dangerous in itself. One, the fault is not in technology but in the humans who use such technology: guns don’t kill, only people do. Two, technology is as useful as it can be harmful. Three, technology will always be under our control and so we can literally pull the plug when we want.

All these views can be effectively challenged, particularly in the case of artificial intelligence (AI). There is a fundamental difference between a knife, or even far more complex machines, and AI: the degree of independence that AI technologies possess. An AI machine is an autonomous entity, and from what we have seen of such machines, it resembles a human being in its capacities for decision and action.

Enforcing a particular view

The real worry about these technologies is the emphasis on intelligence rather than other characteristics of human beings. AI is an attempt to reproduce superintelligent humans. It chooses one aspect of human beings, namely this vague idea called intelligence, and artificially magnifies it to an extent that allows the machine to do things far better than humans can. The success of these machines only reinforces the success of a particular view of human beings: not their vulnerability and finitude (characteristics that have catalysed so much of great music, art and literature), but largely some calculative capacity.

Purely intelligent creatures, whether people or machines, are bad for humanity. The restricted meaning of intelligence in AI is that associated with superlative memory, calculative power, decision-making capacity, high speeds of action, etc. These machines thus become superbeings, and a society filled with many superbeings is a recipe for disaster.

Being human is not about superintelligence and super capacity. It is about living with others and learning to live within our limitations. Vulnerability, decay and death characterise any living form. AI machines are a mirror to our desire for immortality and absence of human weaknesses. There is nothing wrong in this desire per se, but building surrogate machines is not the way to achieve this.

AI has not been used to get rid of poverty, to have more equitable distribution of wealth, or to make people more content with what they have. The types of AI we have, including war machines, will primarily be dictated by profit for the companies that make them. Is this what we need? It would be a sad world where ‘life’ forms come into existence based on the logic of profit.

The cost of technology

Unlike a gun, the AI machine is a performer in itself. To think that such machines will be subservient to us all the time is wishful thinking. We have learnt nothing about the master-slave relation if we think that these machines are only meant to be our ‘slaves’ which make our lives ‘easier’. All technologies come with a cost (not just economic but also social and psychological), and we have very little idea of the cost that AI will exact from us. Most worryingly, these thinking machines, which are smarter than us, will know exactly how to manipulate us, to the extent that we will not be able to see their negative effects.

The only good thing about horrible dictators like Adolf Hitler is that they eventually die. Imagine a Hitler who lives forever. This is what AI machines could make possible. The foolishness of men will come to haunt the future of humankind in more ways than one. Is AI the final beginning of the end?

Sundar Sarukkai is a professor of philosophy at the National Institute of Advanced Studies, Bengaluru

 

It’s well known that we humans are not nearly as good as we think we are when it comes to thinking about the future. From paper to the telegraph, from steam engines to computers, human beings have always feared new technology. We’ve always treated it as the ‘other’. Yet we know from history that we have always embraced technology eventually, to make our lives better and easier. There’s no reason to believe that our future with AI will be any different. What we fail to acknowledge in all the raging rhetoric about AI gods and war machines in the media today is that we are beguiled by the idea of evolution. If we once sought to create tools of propaganda and change and ended up using paper widely, today we seek a life beyond the material; we seek answers to ‘what next’ for humankind.

Giant leaps

AI is a natural step in the evolution of humankind. With every passing day, we’re witnessing the rise of AI in health and medicine. It was recently reported that machine learning can predict heart disease, and that self-healing electronic skin lets amputees sense temperature on prosthetic limbs. Health care and medicine become affordable and accessible with AI taking centre stage in telemedicine and quick diagnosis. Water and energy networks become accessible and widely usable when AI can mediate the use of different sources, without humans having to physically travel to and service remote locations.
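To make the heart-disease example concrete, here is a minimal sketch of what such a prediction system typically looks like: a classifier trained on patient features. The features, the synthetic data, and the risk formula below are placeholders invented for illustration, not the model from the reported study.

```python
# A hedged sketch of "predicting heart disease with machine learning":
# train a classifier on patient features, then score a new patient.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: age, resting blood pressure, cholesterol
X = rng.normal(loc=[54, 130, 240], scale=[9, 15, 45], size=(500, 3))
# Synthetic labels loosely correlated with the features (invented, not clinical)
risk = 0.03 * (X[:, 0] - 54) + 0.02 * (X[:, 1] - 130) + 0.005 * (X[:, 2] - 240)
y = (risk + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
patient = [[61, 145, 280]]  # hypothetical new patient
print(f"Estimated risk: {model.predict_proba(patient)[0, 1]:.2f}")
```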

Like any other technology, AI is in a nascent stage and is being shaped by innovators across the world. AI will not be one thing; there will be many kinds of AI and many kinds of species augmented by AI. We’ll witness both the beauty and the dangers of what a few are creating. That is why it’s more important now than ever before to get more people to participate in building and shaping AI. Inclusive AI will mean that more of society will be able to enjoy its benefits and participate in shaping the future. Technology does not inherently have agency; it acquires agency through its interaction with us and the life we give it.

This will change as AI grows. We’re giving birth to a new world of intelligence, and the process will be like raising children of a whole new species (or many new species). This species will not be bound by the constraints of the human body and will exist in many forms across space and time. We could twiddle our thumbs and write about the singularity and the fears surrounding it. But it’s more important for us to seed the world around us with the types of AI we want to see in the future. Today, we still have control and can shape AI in its early stages. We need to wrap our heads around what this means for us and the responsibility that comes with it. How do we make it fail-safe? Do we hard-code backups and kill switches for situations that go bad, as sketched below? Perhaps Isaac Asimov’s Three Laws of Robotics are the science-fiction precedent we can draw inspiration from.
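A minimal sketch of the kill-switch idea, assuming a simple agent loop: the agent checks an out-of-band signal (here, a file only the operator creates) before every action. The file path and the DummyAgent are hypothetical illustrations, not a claim about how real AI systems are built.

```python
# Sketch: an agent loop that consults an external interrupt the agent
# itself cannot clear, before taking each action.
import os

KILL_SWITCH_FILE = "/tmp/agent.halt"  # operator creates this file to halt the agent

def kill_switch_engaged() -> bool:
    """Out-of-band signal, checked before every action."""
    return os.path.exists(KILL_SWITCH_FILE)

class DummyAgent:
    def choose_action(self):
        return "noop"
    def execute(self, action):
        pass  # a real agent would act on the world here

def run_agent(agent, max_steps: int = 1000):
    for step in range(max_steps):
        if kill_switch_engaged():
            print(f"Halted by operator at step {step}")
            return
        agent.execute(agent.choose_action())
    print("Finished all steps without interruption")

run_agent(DummyAgent())
```

The design point is that the switch lives outside the agent’s own decision loop; whether a genuinely smarter system could be prevented from routing around such a mechanism is exactly the open question the author raises.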

It’s all in our hands

We forget that we are the creators of technology. AI, by itself, is not looking to destroy humanity. We can’t wash our hands of it and question whether AI can destroy humanity, as though we have nothing to do with it. Whether we use AI to augment ourselves, create new species, or use it to destroy lives and what we’ve built is entirely in our hands — at least for now.

Ashwini Asokan is the CEO and founder of Mad Street Den, a computer vision and AI company based in Chennai

 

We are told that AI is working magic, and also that it may lead to humankind’s ultimate destruction.

Strong and weak AI

While we are far from “strong AI” (the idea of ‘thinking’ machines), we already have “weak AI” all around us, from translation apps to facial recognition on social networks. For most marketers, though, AI has just become a buzzword for any form of algorithmic decision-making, or the use of big data combined with self-improvement. Weak AI builds on mathematical techniques that have been in development since the 1940s but have only recently become computationally feasible.
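For a concrete sense of how old and how simple the underlying mathematics can be, here is a minimal sketch of the perceptron learning rule (Rosenblatt, 1958, building on 1940s work on artificial neurons) learning the logical AND function. The toy data and names are invented for illustration.

```python
# The perceptron rule: adjust weights only when a prediction is wrong.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y is 0 or 1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                 # zero when correct, +/-1 on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND from four examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X])  # [0, 0, 0, 1]
```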

Apart from computational power, AI requires copious amounts of data to learn from. This data can either be generated by the machine itself (imagine a machine being instructed in the basic rules of chess and what constitutes “success”, then playing millions of games against itself and using those games as the data for improving itself, as sketched below) or it has to be supplied with data. If the data being provided have not been cleaned, whether in terms of accuracy or bias, then the resultant learning will also exhibit the flaws in the data.
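Here is a toy sketch of that self-play idea, using an invented stand-in “game” rather than chess: the point is only that finished games label their own positions, so the machine generates its own training data without outside input.

```python
# Self-play data generation: two copies of the same (here: random) policy
# play a trivial game, and every position is labelled with the final outcome.
import random

def play_one_game():
    """Stand-in 'game': players alternately add 1-3 points; first to 10 wins."""
    scores, history, player = [0, 0], [], 0
    while max(scores) < 10:
        scores[player] += random.randint(1, 3)
        history.append((player, tuple(scores)))
        player = 1 - player
    winner = 0 if scores[0] >= 10 else 1
    # Label every position with the eventual outcome -> training examples
    return [(state, 1 if p == winner else 0) for p, state in history]

dataset = [example for _ in range(10_000) for example in play_one_game()]
print(len(dataset), "labelled positions generated by self-play")
```

A real system would replace the random policy with a learnable one and retrain it on this self-generated data; crucially, no human-provided examples are needed, which is why data quality problems only arise in the second, externally supplied case.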

By using AI to create closed captions on YouTube videos, Google is helping persons with hearing impairment (though currently in a restricted number of languages); by using AI for real-time image recognition, visually impaired persons can have the world in front of them narrated to them. And it is not just in rational “thinking” that AI can aid humankind, but also in performing emotional labour (as films like Her highlight). These beneficial uses of AI cannot be denied. Yet scientists and leading thinkers like Stephen Hawking, Nick Bostrom, and Elon Musk warn us about the dangers of AI and the coming technological singularity.

Ethics and regulation

While it may sound trite, the greatest promise of AI is that of beneficial change at a faster rate than ever before, and accelerating. The greatest challenge of AI is the same, except with harmful change. While technological capabilities — and with it human capabilities to use technology — are changing at a faster pace than ever before, our ability to arrive at ethical norms regarding uses of AI and our ability to regulate them in an intelligent and beneficial manner have not nearly kept pace, and are not likely to. That is why we need AI researchers to actively involve ethicists in their work. Some of the world’s largest companies are cornering the market for AI researchers with backgrounds in mathematics and computation: Baidu, Google, Alibaba, Facebook, Tencent, Amazon, Microsoft, Intel. They also need to employ ethicists.

Additionally, regulators across the world need to work closely with these academics and with citizens’ groups to put brakes on both the harmful uses and the harmful effects of AI. Parts of this will involve laws regulating the data which fuel AI; parts will involve empowering consumers and citizens vis-a-vis the corporations and governments that use AI; and other parts will involve bans on certain kinds of uses of AI. Some of the most difficult legal and ethical questions around AI, such as liability for independent decisions made by AI, might not need answering just yet, given that we are still far from strong AI. But there are already difficult questions to be asked about the harms caused by AI, from joblessness to discrimination when AI is used to make decisions. For governments to regulate, we need clear theories of harms and trade-offs, and that is where researchers really need to make their mark: by engaging in public discourse and debate on what AI ethics and regulation should look like. And we need to do this urgently.

Pranesh Prakash is policy director of the Centre for Internet and Society, Bengaluru
