“The promise of a technology-driven world may not be all goodness ... but something far darker and more disturbing,” Namrata Rana and Utkarsh Majmudar write in their book Balance: Responsible Business for the Digital Age. Rana is the India ambassador of the University of Cambridge Institute for Sustainability Leadership, and Majmudar writes about companies’ business responsibilities. Both are visiting faculty at the Indian Institute of Management Udaipur. The authors contend that the immense challenges all nations presently face—waste, water, energy, biodiversity, inequality and data—have led to the emergence of “a consciousness about the earth.” At the same time, the tremendous technological developments of the past decade have helped evolve “strategies ... that can help mitigate our misuse of the planet.” They write that, in such a scenario, “responsibility and balance can’t just be a few additional words that are tossed into the discussion.”
In the following excerpt, the authors discuss how every sector of the world economy “is seeing the benefit of capturing data and using it for increasing sales.” As data personalisation becomes central to business models across industries—education, healthcare, insurance and banking, among others—ordinary people can be “easily targeted, identified and therefore manipulated.” As India grapples with regulatory issues around privacy and data-hosting, and a concomitant lack of public awareness, Rana and Majmudar write, “we seem to be sitting on a time bomb.”
Ad tech conceals a dirty secret: it is increasingly personal, and it relies on tracking people. Marketers have long used re-targeting to follow customers around. Small bits of code downloaded onto your computer and mobile phone enable companies to track you, no matter which website you visit. This code constantly aggregates data and serves up content, campaigns and banners designed to entice you into buying whatever the advertiser is selling. If you once visited a clothing website or clicked on a banner, and now see information about the same apparel company no matter where you go, you are being tracked. A constant reminder to finalise the purchase.
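To make the mechanism concrete, here is a minimal sketch of such a tracking snippet, written in TypeScript for the browser. The ad network and its endpoint (ads.example.com/pixel) are hypothetical placeholders, and real tags are considerably more elaborate.

```typescript
// Minimal sketch of a retargeting tag. The network domain and
// endpoint below are invented placeholders, not a real service.

// Reuse a visitor ID stored in a cookie, or mint a new one, so the
// same browser can be recognised on every site carrying this tag.
function getVisitorId(): string {
  const match = document.cookie.match(/(?:^|; )vid=([^;]+)/);
  if (match) return match[1];
  const id = Math.random().toString(36).slice(2);
  // The cookie lives for a year; set on the ad network's own domain,
  // it becomes a third-party cookie shared across participating sites.
  document.cookie = `vid=${id}; max-age=31536000; path=/`;
  return id;
}

// Report the current page back to the network by requesting a
// 1x1 "pixel" image, the classic low-tech tracking channel.
function firePixel(): void {
  const img = new Image();
  img.src =
    "https://ads.example.com/pixel?vid=" + getVisitorId() +
    "&url=" + encodeURIComponent(location.href) +
    "&ref=" + encodeURIComponent(document.referrer);
}

firePixel(); // runs on every page load that embeds the tag
```

The banners that then seem to chase you from site to site are simply the network matching that visitor ID against the pages on which it has already seen you.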
Ad tech, however, is not the only culprit. Most online users have likely never read the terms of use of any social platform, and are entirely unaware of how much companies know about them and how far information about them has already travelled. People are not really paying attention to their privacy: a massive legal document appears on the screen and customers click “accept” by default, because the legal language is beyond their comprehension. Even people who want to know that boundaries are not being crossed do not know what the boundaries are.
On the bright side, our digital identity can help solve a host of issues around health, travel, governance and much more. Personalised medicine and an improved quality of care become possible with the seamless transfer of data between people, doctors and medical establishments. Governments can ease bureaucratic tangles and delays around public services, improve tax collection, and increase transparency and accountability for public spending. But in the wrong hands, data can create chaos. To some extent, the recent scandals have helped turn a spotlight on what lax laws and unbridled access to data can actually do.
In May 2017, Google announced that it would link in-store credit-card transactions to its users’ online profiles, gathered through YouTube, Gmail, Google Maps and more. The announcement made its advertisers very happy: it would allow Google to show hard evidence that its online ads lead to in-store purchases. While Google was the first company to make this formal link, many technology companies have been rushing to create comprehensive user profiles so that their free services can be monetised. Google’s new program became the subject of a Federal Trade Commission complaint filed by the Electronic Privacy Information Center in late July 2017.
In January 2018, the hugely popular fitness app Strava came under scrutiny. The app had published a heat-map visualisation showing where its users were jogging, walking or travelling, covering some three trillion GPS data points. Security analysts showed that, because many American military service members were Strava users, the map had inadvertently revealed the locations of military bases and the movements of their personnel. A seemingly innocuous service had opened up a national-security challenge, leading to a public outcry, with many leading voices raising concerns about privacy and security. Clearly, in today’s world, data and privacy are public goods like water and air. Left unfettered, however, data can be weaponised, leaving millions vulnerable.
Data can be used in many unforeseen ways. The data-analytics firm Cambridge Analytica allegedly used personal information from more than fifty million Facebook profiles to target US voters with personalised political advertisements based on their psychological profiles. Employees of the company were also filmed boasting of using manufactured sex scandals, fake news and dirty tricks to swing elections around the world. Cambridge Analytica has since also been accused of influencing the Brexit vote and Indian elections.
This alarming revelation implies that our digital identity and footprint are a source of power for companies and people we know little or nothing about, and with whom we may never have interacted. Machine-learning algorithms, fed data from social-media streams combined with other sources, reveal more than our favourite websites, preferred products and reading preferences. They can also disclose sexual orientation, political affiliation, use of addictive substances and many other things we would rather not reveal publicly. The difference between the factories that cause environmental pollution and the digital world is that environmental damage is there for everyone to see and feel; the actions of the digital world, on the other hand, remain largely hidden. What this really means for ordinary people is that we can be easily targeted, identified and therefore manipulated. AI, machine learning and deep learning are increasingly being harnessed for competitive advantage by companies, countries and politicians.
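To illustrate how little machinery such inference needs, here is a toy sketch with invented features and weights: a linear classifier scoring an undisclosed trait from innocuous behavioural signals. Real systems are trained on millions of profiles and far richer features, but the principle is the same.

```typescript
// Illustrative only: the features, weights and trait here are all
// invented, to show the shape of such an inference, not a real model.

// Each behavioural signal carries a learned weight; note that none
// of the signals states the trait directly.
const weights: Record<string, number> = {
  follows_party_a_pages: 1.4,
  shares_political_memes: 0.9,
  likes_hunting_groups: -1.1,
  active_after_midnight: 0.3,
};
const bias = -0.6;

// The logistic function squashes the weighted sum into a probability.
function predictTrait(profile: Record<string, number>): number {
  let z = bias;
  for (const [feature, w] of Object.entries(weights)) {
    z += w * (profile[feature] ?? 0);
  }
  return 1 / (1 + Math.exp(-z));
}

// A user who never disclosed any affiliation still gets a confident score.
const user = { follows_party_a_pages: 1, shares_political_memes: 1 };
console.log(predictTrait(user).toFixed(2)); // "0.85"
```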
Recent events have shown that the promise of a technology-driven world may not be all goodness and bright, happy, shining people, but something far darker and more disturbing.
In the early days, which were actually just about a decade ago, data collection was largely the domain of websites and mobile apps. Today, every sector is seeing the benefit of capturing data and using it for greater insight and increased sales. Data personalisation is becoming central to the customer experience. Apple’s virtual assistant Siri learns from increased usage and can fix appointments, answer emails and highlight important tasks. Google’s Nest thermostat adjusts heating and cooling as it learns homeowners’ habits. This is not just because new technology is available, but because customers are changing the way they interact with brands. Today’s customers want experiences that are personally relevant. The one-size-fits-all approach, whether in advertising or in products, is giving way. The rich landscape of digital experiences today is far beyond what we could ever have imagined.
Technology is enabling smart devices to learn and personalise, and this is only likely to grow with the proliferation of artificial intelligence, connected devices and high-speed, readily available internet access. International Data Corporation, or IDC, predicts that by 2019, 40 percent of digital-transformation initiatives will be supported by cognitive computing or AI. Spotify, a music-streaming service, uses AI to customise the listening experience to user tastes. The entire business is based on data and AI, which tracks trends, predicts customer preferences and creates custom playlists for users. Fashion and sports-goods companies are using AI to make recommendations to customers not at the checkout counter but while they are still browsing; these predictions are based on body type, past purchases, weather, browsing history and more. There is also news that KFC is partnering with the Chinese search engine Baidu to create a restaurant that uses facial-recognition software to infer what a customer might want to order. The program collects data such as gender, facial expressions and other visual features to make menu recommendations, and it saves previous orders, so returning customers get recommendations based on what they have ordered before.
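The core of such a recommender can be sketched in a few lines. The toy version below uses user-based collaborative filtering over invented play counts; Spotify’s actual systems are vastly more sophisticated, but the underlying idea of matching you to listeners with similar histories is the same.

```typescript
// Toy collaborative filtering: recommend tracks from the listener
// whose history is most similar to yours. All data is invented.

type History = Record<string, number>; // track -> play count

const users: Record<string, History> = {
  asha: { trackA: 10, trackB: 4, trackC: 0 },
  ben:  { trackA: 8,  trackB: 5, trackD: 7 },
  chen: { trackC: 9,  trackD: 2 },
};

// Cosine similarity between two play-count vectors.
function similarity(a: History, b: History): number {
  const tracks = new Set([...Object.keys(a), ...Object.keys(b)]);
  let dot = 0, na = 0, nb = 0;
  for (const t of tracks) {
    const x = a[t] ?? 0, y = b[t] ?? 0;
    dot += x * y; na += x * x; nb += y * y;
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Suggest tracks the most similar listener plays that `me` has not heard.
function recommend(me: string): string[] {
  const nearest = Object.keys(users)
    .filter((u) => u !== me)
    .sort((u, v) =>
      similarity(users[v], users[me]) - similarity(users[u], users[me]))[0];
  return Object.keys(users[nearest]).filter(
    (t) => users[nearest][t] > 0 && !(t in users[me]));
}

console.log(recommend("asha")); // ["trackD"]: ben is asha's nearest neighbour
```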
Technology is constantly moving ahead and has expanded outside the home, into schools and healthcare spaces. Healthcare is considered one of the biggest beneficiaries of the technology revolution. Artificial-intelligence-based systems are being used to predict, prevent and treat diseases. Monitoring devices, such as the Apple Watch, are collecting information about heart rates and activity levels. Until now, doctors treated each patient separately, and collective knowledge about disease and treatment was difficult to collate. Now there is a real possibility of pooling patient data to find and track disease, though one fraught with privacy and ethical issues if unfettered access is provided. That is not all: artificial-intelligence-based systems can also make mistakes, and are, at times, singularly lacking in empathy in care situations. For example, in a 2015 clinical trial, an AI app was used to predict which patients were likely to develop complications following pneumonia, and therefore should be hospitalised. The app erroneously instructed doctors to send home patients with asthma, because it could not take contextual information into account. There are other threats too. Connected medical devices, such as pacemakers, have been shown to be vulnerable to hacking. A pacemaker is a small battery-operated device surgically implanted into a patient’s chest. In 2017, the American Food and Drug Administration, or FDA, recalled half a million pacemakers over hacking fears: the security flaw could have enabled hackers to run down the battery or tamper with the patient’s heartbeat.
Banks have been using data-tracking services to manage customer fraud for several years now. The rise of online and mobile banking has enabled banks and insurance companies to track customer usage patterns and identify at-risk customers. Insurance companies have started collaborating with health-tracking apps and devices to surreptitiously increase premiums. Not far behind are banks, which have started building profiles of potential defaulters. Through sophisticated apps, banks can now monitor your spending pattern: whom and what you spend on. Bank transaction data, credit behaviour and location data are readily available; these are then matched with advice and recommendations. The new set of fintech companies, which consist of payment wallets, online lenders and more, rest their entire business models on knowing customers’ digital identities in great detail and customising services to their needs. Both old- and new-world financial companies and banks rely on security protocols to hold on to the hoards of data they collect. However, nothing is foolproof. In the middle of 2017, Equifax, one of the world’s largest credit agencies, was hacked via a security flaw, and the hackers stole the data of 143 million American citizens: addresses, names, social-security numbers and credit profiles. But Equifax is not a bank; it is one of the many service providers used by banks and finance companies. Your data, therefore, could be anywhere, with anyone.
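The pattern-based monitoring described here can be as simple as a statistical outlier test. Below is a minimal sketch with invented transactions and an arbitrary three-standard-deviation threshold; production fraud systems combine hundreds of such signals.

```typescript
// Toy fraud check: flag a charge that deviates sharply from the
// account's historical spending. Data and threshold are invented.

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

// A transaction is suspicious if it lies more than three standard
// deviations above the account's mean spend.
function isSuspicious(history: number[], amount: number): boolean {
  const m = mean(history);
  const s = stdDev(history);
  return s > 0 && (amount - m) / s > 3;
}

const history = [420, 380, 510, 450, 390, 470]; // typical spends
console.log(isSuspicious(history, 480));  // false: within the pattern
console.log(isSuspicious(history, 5200)); // true: flagged for review
```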
Smart homes promise automated lighting, better water management and connected appliances: lights that switch on and off by themselves, a coffee machine that starts brewing as soon as you wake up, room temperature that adjusts automatically. Significant time and resources could be saved if everyone lived in a smart home. But at what cost? Smart homes collect and analyse a substantial amount of data. The systems learn from your usage patterns and adjust accordingly, and this data then resides on servers. Moreover, different companies sell different gadgets—the smart refrigerator and the smart light could come from different companies and capture data on different systems. The traditional home, in this context, sounds far more secure than a smart one, where your personal data and that of your family members could be open to scrutiny, hacks and misuse. Take the very real possibility of data being sold to insurance companies, which could increase premiums based on what you eat and how much exercise you get.
For most people, smart homes are a bit futuristic, and some might even say these are first-world problems. Let us then take a closer look at something closer home—the education ecosystem in India. Almost everyone would agree that India’s transformation into a growing economy has come about due to improved education levels, made possible by public-policy choices and parental awareness that schooling will ensure a better future. This push towards better education spawned online testing, tuitions, hardware for digitally equipping classrooms, and companies that provided these services. To this mix were added school-management software, curriculum automation and teacher-training courses. Many of the companies in this mix were, and are, start-ups; while some have succeeded, several have shut shop or been sold. The big question, though, which no one is asking, is: where has all the data gone? Data about all the children who attended these smart classes, tuitions and online tests. When children enrol in schools, they give up all their social information; now, because of technology, someone also knows what grades they achieved in every single class, what their strengths and weaknesses are, and what their hobbies and interests are.
One can argue that the influx of technology into education was the need of the hour, and that it did the nation a huge service by scaling up learning in ways that were previously unimaginable. That may be true, but now that we are aware of the perils of data capture and misuse, shouldn’t we be asking tough questions about our own and our children’s privacy? For many parents, though, these questions still seem far away.
Marketers, however, know an easy target when they see one. Children’s toys are seeing a new kind of technology revolution, with toys constantly connected to the internet and smart toys that capture information, learn and personalise content to a child’s needs. In 2015, Hello Barbie started the smart-toy revolution by allowing voice interaction between the child and the doll. With its microphone, voice-recognition software and artificial intelligence, the doll was considered a serious threat to child safety and privacy. The backlash and hacking concerns loomed so large that Hello Barbie got its own Twitter hashtag, #HellNoBarbie. Baby products, such as baby monitors and toys, have been singled out over the past few years as highly vulnerable. Baby monitors now have two-way audio, WiFi, motion sensors, integrated video (some with night vision), smartphone apps and “ecosystems”; some even offer cloud storage, temperature monitoring and built-in lullabies. There are also watches that act as tracking devices, allowing parents to know where their children are. While technology in itself is only an enabler, the questions of who owns the data, where it is going and what is being captured remain. In countries such as India, with lax laws around privacy and poor customer awareness, we seem to be sitting on a time bomb. This is likely to have significant consequences for the future of our children.
Increased connectivity increases not just the quantum of data but also the risk. Since everything we own is going digital, we run the risk of letting hackers into our homes through something as innocuous as a doll, a watch or even a lighting device. Apart from this, devices transmit location data, so in effect we tell people where we work, stay and shop, and pretty much everything else we do. Mobile cell towers can easily track signals to determine your location. In many cases this has been used to save lives during natural calamities, find missing people and solve crimes. Location data, however, is not stored only on the servers of mobile-phone companies. Many apps you download switch on location tracking on your phone whether or not they need it to provide the service you signed up for. Uber certainly knows where you go, but it is a real possibility that your bank, your newspaper, your online grocery store and your insurance company do too.
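How a position falls out of tower signals can be shown with a toy trilateration: given a phone’s distances to three towers at known positions (estimated in practice from signal timing and strength), its coordinates follow from elementary algebra. The sketch below assumes flat geometry and exact measurements, which real networks never have.

```typescript
// Toy trilateration on a flat plane with exact distances; real
// cell location works with noisy measurements on a curved Earth.

type Point = { x: number; y: number };

// Subtracting the three circle equations pairwise leaves a linear
// system in x and y, solved here with Cramer's rule.
function trilaterate(p1: Point, r1: number,
                     p2: Point, r2: number,
                     p3: Point, r3: number): Point {
  const a1 = 2 * (p2.x - p1.x), b1 = 2 * (p2.y - p1.y);
  const c1 = r1 ** 2 - r2 ** 2 - p1.x ** 2 + p2.x ** 2 - p1.y ** 2 + p2.y ** 2;
  const a2 = 2 * (p3.x - p1.x), b2 = 2 * (p3.y - p1.y);
  const c2 = r1 ** 2 - r3 ** 2 - p1.x ** 2 + p3.x ** 2 - p1.y ** 2 + p3.y ** 2;
  const det = a1 * b2 - a2 * b1;
  return { x: (c1 * b2 - c2 * b1) / det, y: (a1 * c2 - a2 * c1) / det };
}

// A phone actually at (3, 4), seen from three towers.
const t1 = { x: 0, y: 0 }, t2 = { x: 10, y: 0 }, t3 = { x: 0, y: 10 };
console.log(trilaterate(t1, 5, t2, Math.sqrt(65), t3, Math.sqrt(45)));
// { x: 3, y: 4 }
```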
This highlights questions around what you signed up for and what you did not: the ethics of what is reasonable and what is not. Not many of us read the terms of use and privacy policies, and companies use that to their advantage. Technology companies also deny access to a service if you do not sign up for the entirety of the data-usage terms. This leaves users with little or no option: if they want to use the service, they have to agree to allow access to personal data. There is a desperate need to restore balance between technology companies, consumers and the data that drives the transactions between them.
The internet is essential to the world we have now created. On it rest economic growth, our ability to trade, our ability to learn and our ability to interact. For companies to establish responsibility in the digital age, a new set of norms is needed, and the movement towards this has already begun. India has formed the Srikrishna committee to draft new privacy laws, define the dos and don’ts for companies, and modernise India’s data-privacy standards. The Indian Supreme Court held last year that the right to privacy is a fundamental right and an integral part of the right to life and liberty. The United Kingdom is in the process of passing its own version of the European Union’s General Data Protection Regulation, or GDPR, strengthening the idea that people own their personal data.
The GDPR gives consumers control over their data, allowing them to port it across service providers and to know what it is being used for. Consent will need to be freely given—specific, informed and unambiguous—and businesses will need to be able to prove they have it if they rely on it for processing data. A pre-ticked box will not count as valid consent. For businesses, the GDPR introduces new obligations around security and privacy, including a duty to inform customers about data breaches.
In the US too, concerns over privacy are mounting. Other countries are likely to follow suit, as the regulation is already changing business as usual.
This is an extract from Balance: Responsible Business for the Digital Age, by Namrata Rana and Utkarsh Majmudar, published by Westland. It has been edited and condensed.