What is artificial intelligence (AI)?

Artificial intelligence refers to the simulation of human intelligence, i.e., learning, reasoning, and self-correction, by machines. While AI now dominates discussions on tech, it is not a new concept. First envisioned by scientists in the 1950s, it has since developed into a tool sophisticated enough to perform a wide range of tasks, reducing the need for human labour and intervention. AI is technology that allows computers to comb through and process large datasets using algorithms, and to solve problems faster and at a larger scale than human capability permits. 

The most visible form of AI today is arguably the chatbot – think Google Assistant, Siri, or ChatGPT. However, AI capabilities have been integrated into almost all the technology we use in our daily lives, such as GPS navigation (rerouting based on traffic and other conditions), ride-sharing apps (Uber, Ola), social media (targeted ads, facial recognition, selfie filters, etc.), and email (smart compose, tabbed inboxes, itinerary summaries, etc.). 

What are the risks?

One of the greatest benefits of AI is its ability to process large amounts of information and provide quick, tailored responses to specific prompts. But this is a double-edged sword – these same capabilities allow AI to exacerbate the existing risks of internet use while also creating new ones. 

For instance, AI responses to prompts can take the form of text, images, audio, and other media formats. This means that AI can create images and audio that look and sound like the real thing, based on prompts entered by the user. These features can also be used to create deepfakes, i.e., a real person's face can be superimposed on another person's body, and audio can be manipulated to generate realistic human voices. 

Some of the potential risks that AI poses include:

  • Misinformation and disinformation – While AI processes all information available to it, in its current form it cannot determine which of that information is true and which is false. The responses it provides to questions or prompts may therefore be based on incorrect or manipulated information, and so be incorrect themselves. As a result, AI can magnify false information already circulating on the internet. 

More worryingly, it is actively being used in disinformation campaigns. For instance, in the US state of New Hampshire, voters received phone calls in the voice of US President Joe Biden urging them not to vote in the primary election. Similarly, in the UK, deepfake audio of London Mayor Sadiq Khan making inflammatory remarks nearly had serious consequences. 

Such disinformation has the ability to affect stock prices, election outcomes, and even result in violence.

  • Fraud – By cloning voices using AI, scammers call individuals claiming to be a loved one in trouble, a key person at a financial institution, or an officer at a government body, and ask for money or payments due. Sometimes these calls are used to obtain key personal information that could be used to gain access to bank accounts. AI is also used to send official-looking emails or texts demanding personal information or login credentials, or containing links to malware that harvests such information. Chatbots are also used to befriend lonely individuals seeking platonic or romantic relationships online, and to trick them into parting with their money. 
  • Pornography – While AI’s media-generation features are being used to create art and enhance digital content, they also pose serious risks. For instance, the risk of AI-generated pornographic images has increased manifold. Recently, fake, sexually explicit AI-generated images of Taylor Swift flooded social media. The non-consensual nature of such images is a grave violation of the privacy of the targeted individuals; it can have serious repercussions and may even be used for blackmail.

Lack of regulation

Until recently, with the exception of Tamil Nadu’s AI Procurement Policy, India had not taken any steps to regulate AI or to mitigate its risks. Despite the government’s use of AI’s capabilities to improve exam preparation, agricultural productivity, information dissemination, etc., there was no system in place to prevent rights violations, facilitate grievance redressal, or affix criminal liability for the misuse of AI. 

However, in December 2023, the Union Ministry of Electronics and Information Technology (MeitY) issued an advisory to social media platforms, directing them to follow the existing IT Rules to deal with deepfakes. Further, on 1st March 2024, it issued a second advisory, requiring platforms in India that are testing or training an AI tool to seek permission from the government before launching the product. It also mandates that platforms ensure their resources do not permit bias, discrimination, or threats to the integrity of the electoral process through the use of AI technology and algorithms. The Minister of State for Skill Development and Entrepreneurship later clarified that this requirement applies only to “significant platforms” and not to start-ups. The advisory also requires that disclaimers and consent-based disclosures about the fallibility or unreliability of generated outputs be included before public access is permitted. 

At present, this is the extent of regulation that exists. The government has expressed its intention to establish an AI regulation framework later this year.

The European Union’s recently enacted AI Act is a significant step towards ensuring the trustworthiness of AI. Its significant provisions include the classification of AI systems by risk, a ban on systems that threaten people’s safety, livelihoods, and rights, mandatory labelling of AI-generated content classified as risky, and rules flexible enough to adapt to technological change.

Ensuring online safety

Children’s online safety involves three things: the appropriateness of the content they access, whom they interact with online, and how they conduct themselves while on the internet. So, what can you do to ensure your safety and that of your child while using the internet? Short answer – become AI smart. 

  1. Verify the source of messages or emails before providing personal information or clicking on unknown links. 
  2. Verify the origin of calls from unknown numbers – reach out to the official numbers of government offices or financial institutions.
  3. Be sure not to provide sensitive data to AI financial tools. 
  4. Establish a safe word with your loved ones, especially children, so that they can ascertain it is you on the phone and not voice-cloning software.
  5. Be wary of who has access to your personal information, images, and other media.
  6. Double-check information or advice provided by chatbots. 
  7. Check permissions and privacy policies when downloading an app or accepting terms and conditions on websites, and check for policy updates regularly. AI is always collecting data to improve your experience, but this data is also collected to serve companies’ marketing and development strategies. Know why your data is being collected and what your information is used for. 
  8. Use strong, unique passwords for your online accounts. 
  9. Update your software and applications regularly to have access to any new security improvements. 
  10. Be wary of online video games, where other users may use AI-powered chat, cameras, hacking, and voice-masking to breach existing protections on your devices.
  11. Be aware of the security challenges associated with AI-enabled toys, as with all other AI-aided gadgets. Toy companies now offer devices capable of interactive communication and of learning from experience. For example, the CogniToys Dino, powered by IBM’s Watson, can hold intelligent conversations with children and tell them stories. Because these devices connect to the internet, they can be used to access data and other devices on the same network.
  12. Make children custodians of their own data and online security. AI can also be used to your benefit here: AI-powered online-safety apps like FamilyKeeper monitor without snooping and alert parents when their children face issues involving bullying, explicit content, and mental-health risks, among others. Parental-control apps like Google Family Link are another option, allowing parents to exercise more oversight. 
  13. Since AI is frequently built by tech companies with profit objectives, be wary of what the tech will prioritize over well-being. 

To sum it all up – be proactive. Skills once thought to be possessed only by humans are now being imitated by AI. With increased dependence on AI-powered technology, parents will also have to keep an eye on their children’s ability to retain agency without becoming over-dependent. It is important to disconnect from gadgets and technology every now and then to bond with nature and interact with family and community. Human-to-nature and human-to-human contact is essential for your mental health and a healthy social life.

AI use will change the world around us. Like any other technological development, there will be negative outcomes alongside some amazing advancements. While we enjoy these benefits, we must also remain informed about the possible dangers and stay safe.

Article by: Gauri Anand

With inputs from Team Bodhini