In Pakistan, artificial intelligence (AI) is progressively shaping industries such as e-commerce and finance, reflecting global trends. Yet, concerns remain about the potential for AI to turn against its creators, a theme popularized in films like Terminator. Although these risks may seem far off, advocates for “AI safety” caution against scenarios where AI might outpace human intelligence and operate with misaligned objectives.

In November 2023, a summit on AI safety convened global leaders to discuss these emerging threats. Critics argue, however, that the emphasis should be on pressing concerns like AI bias, misinformation, and infringements on intellectual property and human rights. These issues already affect various sectors, especially in developing nations like Pakistan, where data and technology regulations are still evolving. The key challenge lies in achieving a balance between fostering innovation and ensuring safety.

AI systems have frequently stumbled in practical applications. For instance, Google’s image-recognition AI once mistakenly categorized black individuals as gorillas, and facial recognition technologies have often misidentified people of color due to biased training sets. Additionally, AI in hiring processes has shown a preference for male candidates, while deepfakes are increasingly being used for malicious purposes, such as fabricating political speeches. In Pakistan, these dangers are amplified by the growth of social media, and artists elsewhere have filed lawsuits highlighting the misuse of their intellectual property by AI systems.

Experts recently stressed the need for AI systems to uphold human rights, embrace diversity, and promote equity. This principle necessitates a comprehensive evaluation of how AI technologies are designed and implemented, ensuring they support equality rather than reinforce existing biases.


“Despite the significant progress made by today’s large language models (LLMs) in emulating human-like intelligence, these systems exhibit considerable flaws. Key challenges such as hallucinations, lack of grounding in reality, unreliable reasoning, and opacity stem from the fundamental structures and training methods of these models. These issues are not mere technical errors; they reflect inherent limitations that raise important questions about the safety, reliability, and true intelligence of AI systems,” remarks Jawad Raza, recognized among the Corinium Global Top 100 Innovators in Data & Analytics.

He further notes that the demand for ethical AI deployment is supported by various organizations, including Unesco, which emphasizes the necessity for transparency and explainability in AI systems to protect human rights and fundamental freedoms. The organization advocates for stringent oversight and impact evaluations to align AI practices with human rights standards. Additionally, the UN High Commissioner for Human Rights has called for regulations that prioritize human rights in the development of AI technologies.

This involves evaluating the potential risks and impacts of AI systems throughout their lifecycle, ensuring that technologies that do not adhere to international human rights laws are either suspended or banned until sufficient safeguards are in place. “As AI continues to progress, it’s vital for stakeholders to maintain ongoing dialogues about the ethical dimensions of these technologies, ensuring they are developed with a focus on fairness and inclusivity,” he asserts.

Where does Pakistan stand?

Pakistan is in the nascent stages of creating comprehensive regulations and ethical frameworks for artificial intelligence. As in other countries, however, there is growing recognition of the importance of AI governance. Muhammad Aamir, Senior Director of Engineering at 10Pearls, emphasizes that as the Personal Data Protection Bill advances, regulations must robustly protect individuals’ privacy rights, especially in AI applications.

“Proper data management in accordance with international standards is essential. Additionally, AI developers and users need clear guidelines to ensure algorithmic transparency and accountability. Establishing standards for explainability and audit trails is crucial. Ethical issues around bias and fairness must also be addressed, ensuring AI systems are free from inherent discrimination.

“Examples such as the Gender Shades project reveal alarming error rates of up to 34.7% for darker-skinned women in facial recognition systems, compared to only 0.8% for lighter-skinned men. It is critical to establish sector-specific regulations for healthcare, law enforcement, and surveillance to ensure responsible AI use in these vital areas.

“In education, equitable access to AI learning resources must be prioritized, along with addressing the technology’s implications for the job market. Ethical guidelines and transparent practices for AI research and public sector adoption will help build public trust.”
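The disparity Aamir cites from the Gender Shades project can be made concrete with a simple disaggregated audit: measuring the error rate separately for each demographic group rather than in aggregate. The sketch below is a minimal illustration with hypothetical data and function names (it is not the Gender Shades code), showing how such per-group rates might be computed:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    records: iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its error rate.
    """
    totals = defaultdict(int)   # predictions seen per group
    errors = defaultdict(int)   # wrong predictions per group
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit records: (group, true gender, predicted gender)
sample = [
    ("darker_skinned_female", "F", "M"),  # misclassified
    ("darker_skinned_female", "F", "F"),
    ("lighter_skinned_male",  "M", "M"),
    ("lighter_skinned_male",  "M", "M"),
]
rates = error_rates_by_group(sample)
```

An aggregate accuracy figure for this toy sample would hide the fact that one group bears all of the errors; reporting the rates per group, as Gender Shades did, is what exposes the gap.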

He also stresses the importance of special provisions for women and individuals with disabilities to guarantee inclusivity in AI education and access to resources. Overseeing these initiatives, the AI Regulatory Directorate, part of the National Commission for Personal Data Protection, can ensure adherence to ethical standards across the board.
