
Mayank Baid
Regional Vice President, India & South Asia
Cloudera
In today’s hypercompetitive landscape, organizations everywhere are betting big on artificial intelligence (AI) to gain a transformative edge. Even as innovation accelerates, companies recognize the crucial role of ethics and regulation in AI development, with 88% of C-level executives reporting that their organizations are taking measures to communicate the ethical use of AI to their workforces.
Why are ethics and regulation so important in the race to bring AI innovations to market?
New AI innovations introduce new ethical concerns
Advances in AI mean that we have moved from building systems that make decisions based on human-defined rules to automated rule definition, content creation, and decision-making by complex models trained on huge data sets. An unconstrained AI system will single-mindedly optimize for its defined objectives, often without regard for broader societal impacts or ethical considerations, eroding public trust.
Despite advancements, AI today continues to encounter issues including bias and hallucinations, which have resulted in some controversial outcomes. For instance, a 2025 report by MeitY highlights cases of biased hiring algorithms and facial recognition errors in India, emphasizing the need for diverse and transparent training data. Addressing these issues is crucial to ensure AI systems are fair, reliable, and beneficial for all users.
Similar controversies have emerged worldwide, from unfair loan disbursements caused by gender discrimination to the use of privacy-breaching facial recognition technologies to process insurance claims.
Many of these events can largely be attributed to issues with explainability. AI systems, especially deep learning models, learn in ways that do not follow the straightforward rules humans use. These models are often seen as a “black box” because of the enormously complex layers of calculations they use to arrive at decisions, and even experts find it challenging to understand how they reach their conclusions. Without appropriate human supervision and understanding, biased decisions can spiral into negative outcomes like the ones above.
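To make explainability concrete, consider the minimal sketch below. It assumes scikit-learn is available, and the loan-style feature names are hypothetical: permutation importance probes a black-box model by shuffling one feature at a time and measuring how much accuracy drops. If a sensitive attribute such as gender ranks near the top, that is a signal the model may be encoding bias worth investigating.

```python
# A minimal explainability sketch (assumes scikit-learn; feature names
# are hypothetical). Permutation importance shuffles each feature and
# measures the accuracy drop: big drops mark features the model leans on.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["income", "tenure", "age", "balance", "gender", "region"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>8}: {score:.3f}")
```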
Keeping the focus on ethics has never been more important, especially as new generative AI innovations, like Phenomenal AI’s text-to-video platform developed in India, promise to accelerate productivity in the workplace and enable organizations to sharpen their competitive edge. Despite their great potential, these generative tools can introduce issues like copyright infringement and, worse still, open the door to misuse and misinformation.
The public and private sectors must work together to embed ethics and regulation into AI
While many generative AI tools, like ChatGPT, have rules in place to prevent abuse, many users have found ways to bypass these safeguards. Cybercriminals have even created their own generative pre-trained transformers (GPTs) to code malware and craft highly convincing phishing emails at scale.
There are currently few tools and laws that can effectively detect and deter the production of such harmful outputs. The public and private sectors therefore need to collaborate more closely to regulate AI, reduce the risks of misuse, and ensure that models are created with ethics in mind.
Ethical AI involves integrating core ethical principles, such as accountability, transparency, explainability, and good governance, into AI models. Improving explainability and strengthening ethics in models can help organizations address AI’s shortcomings today, and it can also greatly improve the accuracy and effectiveness of decision-making.
Many public and private sector entities are working together to advance ethical AI. In India, for example, the government is taking an increasingly active role in shaping responsible AI development. A publicly funded AI compute network with a total capacity of over 10,000 GPUs is being established to support innovation across startups, research institutions, and enterprises, complemented by AI Kosha, a public repository of datasets and AI models. This compute network is designed to accelerate the safe and scalable adoption of AI and foster a robust ecosystem rooted in trust and responsibility. As regulations and initiatives continue to roll out, organizations can play their part in advancing ethical AI by ensuring the data they use is trusted.
Designing ethical enterprise AI systems requires trusted data
Building AI systems that people trust requires organizations to have trusted information sources. With accurate, consistent, clean, bias-free, and reliable data as the foundation, an ethically designed enterprise AI system can be relied on to consistently produce fair and unbiased results. With that foundation in place, organizations can more easily identify issues, close gaps in logic, refine outputs, and assess whether their innovations comply with regulations.
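Before any model is trained, a first-pass audit of that data can be automated. The sketch below is illustrative only, assuming pandas and a hypothetical toy table with approved and gender columns; it surfaces duplicates, missing values, and imbalances that would otherwise propagate into the model.

```python
# A minimal data-quality audit sketch (assumes pandas; the table and
# column names are hypothetical stand-ins for a real training set).
import pandas as pd

def audit(df: pd.DataFrame, label: str, sensitive: str) -> None:
    # Duplicates and missing values undermine consistency and accuracy.
    print("rows:", len(df), "| duplicate rows:", df.duplicated().sum())
    print("missing values per column:\n", df.isna().sum())
    # Heavy skew in the label, or across a sensitive attribute, is an
    # early warning sign of biased training data.
    print("label balance:\n", df[label].value_counts(normalize=True))
    print("group mix:\n", df[sensitive].value_counts(normalize=True))

df = pd.DataFrame({
    "income":   [52_000, 48_000, None, 75_000, 75_000],
    "gender":   ["F", "M", "F", "M", "M"],
    "approved": [1, 0, 0, 1, 1],
})
audit(df, label="approved", sensitive="gender")
```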
Here are some tips for organizations looking to develop better ethical AI systems:
- Focus on intent: An AI system trained on data has no context outside of that data. It has no moral compass and no frame of reference for what is fair unless we define one. Designers therefore need to explicitly and carefully construct a representation of the intent motivating the system’s design. This involves identifying, quantifying, and measuring ethical considerations while balancing them with performance objectives (see the first sketch after this list).
- Consider model design: Well-designed AI systems are created with bias, causality, and uncertainty in mind. Organizations should remember that, apart from data, model designs can also be a source of bias. They should regularly audit models for drift, which occurs when a model becomes inaccurate over time as live data departs from the data it was trained on (see the second sketch after this list). Businesses should also extensively model the cause and effect of their systems to understand whether changes will result in negative consequences down the line.
- Ensure human oversight: AI systems can reliably make good decisions when trained on high-quality data, but they lack emotional intelligence and cannot handle exceptional circumstances. The most effective systems are those that intelligently bring together human judgment and AI. Organizations must always ensure human oversight, especially where AI models produce outputs with low confidence (see the third sketch after this list).
- Enforce security and compliance: Developing ethical AI systems centered on security and compliance will strengthen trust in the system and facilitate adoption across the enterprise, while ensuring adherence to local and regional regulations.
- Harness modern data platforms: Leveraging advanced tools, like data platforms that support modern data architectures, can greatly boost organizations’ ability to manage and analyze data across the entire data and AI model lifecycle. Ideally, the platform should have built-in security and governance controls that allow organizations to maintain transparency and control over AI-driven decisions – even as they deploy data analytics and AI at scale.
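On the first tip, one way to quantify an ethical consideration is demographic parity: the gap in favorable-outcome rates between groups. The sketch below assumes pandas, and the toy data and column names are hypothetical; in practice the metric would be tracked alongside accuracy so the two can be balanced deliberately.

```python
# A minimal fairness-measurement sketch (assumes pandas; data and column
# names are hypothetical). The demographic parity gap is the difference in
# favorable-outcome rates between the best- and worst-treated groups.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group: str, outcome: str) -> float:
    rates = df.groupby(group)[outcome].mean()  # approval rate per group
    return float(rates.max() - rates.min())

df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],
    "approved": [1, 0, 0, 1, 1, 1],
})
gap = demographic_parity_gap(df, group="gender", outcome="approved")
print(f"approval-rate gap between groups: {gap:.2f}")  # 0.67 on this toy data
```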
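On the second tip, a common way to audit for drift, sketched here under the assumption that NumPy and SciPy are available, is to compare a feature’s live distribution against its training-time distribution with a two-sample Kolmogorov-Smirnov test. The synthetic income data and the 0.05 significance threshold are illustrative choices.

```python
# A minimal drift-audit sketch (assumes NumPy and SciPy; the data and the
# 0.05 threshold are illustrative). A significant KS statistic means the
# live feature no longer looks like the training feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_income = rng.normal(50_000, 10_000, 5_000)  # distribution at training time
live_income = rng.normal(58_000, 10_000, 5_000)   # distribution in production

stat, p_value = ks_2samp(train_income, live_income)
if p_value < 0.05:
    print(f"drift detected (KS={stat:.3f}, p={p_value:.1e}); consider retraining")
else:
    print("no significant drift detected")
```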
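And on human oversight, a simple pattern is to act automatically only on high-confidence predictions and escalate the rest to a reviewer. The sketch below assumes scikit-learn, and the 0.8 threshold is an illustrative policy choice rather than a recommendation.

```python
# A minimal human-in-the-loop sketch (assumes scikit-learn; the 0.8
# threshold is an illustrative policy choice). Low-confidence predictions
# are escalated to a human instead of being acted on automatically.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

THRESHOLD = 0.8
for i, proba in enumerate(model.predict_proba(X[:10])):
    confidence = proba.max()
    if confidence >= THRESHOLD:
        print(f"case {i}: auto-decide class {proba.argmax()} ({confidence:.2f})")
    else:
        print(f"case {i}: escalate to human review ({confidence:.2f})")
```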
Attributed to: Mayank Baid, Regional Vice President, India & South Asia, Cloudera