The ethical difficulties in the implementation of artificial intelligence

2 November 2023

The accelerated deployment of generative AI technologies like ChatGPT has raised concerns about potential harm and misuse. There is growing pressure for companies developing these technologies to do so ethically. However, achieving this is not as straightforward as adhering to predefined ethical principles.

Extensive research reveals that ethics in AI is not just about following principles, but about establishing management structures and processes that enable organizations to identify and address threats.

This approach may disappoint those seeking clear and precise guidance, but it provides a more nuanced insight into how companies can pursue ethical AI.

Ethical Challenges at Play

This research focused on professionals responsible for AI ethics at large companies. These experts voiced concerns about privacy, manipulation, bias, and other risks associated with the commercial use of AI. Amazon's resume-screening system, which favored male candidates, underscores these risks.

Furthermore, companies adopt ethical AI primarily for strategic reasons: to maintain the trust of customers, business partners, and employees. They want to avoid scandals like the Facebook and Cambridge Analytica case, which demonstrated the harm that ethically questionable use of advanced analytics can cause.

Towards an Ethical Implementation of AI

The pursuit of “ethical AI” goes beyond general principles. Managers responsible for AI ethics found that high-level principles alone were not enough to guide specific decisions: one attempt to translate human-rights principles into questions developers could act on produced 34 pages of inquiries.

To address these ethical uncertainties, companies turn to organizational structures and procedures. Some methods fall short, but others show promise: hiring an AI ethics officer, establishing internal committees to deliberate on ethical dilemmas, and conducting algorithmic impact assessments.

Ethics as Responsible Decision-Making

The central idea is that companies seeking ethical AI should not expect to find a simple set of principles that offer foolproof answers. Instead, they should focus on making responsible decisions in a world of limited understanding and changing circumstances, even if some decisions turn out to be imperfect.

In the absence of explicit legal requirements, companies should do their best to understand how AI affects people and the environment. Seeking input from various stakeholders and committing to high-level ethical principles is crucial.

This perspective encourages AI ethics professionals to focus their efforts on adopting decision-making structures and processes, rather than just identifying and applying AI principles.

Towards a Future of Ethical AI

While laws and regulations are expected to establish concrete guidelines, responsible decision-making structures and processes are a starting point. Over time, they will contribute to the accumulation of knowledge needed to formulate strong legal standards.

The path to ethical AI is a complex journey that goes beyond the adoption of established principles. It requires a commitment rooted in a deep understanding of the complex ethical landscape surrounding AI technologies. This journey, though challenging, is indispensable for shaping a future where AI serves as a positive force.
