Rules for AI

Many of us, avid readers of science fiction, fondly remember Asimov's Three Laws of Robotics, which first appeared in the short story "Runaround" (1942). For those who have never had the chance to read them, here they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Years later, in the novel "Robots and Empire", the Zeroth Law was introduced:

A robot may not harm humanity or, through inaction, allow humanity to come to harm.

Why recall these laws? Because decalogues of principles to consider for artificial intelligence have begun to appear, and to some extent they are strongly reminiscent of Asimov's laws.

Last June, Microsoft's CEO, Satya Nadella, proposed 10 rules (copied verbatim below), which have since been discussed by experts:

Rule 1 – AI must be built to aid humanity and preserve our autonomy: Concerns for human autonomy will be witnessed in general, as more autonomous machines are being built. To protect human workers, collaborative bots must partake in dangerous activities, such as mining.

Rule 2 – AI must reflect transparency: AI empowers machines to know about us; however, it's equally important that humans understand the intelligent machines as well. In other words, humans must be aware of how the technology works and the associated rules. Moreover, it's essential we have an understanding of how the technology analyzes results and its impact.

Rule 3 – AI must enhance efficiencies without destroying the dignity of people: The technology should preserve cultural commitments to drive diversity. This can only be possible with a broader, deeper, and more diverse engagement of populations in the design of these systems. Moreover, the tech industry shouldn’t dictate the values and virtues of this future.

Rule 4 – AI must be designed to address the need for intelligent privacy: There are various sophisticated protections available in the market around us. They are designed to secure personal and group information, in ways that earn trust.

Rule 5 – AI must reflect algorithmic accountability: Humans can undo unintended harm leveraging algorithmic accountability. These technologies must be designed in a manner that they can account for both expected and unexpected scenarios.

Rule 6 – AI must prevent bias: It’s equally important to ensure proper, and representative research for AI. This helps in preventing the use of wrong heuristics to discriminate.

Rule 7 – Need for empathy: This attribute could be considered critical to approaching AI, and is difficult to replicate in machines. Empathy will occupy a valuable spot in the human–AI world, helping us collaborate and build relationships, besides perceiving others' thoughts and feelings.

Rule 8 – Need for education: Investment for AI education must increase, as it will be instrumental in creating and managing innovations. This will also help us in achieving higher level thinking and more equitable education outcomes. It’s usually a difficult social problem to develop the knowledge and skills needed to implement new technologies.

Rule 9 – Need for creativity: Creativity is one of the most coveted skills humans possess. This trait isn’t expected to change much within the years to come. However, machines will continue to enrich and augment our creativity.

Rule 10 – Focus on Judgment and accountability: Humanity has reached a level where it can gladly accept a computer-generated diagnosis or legal decision. However, we still expect a human to be ultimately accountable for the outcomes.

Whether we agree with these rules or believe they should be qualified or changed, one point matters: as data and algorithms become fundamental, we will need new roles in both public administration and private organizations, and we must take into account the ethical values and rules that will guide our maturity.

Do you already have your own decalogue of rules for AI?

About Josep Curto Díaz

Josep Curto is the academic director of the Master's in Business Intelligence and Big Data (MiB) at the UOC. He is also the director of Delfos Research, a company specializing in research on the Business Intelligence, Business Analytics, and Big Data markets.
