Can AI systems be ethically correct?
Artificial Intelligence (AI) is neither a new concept nor a recent one, yet this branch of Information Technology grows in relevance as it affects our daily lives and society in increasingly complex and controversial ways, since its main objective is the execution of tasks that would be considered intelligent if performed by a human.
At the same time, respect for fundamental human rights, such as privacy and personal data protection, is not a recent issue either. Its importance keeps growing, acquiring legal contours and imposing technical compliance requirements that are increasingly well defined and unavoidable.
The tension between the development of AI systems and ethical values is a constant of our time, and the search for balance is a theme to which we naturally and progressively dedicate more attention, as the perception that it is vital for our survival imposes itself.
As we move through the Fourth Industrial Revolution (Industry 4.0), the pace at which society evolves, together with emerging technological and IT solutions, large-scale data collection (Big Data), namely through devices connected to the Internet (IoT), and the need for those devices to resort to AI to interpret the information they collect and generate responses, creating intelligent networks, is no longer compatible with inertia regarding privacy and information-security requirements, nor with exemption from liability, individual or collective, for our actions and for the information we are, or have become, responsible for.
Indeed, our critical sense and the way we act and react determine our choices, our path, and our evolution as individuals. Allowing ourselves a parallel at this level, we could say that AI makes "choices" based on algorithms, while human intelligence, with all its complexity, allows us to consciously assume our choices, our decisions, and our behaviors, informed by context and by our values.
Being ethically correct is one of those choices and is, without a doubt, in its genesis, a human characteristic.
We often assume, partly as a result of speculation, that AI systems are closer to human reasoning and to the fullness of the human brain than they really are. Proof of this fear is the constant need to delimit their scope of action, and the repercussions of their development, to prevent roles from being inverted. Algorithms effectively allow problem-solving based on reading, language comprehension, and learning, using logical reasoning. But how much must be done before we obtain information that allows the generation of algorithms capable of reaching that level? Will it ever be possible to "replicate" or mechanize the human being himself, the way thoughts and decisions are organically generated?
And if so, aren't we distorting the principles of individuality and respect for human rights and freedoms, and subverting the ethical principles that must precede and accompany any (technological) development process?
On the other hand, could an algorithm someday capture what precedes individual decision-making and all the experiences and variables, environmental or individual, that determine it?
When software developed with AI, for example machine learning, "makes decisions" using an algorithm, is it not treating the human being as just an "artificially intelligent robot," reducing him to a set of information and logical reasoning and underestimating his genesis?
When we define a profile based on inputs and information collected from an individual, as with the personal data we can access and process, are we not making exactly that same assumption about that person or those people?
It would not be entirely unrealistic to consider the hypothesis that AI might resemble "human consciousness" in decision-making. But this similarity presupposes human action and programming and, in the case of machine learning, for example, is limited to the information that has been collected and processed so far and from which results can be inferred.
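This limitation can be made concrete with a minimal sketch in plain Python (a toy illustration, not any specific product or library): a nearest-neighbour "decision maker" whose every answer is necessarily one of the labels it was already given, so it can never produce a judgment outside its collected data.

```python
# Toy 1-nearest-neighbour "decision maker": it can only echo labels
# that appear in the examples it has already seen.

def nearest_label(examples, point):
    """Return the label of the training example closest to `point`.

    `examples` is a list of (feature_vector, label) pairs; the
    hypothetical labels below are for illustration only.
    """
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    closest = min(examples, key=lambda ex: dist(ex[0], point))
    return closest[1]

# Hypothetical "collected and processed" data about individuals:
seen = [((0.0, 0.0), "low risk"), ((1.0, 1.0), "high risk")]

print(nearest_label(seen, (0.1, 0.2)))  # → low risk
print(nearest_label(seen, (5.0, 5.0)))  # → high risk (the only other option)
```

However far the query point lies from anything observed, the answer is always drawn from the labels already present in `seen`; the system infers, but does not create.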
Although there are already controversial examples that lead us to question the ability of AI systems to communicate in a language of their own and to learn to be creative, so far we can consider that recreating the human brain in all its faculties, and in particular the one that allows creativity to develop, remains a complete utopia; each AI system is limited to a set of actions for each purpose, within the scope for which it is created.
Thus, we would have to rethink the very concept of creativity and the act of creating: does it presuppose creating something new without starting from any pre-existing information, or can a creative process learned by repetition be considered a new approach to the definition of creativity?
From either perspective, CREATE implies BEING and FEELING…
Andreas Kaplan and Michael Haenlein define artificial intelligence as “a system's ability to correctly interpret external data, learn from that data, and use that learning to achieve specific goals and tasks through flexible adaptation”.
However, even if it is not possible to predict how science and neuroscience will evolve, and how this "integration" may become possible, we must be aware of the risks that accompany the benefits when we talk about progress and development.
Ideally, we should try to anticipate the harmful consequences that detract from the benefit that should be the primary focus of development.
The ability to collect, process, and interpret all the information coming from our senses, allowing us to have emotions and memories and to BE, is what defines us as individuals and distinguishes us from a "machine or system"; but it is also what makes us permeable to influence and "partially programmable".
The balance between the development of AI systems, their influence on human behavior, and the collection and processing of information about that same behavior to define profiles, feeding this "cycle of influence" through programming and learning by logical reasoning, must be a constant concern when we talk about AI and development.
How can we ensure respect for Human Beings' fundamental rights, such as their privacy, if we assume that Human Beings are at the service of technology and development and not the other way around?
In addition to the basic questions of legality and legitimacy in the processing of personal data, is it possible for us to recognize the limits inherent to a purpose when we talk about the development of AI systems?
Data protection regulations, such as the GDPR, have also emerged to impose these limits, to standardize procedures, to promote the implementation of control measures, and to foster the development of ethical standards, appealing to a sense of personal, corporate, and collective responsibility.

The reflection that is needed: will the current measures, existing mechanisms, and legislative frameworks in force be sufficient so that, in the race for AI-based development, in which we seek to reproduce and improve human behavior and actions, we safeguard respect for individuality, privacy, and human rights and freedoms?

Inevitably, the motivations that drive development, namely AI-based development, reflect how markets condition organizations to disregard ethical principles in favor of financial return. It is urgent to achieve this balance, imposing and adjusting limits and legal requirements, but also promoting and adopting increasingly humanistic, conscious, responsible, and ethically correct visions and postures, so that technology and AI serve as a lever for the growth and evolution of societies and of the individual.