

Is AI a threat to humanity?




Today’s post will be less technical and more philosophical. For several months, the temperature of the discussion on the development and future of artificial intelligence has been breaking new records, and nothing seems to suggest that it will start to fall. We have an AI summer in full swing.

In this post, I want to collect my current thoughts on what we are dealing with and where we are heading. And above all: what can we do to make this development as good as possible, especially when we are the ones driving it?

Present Times

Talking about the future without properly diagnosing the present situation is impossible. I will briefly outline the current situation from the point of view of technological development.

Over the past dozen or so years, we have experienced huge changes: the emergence of smartphones, access to the Internet almost anywhere and at any time, the rise of social networks and, more generally, of virtual spaces where many people spend most of their lives (not only in terms of time spent, but also in exploring the world, shaping their views, and experiencing emotions).

This development gave (and still gives) us great opportunities. On the other hand, it has led to very rapid cultural changes in how we work, study, heal, spend our free time, enter and maintain interpersonal relationships, and much more. It also greatly impacts the body, psyche and spirituality of a person, who often functions very unnaturally (e.g., spends most of the day staring at a screen).

The negative consequences of this development are easy to see: difficulty focusing (and, consequently, a lack of in-depth thinking), deteriorating and shallower relationships, overstimulation, addiction to technology, less independence in dealing with problems, information overload, a sense of chaos and unrest, and many similar phenomena. As a consequence (although it is certainly not the only cause), we have a wave of depression, loneliness, and helplessness.

Therefore, it seems reasonable to ask whether the development of AI in this context can bring about any significant change. If so, will that change be positive or negative, and what does it depend on? And a question that is particularly important for the people who create these algorithms: how should we approach this so that the change is positive?

AI Development Challenges

In my opinion, the decades-long development of the algorithms generally called artificial intelligence poses the following major challenges.

An even greater loss of the sense of human uniqueness

From a scientific point of view, there is no reason to believe that AI algorithms will ever be able to surpass or match human capabilities. Although we know more and more about the functioning of our nervous system, we are still far from understanding such human features as the almost unlimited potential of the mind and its adaptation to various conditions, creativity, abstract thinking, willpower, striving for long-term goals, anticipating consequences, the desire for one's own development and for getting to know the world, and finally the common sense of what is good and what is bad shared by most people. Current attempts to explain these phenomena, let alone translate them into algorithms, are shallow.

Machines surpass humans the way a calculator surpasses us at computation: they are able to process much more information, much faster. This brings great opportunities, and it is why models such as GPT-4 can operate in a way that is, at first glance, indistinguishable from a human.

But therein lies the first challenge. The very name "artificial intelligence", together with the (pseudo-)marketing done by movies, the media and the unwise statements of various (again, pseudo-)visionaries, creates the impression that we are actually able to build an artificial brain, or robots superior to us in intelligence. This point of view is also taken by the authors of the open letter calling for a six-month pause in research on artificial intelligence.

With the development and spread of these algorithms, people will increasingly lose the sense of their own uniqueness in this world. Hence, the feeling of helplessness, of being unnecessary and replaceable, will deepen.

Even greater information chaos and difficulties in finding the truth

Tools such as ChatGPT make it possible to generate long, seemingly high-quality and credible texts within a few seconds. Whereas spreading disinformation used to require a lot of time, it now becomes a trivial task that can be carried out on an incomparably larger scale.

Moreover, these tools reflect views that depend on how they were trained, so they can shape public opinion all the more effectively.

Greater influence of large corporations and the state

Developing large AI models requires significant investments that only major corporations and governments can afford. This creates a situation where these entities can, in effect, control access to advanced AI algorithms, use them to increase their advantage and collect and analyse huge amounts of user data. This, in turn, leads to a greater concentration of power and increases the risk of abuse or excessive control by these actors.

States can also use artificial intelligence to monitor and control citizens. The use of technologies such as facial recognition or advanced analytics of personal data can lead to violations of individual privacy and freedom, which is already happening, for example, in China. There is also a risk that governments will use AI algorithms to manipulate public opinion and control information to maintain their power.

The risk of becoming even more dependent on technology

The development of artificial intelligence may lead to an even greater dependence of society on technology. The use of AI in fields such as medicine, transport, education or management may make us increasingly dependent on autonomous systems. If no conscious effort is made to maintain the skills that have been essential until now, serious problems can arise when, for example, these algorithms fail or become unavailable.

Difficulty in predicting cultural changes

The large-scale introduction of artificial intelligence can trigger cultural changes that are difficult to predict. Interacting with intelligent AI agents such as chatbots or virtual assistants can influence our communication and social habits. This can lead to changes in our social relationships, as well as the evolution of our culture and the way we view human-machine relationships.

Of course, there are many more potential challenges (e.g. replacing certain professions with algorithms), but these five categories are the most crucial at the moment.

Criteria for AI Development

To meet the challenges listed in the previous section, criteria are needed for the development of AI.

The development of AI (like any other technology) makes sense only if it is focused on human good. This may be a trivial statement, and probably everyone will agree with it. However, the key question is what we mean by the good of man. Typically, it is described in terms such as:

  • health improvement,
  • positive impact on work (greater convenience, ease of finding a job, interesting work adapted to talents, etc.),
  • overall improvement in well-being and quality of life (increased security, poverty reduction, etc.),
  • improving education and science,
  • a positive impact on the natural environment.

People with democratic-liberal views usually add factors such as:

  • greater equality (gender, race, etc.),
  • greater democratisation of the world.

It’s worth looking at the potential good of AI from the perspective of the individual; when discussing humanity in general, it is easy to lose sight of this perspective. It seems to me that AI will only serve the good of man when it meets the following conditions:

  • it enables (or at least does not hinder) a person’s full spiritual, mental and physical development,
  • as a consequence, it increases a person’s capabilities (he is simply wiser and more effective in life: he knows what is good for him and what gives him happiness, he can set goals better and achieve them more effectively, and he thinks independently),
  • it increases the scope of a person’s inner freedom (greater strength of character and consistency in pursuing goals, not falling into addictions, greater independence of action),
  • it increases the scope of a person’s external freedom (greater protection of privacy, greater possibilities of action, freedom from control by the state or companies); health and material issues are also related to this,
  • it facilitates the building of real, strong and happy relationships with other people, positively impacting families and other communities.

What to Do to Make Changes Positive

Below are some ideas on what to do to meet the above criteria.

Overall perspective

Many things unrelated to AI itself need to be done for the effects of these changes to be positive.

First of all, the effects depend on the moral attitude of those responsible for creating and applying these systems. Many people rightly point out that AI is, in itself, a neutral tool; everything depends on how it is used. If extremely influential people implement their vision of the world through the development of these algorithms, and this vision is inconsistent with the real good of man (for example, because their only goal is to gain more wealth, power or control over others), then the impact of AI will obviously be negative. The same applies to everyone involved, at every level.

Therefore, the development of technology must always go hand in hand with the moral development of people. We already know that the development of generative models such as GPT has a huge impact on life, and it takes great effort and care to make that impact a good one. That is why we must work hard to become more moral, make the right choices in our private lives, and care for the good of other people.

We must also follow the general principles of good development, such as:

  • mitigating the negative effects of the current development of technology: encouraging people to become independent from it (e.g. hours without a smartphone), to build interpersonal relationships, and to spend time in nature,
  • a lot of exchange of thoughts, debates, discussions, and working out solutions at various levels; their subject should be not only technology but, above all, religious, philosophical and ethical issues: what is the nature of man, and what is the good of man?
  • broad support for and openness to people’s initiative and ingenuity,
  • a limited role of the state (and international organisations), which should interfere only where solutions cannot be worked out at the level of lower organisations (e.g. protection against the monopolies of large corporations, ensuring an appropriate and fair legal space).

AI Perspective

Several actions can also be taken in response to the challenges associated with the current situation in AI.

Portraying AI the right way

Eliminating misleading terms such as "artificial intelligence" is probably impossible at this stage. However, it is worth educating people about what neural networks are and how far they are from the actual workings of biological neural networks. When describing AI algorithms, anthropomorphisms (e.g. phrases suggesting that AI knows, understands, or wants something) should be avoided.

Educating how to use AI properly

It is important that people are educated on the proper use of AI-based tools. Awareness of the limitations of AI algorithms should be promoted, and critical thinking and evaluation of the information generated by these systems should be encouraged. Limited trust in decisions made by AI, combined with independent thinking and healthy scepticism, will help avoid erroneous or inappropriate conclusions.

We should also be sensitive to privacy issues (paying attention to what content we share and where, and under what conditions it is processed).

Transparency of activities and interests of organisations developing AI

To prevent excessive concentration of power and influence in the hands of large corporations, it is necessary to promote equal opportunities for smaller companies and startups working on artificial intelligence. Creating an environment conducive to innovation and competition can contribute to greater diversity and accountability in this field. In addition, research on artificial intelligence should be open to some extent and subject only to reasonable restrictions.

Summary

The current situation should be approached with great seriousness and prudence. We have created some really powerful tools, and their impact on society will certainly be colossal. However, I am far from pointless scaremongering, let alone from trying to stop development.

In this post, I gave some rules and ideas on how to direct AI development properly. I hope that more and more such ideas will appear, be discussed, and be implemented.

Of course, people’s possibilities and responsibilities for this development vary greatly: from those completely unconnected with this technology, who should above all take care of their own development and of properly understanding and using these technologies; to engineers, who understand more about how the algorithms work and can effectively educate their surroundings; to people in decision-making positions, who bear the greatest responsibility for the shape of the implemented solutions.

I want this article to be one voice in the ongoing debate, and I will happily encourage and take part in the discussion around it!
