Since October 7, the world has watched with stupefaction (whether to denounce it or to celebrate it) the monstrous barbarism of Hamas terrorists, their obvious pleasure in torturing, raping, and murdering women and children, and the ruthless reaction of an Israeli government that, in order to put an end to a terrorist movement it has long promoted, is falling into the trap set by that movement and killing a large number of Palestinian women and children.

During this same period, many other major events occurred elsewhere: the acceleration of Israeli settlement in the West Bank, the continuation of extremely deadly fighting in Ukraine, the terrible civil wars in Sudan and the Democratic Republic of the Congo, the dramatic expulsion of more than one and a half million Afghans back to the country they had fled, and so many other tragedies around the world. Moreover, among the many signs this month of the worsening environmental situation: a felt temperature of 58 degrees Celsius in Rio de Janeiro on November 14, 2023.

And also, in other areas, essential developments with vertiginous consequences for humanity:

On the one hand, the announcement this week, by Chinese teams, of the birth (more than two years ago) of a chimeric macaque, created with stem cells from another macaque, opening the way to models for studying human diseases and even to human organs grown in other primates for transplantation.

On the other hand, it was in the field of artificial intelligence (AI) that the past month was the most extraordinary: not only because these technologies have been used in every conflict in the world, but because many new projects have been announced. OpenAI has released a new version of ChatGPT (GPT-4 Turbo), with stunning changes that make it possible to build new applications offering revolutionary ways to create images (DALL·E 3) and to convert text to speech (TTS) and speech to text. OpenAI also reported that it is working on its own equivalent of the iPhone, which would be able to interact with ChatGPT.

Apple has also announced that it will install its own equivalent of ChatGPT on its iPhones. Samsung has launched its own AI model (Samsung Gauss), which will be included, in January 2024, in its new phone, the Galaxy S24. Google has launched new AI tools, based on its Google Bard, for advertisers and advertising agencies. Amazon said it was preparing to launch its own AI chatbot, Olympus, a rival to Google Bard and ChatGPT, with 2,000 billion parameters, twice as many as GPT-4. Mistral, the French AI startup, is beginning to deploy its open-source model, notably on Azure. Finally, Meta and Ray-Ban have launched a new version of their glasses, integrating a 12-megapixel camera and an AI assistant that should, from early 2024, be able to identify places, plants, animals and people.

During the same month, Google invested heavily in Character.AI, an application that allows users to converse, by means of an AI, with fictional characters or famous people; and Thomson Reuters acquired an extraordinary AI application for legal work, Casetext, and launched an extensive AI training program for its 26,000 employees.

Also since the beginning of October, while the main companies in the sector (Anthropic, Google, Microsoft and OpenAI) were launching among themselves the “Frontier Model Forum,” in order to regulate themselves and to do everything possible to block open source, governments were trying to establish public governance. At Bletchley Park, in Buckinghamshire, the world’s first summit was held on the existential threat that AI poses to humanity. It resulted in a declaration signed by all the countries present (including the United States, China, Great Britain, India and Brazil) and by the European Union, pledging to use AI to protect human rights and to achieve the sustainable development goals set by the United Nations; announcing the forthcoming creation of an International Panel on AI Safety (IPAIS), modelled on the IPCC for climate, which will be responsible for regularly assessing the risks that AI poses to humanity; and, finally, announcing the start of negotiations on a global regulation of the subject. At the same time, in the United States, a presidential executive order required all companies in the sector to disclose to the American government any new AI project, before publishing or marketing it, if it might threaten the freedom of citizens, consumers or workers, or American sovereignty. At the end of October, the G7 members, meeting in Japan, published principles for action on the same subject. Finally, the United Nations has just announced the creation of a dedicated Advisory Committee on the matter.

So goes the world, made of medieval barbarism and of potentially life-saving science fiction. Will we be able to overcome the former and put the latter at the service of Good…?

j@attali.com
