Dubna. Science. Commonwealth. Progress
Electronic English version since 2022
The newspaper was founded in November 1957
Registration number 1154
Index 00146
The newspaper is published on Thursdays
50 issues per year

Number 5 (4753)
dated February 13, 2025:


Institute day by day

Artificial intelligence: trusted and safe

On 30 January, the Director of the Institute of System Programming of the Russian Academy of Sciences (ISP RAS), RAS Academician Harutyun I. AVETISYAN, delivered the report "System programming and technologies for the development of trusted systems (including artificial intelligence)" at the JINR seminar.

The seminar aroused great interest among JINR employees.

Opening the first JINR seminar of this year, JINR Director Grigory Trubnikov highlighted that it is a good tradition to invite to it prominent figures from science, education, and international relations. "We have a wonderful guest today, and we have known each other for a long time," Grigory Trubnikov said. "I mean not only our personal contacts, but also the cooperation between ISP and the Joint Institute. The Institute of System Programming is a long-standing and reliable partner in building computational networks for the analysis of Big Data from mega-science projects. I think we have a good future ahead of us, considering the huge facilities under construction in Dubna and in the Partner Countries. We have a huge data storage of over 160 petabytes. It is a blessing to have such an infrastructure, but one must also be able to protect it, including in terms of reliability.

I am grateful to Harutyun Avetisyan for agreeing to speak today and for our fruitful and friendly cooperation. Since he is a graduate of Yerevan State University, our personal contacts also help to strengthen scientific ties with the Republic of Armenia."

At the beginning of his report, the speaker emphasized that today's software grows fast, becomes ever more complex, is never isolated, and at the same time must always be efficient, productive, and trusted (in the sense of safe). Companies build their products on open-source projects that they develop jointly: it is impossible to do this alone, since software and operating systems today run to millions of lines of code. Large companies such as Intel, IBM, and Microsoft lost out by developing closed compilers. Among cloud technologies, the leading open cloud platforms are OpenStack, OpenNebula, and Eucalyptus. The latter two projects were developed by separate teams, and at first OpenNebula was the most advanced, being deployed at CERN. Afterwards, the leader became OpenStack, which immediately set a course for community building and unites over 400 developers today. The speaker shared ISP's experience in developing the OpenStack-based Asperitas cloud environment, which in 2024 obtained from the Federal Service for Technical and Export Control (FSTEC of Russia) a certificate of compliance with the fourth level of trust requirements and requirements for virtualization tools. Asperitas became the basis for the commercial platform ACloud that will be used by the KI - JINR - ISP Consortium for the IT support of the "megascience" research infrastructure.

The speaker also touched upon the issue of security. A few years ago, there was an opinion that "perimeter protection" was enough. Of course, Avetisyan stated, antiviruses are necessary, but in the early 2000s it was realized that if a system is not designed in the right way from the very beginning, and if certain code analysis tools are not used, it will not obtain any certificate of trust. It is necessary to ensure the so-called secure software development life cycle, and every few years the requirements for the tools are tightened. In Russia, the corresponding state standard was adopted in 2016. Software errors are the main cause of system vulnerability, and the line between a deliberately planted backdoor (a "bookmark") and a programmer's error has disappeared. The speaker gave impressive examples of such errors. A version of the OpenSSL library with a vulnerability was released in March 2012, and the bug was discovered only two years later. During this time, half a million websites were affected and losses amounted to 500 million dollars. "We are proud to have authored the development of a prototype secure software development pipeline. More than 200 companies, including Kaspersky Lab, use our technology," Avetisyan emphasized.
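The OpenSSL bug mentioned above illustrates this class of errors well: a length field supplied by the sender was trusted instead of being checked against the actual payload. The toy Python sketch below (purely illustrative; it is neither the actual OpenSSL code nor ISP's tooling) shows the vulnerable pattern and the bounds check that code analysis tools and a secure development life cycle are meant to enforce.

# Toy illustration of an over-read caused by trusting a sender-supplied
# length (the class of error behind the 2012 OpenSSL vulnerability).

def handle_heartbeat(payload: bytes, claimed_len: int, adjacent: bytes) -> bytes:
    # Vulnerable version: the payload sits next to other data in memory,
    # and the reply echoes back claimed_len bytes without validation.
    buffer = payload + adjacent
    return buffer[:claimed_len]   # over-reads when claimed_len is a lie

def handle_heartbeat_fixed(payload: bytes, claimed_len: int) -> bytes:
    # Fixed version: validate the attacker-controlled length first.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload size")
    return payload[:claimed_len]

secret = b"PRIVATE-KEY-MATERIAL"
print(handle_heartbeat(b"hi", 16, secret))  # leaks part of the secret
print(handle_heartbeat_fixed(b"hi", 2))     # b'hi'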

The new challenge is artificial intelligence (AI), which becomes cheaper and cheaper, is embedded everywhere, and will keep being embedded. For the last few years, a myth has been spreading: strong AI will take over. "I always say: there is no strong AI, it's a myth, it's a technology without subjectivity. Is it intelligence at all?" the speaker wondered. Weak AI can only solve the tasks for which it is programmed and is vulnerable to biases and errors. Strong AI would draw intelligent conclusions, use strategies, plan actions, and be capable of abstract thinking; it does not yet exist. The danger is that AI is embedded massively and without control. This is where open source is important too! The recently released DeepSeek R1 by the Chinese company DeepSeek (an open-source solution) is much cheaper than ChatGPT by the American company OpenAI (a closed-source solution). A few days after its release, DeepSeek overtook ChatGPT in the number of downloads in Apple's App Store. As for AI development tools, there are open frameworks here, too.

One of the key issues in AI is trust. According to a poll by the Russian Public Opinion Research Centre, "Do you trust artificial intelligence?", conducted last December, 52% answered "yes": AI simplifies life, it is objective and impartial, it can be given jobs that are dangerous for humans, and so on. 38% do not trust it, up six percentage points from 2022; the arguments cited are AI failures and errors, the possibility of it getting out of human control, the risk of leaks of the data AI accumulates, and more. Our national AI strategy for the period until 2030 explicitly states: "Trusted artificial intelligence technologies - technologies that meet safety standards, developed with due regard for the principles of objectivity, non-discrimination and ethics, excluding in their use the possibility of causing harm to a human being, ...damage to the interests of society and the state". In the case of AI, cybersecurity (problems of development, attacks, planted backdoors, and so on) is only part of the problem; trust also needs to be ensured from the social and humanitarian side (the honesty of generative AI, manipulation of public opinion and individual consciousness, and others). And it will get worse: we will not be able to tell whether we are communicating with living people or not.

H. I. Avetisyan dwelled on the sources of threats to AI in terms of cybersecurity: since the main information is contained not in the program code but in the data, the data themselves are the source of threats. In the social and humanitarian area, the threats are deepfakes (audio and video) and the manipulation of people with an unstable psyche. In 2023, Elon Musk (Tesla and SpaceX) and Steve Wozniak (Apple) signed an open letter on the need for a six-month moratorium on training powerful AI systems. There is a growing number of initiatives around the world to regulate trusted AI (AI that we can trust), and the most stringent AI requirements are currently imposed in the European Union. The whole world will follow this path and so will we, the speaker believes. Today, regulation in Russia is developing widely: in 2019, the national strategy for AI development until 2030 was approved; in 2021, the Research Centre for Trusted Artificial Intelligence of ISP RAS gained state support, and the Academy of Cryptography started developing a scientific base for secure AI technologies and systems used in government information systems. Last year, with the support of the Ministry of Finance, the Consortium for Research on the Security of AI Technologies was established; it includes the Science and Technology Centre for Digital Cryptography, the Academy of Cryptography, and ISP RAS, and companies and universities have started to join. The speaker also spoke about the work of the ISP RAS Research Centre for Trusted AI and the software tools developed there to combat both cyber threats and social and humanitarian ones. He told about a new research area that already has a practical implementation: so-called federated learning for security, in which a model can be trained at each of, say, 50 centres without the centres transmitting data to each other, while the resulting prototype behaves as if all these data were in one place. Together with Sechenov University and Yandex, the efficiency of this approach was demonstrated on medical data.
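The federated learning scheme described above can be illustrated with a minimal sketch (a toy in Python with NumPy, with assumed data and parameters; it is not the ISP RAS prototype): each centre computes a model update on its own private data, only the parameters are sent to a coordinator, and the averaged model converges as if all the data had been pooled.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])   # hypothetical ground truth

# Three hypothetical centres; each holds data that never leaves the site.
centres = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    centres.append((X, y))

def local_step(w, X, y, lr=0.1):
    # One gradient-descent step of linear regression on one centre's data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

w = np.zeros(3)
for _ in range(200):
    updates = [local_step(w, X, y) for X, y in centres]  # parameters travel,
    w = np.mean(updates, axis=0)                         # raw data do not

print(np.round(w, 2))  # close to true_w, as if the data were in one place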

Another area, already legally established in the European Union and the United States, is digital watermarks, which should be used to label the output of generative AI. The digital watermark project is being developed jointly with the Steklov Mathematical Institute; its goal is to make it possible to distinguish between natural and synthesized data.
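A toy sketch of the digital watermark idea (hypothetical, in Python with NumPy; it does not reproduce the Steklov Institute project): the generator adds a low-amplitude pseudorandom pattern derived from a secret key to the data it synthesizes, and a detector that knows the key checks for correlation with that pattern.

import numpy as np

KEY = 12345  # secret key shared by the generator and the detector

def add_watermark(signal, strength=0.1):
    # Embed a keyed pseudorandom +/-1 pattern at low amplitude.
    pattern = np.random.default_rng(KEY).choice([-1.0, 1.0], size=signal.shape)
    return signal + strength * pattern

def looks_synthetic(signal, threshold=0.05):
    # Correlate with the keyed pattern: watermarked data score ~strength,
    # natural data score ~0.
    pattern = np.random.default_rng(KEY).choice([-1.0, 1.0], size=signal.shape)
    return float(np.mean(signal * pattern)) > threshold

rng = np.random.default_rng(1)
natural = rng.normal(size=10_000)
synthetic = add_watermark(rng.normal(size=10_000))

print(looks_synthetic(natural))    # False
print(looks_synthetic(synthetic))  # True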

ISP proposed a model of cooperation within the country: to develop a repository of trusted software within Russia, a controlled and secure one, and to organize a community (the academic community and leading universities) around the key technologies, of which there are only a few hundred. All companies, as well as the government, can then build their own technologies from these secure "building blocks". As a result, at the initiative of FSTEC, the Centre for System Software Security Research was established on the basis of ISP, where more than 70 companies and universities work together with ISP. Among the first results: more than 30 critical vulnerabilities in the Linux kernel have been identified, and more than 500 patches have already been accepted into the mainline kernel branch. As the speaker emphasized, a scalable ecosystem has been built within the country that ensures the reproduction of human resources and technologies and, as a result, technological independence, fast development, and adaptation.

The report prompted many questions, from narrowly specialized to almost philosophical, which the speaker endeavoured to answer. The cooperation between JINR and ISP, which is of strategic significance for the Joint Institute, continues.

Olga TARANTINA,
photo by Elena PUZYNINA
 


When quoting, a reference to the weekly is obligatory.
Reprinting of materials is allowed only with the consent of the editors.
Technical support -
LIT JINR