Together with a group of activists, we are thinking of proposing an ECI (European Citizens' Initiative) on the topic of artificial intelligence. I would really like to hear your ideas and concerns on this subject.
Keep in mind that this is still a very early draft, which is exactly why I would like the opinion of a group of experts like you. Without getting into technicalities, what principles and ethical standards would you want a European AI to guarantee? What dangers must absolutely be avoided?
Title: For a European Artificial Intelligence: open and at the service of democracy
We ask the Commission to develop an open artificial intelligence system in the service of the right to knowledge and of democracy.
We call for the creation of a "Large Language Model" system that is open source and trained on documents from the public administrations of the Member States and the EU, such as legislation, case law, and the cultural and scientific heritage...
One application will be a virtual assistant that helps citizens in their dealings with the EU and national administrations. The service will improve knowledge of the rules and facilitate democratic participation, allowing everyone to interact in simple, not necessarily technical, language. The model could also be integrated upstream in the regulatory process, assisting legislators in the drafting of rules.
REASONS
The recent advances shown by generative AI, especially Large Language Models (LLMs), suggest a huge impact on productivity. In "The economic potential of generative AI", McKinsey estimates the value-generation potential for industry at between $2.6 trillion and $4.4 trillion.
Among the risks accompanying the diffusion of these technologies are increased inequality, driven by unequal access, and the risk inherent in an oligopoly over the development and ownership of models. As LLMs spread and their functions are integrated into a multitude of products, often imperceptibly to the end user, transparent oversight of the models and of the datasets used to train them will become increasingly important for understanding how these systems work.
The Commission's contribution is necessary because LLMs have been shown to improve in performance as they grow in size, and the cost of training grows accordingly; for example, OpenAI CEO Sam Altman told the press that training GPT-4 cost more than $100 million. It therefore seems clear that being internationally competitive requires investments of at least this order of magnitude, which are more easily achieved at a European scale.
The governance of the model's management and development must reduce the risk of interference by commercial interests or of state control; this can be achieved through a non-profit organisation or foundation.