On April 23, the Italian Council of Ministers approved a draft law (“disegno di legge”) to introduce provisions that empower the Italian Government in the field of artificial intelligence (“DDL AI”). This draft law amends the initial version that was circulated in early April. The DDL AI, which includes 26 articles, establishes regulatory criteria aimed at balancing the opportunities presented by new technologies with the risks associated with their misuse, underutilization, or harmful application. While still under parliamentary review and not yet enacted, this draft law anticipates some principles of the European Regulation on Artificial Intelligence (“AI Act”) and incorporates several specific national characteristics. It complements, rather than duplicates, the AI Act, which was approved by the European Parliament on March 13. The draft law adopts an anthropocentric approach and explicitly refers to fundamental rights and freedoms as enshrined in the Italian Constitution and European Union law. It also emphasizes the principles of transparency, proportionality, security (especially cybersecurity), enhancement and protection of personal data, accessibility, and non-discrimination, safeguarding human autonomy and self-determination. Furthermore, the DDL AI introduces both overarching principles and sector-specific provisions, covering areas such as health and disability, labor, public administration, judicial activities, and national cybersecurity. The regulations span five domains: national strategy, national authorities, promotional activities, copyright protection, and criminal sanctions. This paper focuses on data protection, labor, copyright protection, and criminal sanctions in the context of the DDL AI. 
The principles of information and data protection

The DDL AI stipulates that the entire lifecycle of artificial intelligence (“AI”) systems and models should adhere to the fundamental rights and freedoms recognized by the Italian and European legal frameworks and uphold the principles of transparency, proportionality, security, data enhancement, personal data protection, confidentiality, accuracy, non-discrimination, gender equality, and sustainability. Specifically, concerning personal data protection, the Italian Legislator generally reaffirms the principles already established in Regulation (EU) 2016/679 (“GDPR”) and Italian Legislative Decree 196/2003, as amended by Legislative Decree 101/2018 (the “Privacy Code”). Accordingly, the processing of personal data through AI systems must be lawful, fair, and transparent. In addition, personal data must be processed in a manner consistent with the specified, explicit, and legitimate purposes for which it was collected, in line with the principle of purpose limitation (Article 4 of the DDL AI). The security of personal data collected via AI systems is crucial (Articles 3(2) and 6 of the DDL AI). Data-processing methods must therefore ensure and maintain fairness, trustworthiness, quality, suitability, and transparency. Appropriate security measures, such as encryption, authentication, and access control, are required to protect data against unauthorized access, loss, or theft, and security policies should be regularly updated to address evolving threats (Article 17 of the DDL AI). Moreover, the DDL AI specifically addresses the privacy notice (Article 4(3) of the DDL AI): when personal data is processed within AI systems, users (i.e., the data subjects) must receive a clear and straightforward privacy notice, ensuring their awareness and their right to object to any unauthorized processing of their data.
The DDL AI also addresses the use of AI technologies by minors under the age of fourteen, requiring consent from a person holding parental responsibility. This aligns with the rules for accessing social networks (Article 2-quinquies of the Privacy Code). Article 8(2) of the GDPR mandates that reasonable efforts be made to verify that consent is given or authorized by the holder of parental responsibility over the minor, taking available technology into account. In practice, however, these provisions have proved largely ineffective and pose significant challenges. The broad scope of the provision, particularly the reference to “access to AI technologies,” could become impractical as AI is integrated across a wide range of devices through the Internet of Things: continuous verification of user activity and consent could be nearly impossible to maintain. Revisions to this requirement are therefore anticipated during the parliamentary review. Finally, a coordination challenge emerges concerning the provisions on scientific research and experimentation in healthcare set out in the PNRR decree, which seeks to amend Article 110 of the Privacy Code (for a detailed description of this topic, please refer to our previous paper, available here, in Italian). That reform removed the obligation of prior consultation with the Italian Data Protection Authority (“Garante Privacy”) for retrospective clinical trials where obtaining consent is unfeasible. Conversely, the DDL AI appears to reintroduce the requirement for prior authorization from the Garante Privacy for the processing of personal data in this field.

The provisions on the use of AI in labor matters

The use of AI in the workplace is framed from a human-centered perspective: AI should improve working conditions, safeguard workers’ mental and physical integrity, enhance work performance, and boost productivity. The principles of equality and non-discrimination are also reaffirmed.
To maximize the benefits and mitigate the risks of using AI systems in the workplace, the legislator has established the Observatory on the Adoption of Artificial Intelligence Systems in the Workplace (Osservatorio sull’adozione di sistemi di intelligenza artificiale nel mondo del lavoro), hosted by the Italian Ministry of Labor and Social Policies. This approach is consistent with the AI Act, which employs a risk-based methodology (for a detailed description of this topic, please refer to our previous paper, available here). Risks associated with AI systems may affect both the recruitment and hiring phase and the entire management of the employment relationship. Indeed, all AI systems used in employment, workers’ management, and access to self-employment should be classified as high-risk, as they may affect future career prospects and workers’ rights. In particular, AI systems used to monitor performance and behavior may also infringe upon the fundamental rights to data protection and privacy. Therefore, the provisions of the AI Act concerning high-risk AI systems (Articles 26 and 27), including those specifically aimed at the use of AI in the labor context, must be applied. The AI Act mandates that, before deploying or using a high-risk AI system in the workplace, employers who are deployers must inform workers’ representatives and the affected workers that they will be subject to such systems. This is a general disclosure obligation, without specific mandates for prior consultation and/or negotiation concerning the introduction of AI in the workplace. However, the AI Act (Article 2(11)) allows member states to maintain or introduce laws, regulations, or administrative provisions that afford workers greater protection of their rights with respect to the use of AI systems by employers, or to endorse or permit collective agreements more favorable to workers.
In the Italian legal context, the safeguards set out in Article 4 of Law 300/1970, as amended by Legislative Decree 151/2015 (the “Workers’ Statute”), may be particularly relevant. AI systems could potentially be classified as “defensive controls in a narrow sense” or “defensive controls in a broad sense,” requiring compliance with either the requirements of Article 4 of the Workers’ Statute, for the former, or the specific criteria developed in case law, for the latter. In light of the above, it would be advisable to prepare a corporate policy regulating the use of AI in the workplace. Such a policy should include measures for the protection of personal data and confidential information, and it would also be appropriate to define the consequences of improper use of the technology. The DDL AI also includes a provision on intellectual professions: the use of AI systems is limited to activities that are merely ancillary and instrumental to the professional work performed, and the professional must, for the sake of transparency, inform the client if AI technologies are used. As currently phrased, this is a general information requirement and may give rise to several uncertainties if left unamended.

Provisions about copyright protection

Following the transparency approach that runs throughout the DDL AI, Chapter IV includes provisions on user protection and copyright, outlining amendments to Italian Legislative Decree 208/2021 (known as “TUSMA”).
Specifically, Article 23 of the DDL AI introduces provisions within the framework regulating audiovisual services. Subject to the prior consent of the rights holders, content that has been wholly or partially generated, modified, or otherwise altered through AI systems, and that is capable of presenting as real data, facts, and information that are not, must be identified by an appropriate identification mark. The legislator thus addresses concerns related to so-called deepfakes, extending to AI-generated content the obligation, already provided for advertising content, to inform users through appropriate means of the commercial intent behind the communication. Responsibility for this transparency obligation rests with the author or rights holder of the content, who must embed in the content a conspicuously visible and identifiable element: the acronym “AI” or, for audio content, an audio announcement, or in any case a technology appropriate to enable such identification. Without prejudice to the rights and interests of third parties, this obligation does not apply when the content is part of a work or program that is clearly creative, satirical, artistic, or fictional. Failure to comply with these obligations will result in an administrative fine ranging from EUR 10,329.00 to EUR 258,228.00. The legislator leaves the implementation procedures to soft-law sources (i.e., forms of co-regulation and self-regulation, or a code of conduct). The DDL AI also introduces two additions to Article 42 of TUSMA. First, it requires providers of video-sharing platforms subject to Italian jurisdiction to implement appropriate measures to safeguard the general public against deepfakes.
Second, it requires providers of video-sharing platforms to include a feature within those platforms allowing users who upload user-generated video content to disclose whether such content has been created, edited, or altered, even partially, in any form or manner, through the use of AI systems, insofar as this is known to them or reasonably knowable. Further, Article 24 of the DDL AI provides for amendments to Law 633/1941 (the “Italian Copyright Law”). The legislator first broadens the scope of protection of the Copyright Law to include works created with the aid of AI tools, as long as the human input is creative, relevant, and demonstrable. This confirms what has so far been decided, for example, by the U.S. courts and is also in line with the initial guidance of the Italian Supreme Court (Corte di Cassazione). The DDL AI also introduces Article 70-septies into the Italian Copyright Law, as part of the so-called “free uses.” Reproduction and extraction of works or other materials through AI models and systems (including generative ones) are subject to the conditions set forth in Articles 70-ter and 70-quater of the Italian Copyright Law, which govern the text and data mining exceptions (i.e., the process of deriving information from machine-read material, and thus AI system training activities). Hence, the possibility of using copyrighted material without the owner’s consent to train AI systems is balanced by the rights holders’ “opt-out” right, i.e., the power to exclude their works from the data sets used for training.

Criminal sanctions

In Chapter V of the DDL AI, the legislator addresses the criminal provisions.
The use of AI systems is added as both a common aggravating circumstance and a specific aggravating factor for certain offenses, including criminal impersonation, stock manipulation, fraud, computer fraud, money laundering, self-laundering, and the use of money, goods, or benefits of unlawful origin. Article 171 of the Italian Copyright Law and Article 195 of Legislative Decree 58/1998 (the Consolidated Law on Financial Intermediation) are also amended, incorporating two new criminal offenses: the first pertains to breaches of the aforementioned Articles 70-ter and 70-quater of the Italian Copyright Law, while the second relates to market manipulation. Finally, a new criminal offense is introduced into the Italian Criminal Code: the unlawful dissemination of content generated or manipulated with AI systems. Under this new offense, anyone who causes, or aims to cause, unfair harm to another party by sending or disseminating deepfakes is punishable.