France has opened a new chapter in the political and legislative debate on abortion, in response to the restrictive approach adopted in the U.S.

In 2022, the Supreme Court of the United States, in Dobbs v. Jackson, overturned the precedent of Roe v. Wade of January 22, 1973. As a result, the voluntary termination of pregnancy (abortion) is no longer a right guaranteed at the federal level – and thus throughout the United States – under the control of the Supreme Court; instead, each State may prohibit it under its own law. According to the majority opinion, there is in fact no express right to abortion in the US Federal Constitution. Under the new framework, fourteen States have prohibited it, as explained in Rapport n° 42, issued by Mme Agnès Canayer before the French Senate on behalf of the competent commission asked to examine a proposal to amend the French Constitution by guaranteeing the fundamental right to abortion.
In Europe, the echo of Dobbs v. Jackson prompted the adoption of different approaches.
For example, in September 2022, Hungary introduced an obligation for women seeking an abortion to listen to the foetal heartbeat. Conversely, in October 2022, Finland authorised abortion without requiring any justification from the woman, while in June 2023 even Malta opened up to abortion under certain conditions, namely where the woman's life is at risk or the foetus is not viable.
The French solution falls within this second approach, seeking to affirm the importance, in a civil law system, of including among the fundamental rights recognised at the highest level the right of each woman to obtain an abortion. Thus, the French answer to the risk that case law might provide a misleading interpretation of the constitutional architecture of the rights and freedoms embodying French values consists of a proposal of loi constitutionnelle aiming to introduce first the right, then the freedom, to abortion.
The legislative process started in February 2023, when Parliament voted on a proposed amendment to the Constitution, which would have required a referendum to be finally adopted. A government bill to amend the Constitution was then announced by President Macron in October 2023, on the initiative of the Prime Minister. An amendment originating from the government requires approval by a three-fifths majority of the Congress (i.e. the joint chambers of Parliament), without a referendum. Article 89 of the French Constitution also states that in both cases the National Assembly and the Senate must first agree on the same text.
Regarding the text of the amendment bill, the aim was to expressly guarantee "la liberté garantie à la femme d'avoir recours à une interruption volontaire de grossesse" (the guaranteed freedom of women to have recourse to a voluntary termination of pregnancy). The wording of the new seventeenth paragraph of Article 34 of the French Constitution was then supplemented by the Philippe Bas amendment, namely: « La loi détermine les conditions dans lesquelles s'exerce la liberté de la femme de mettre fin à sa grossesse » (the law determines the conditions under which the woman's freedom to end her pregnancy is exercised). In particular, the added sentence recalls the balance struck by the national legislation on abortion in France. Firstly, the French Public Health Code states, under Article R4127-47, the clinician's right to conscientious objection, as a direct consequence of the liberté de conscience enshrined in the 1789 Déclaration des droits de l'homme et du citoyen. Secondly, it has been argued that the consecration of the existence of this liberté should be consistent with the spirit of the Simone Veil Act of 17 January 1975. Therefore, the recognised role of Parliament in establishing the conditions under which this freedom is exercised shall be maintained, as has been the case since 1975. The aim of the proposals is to found the guarantee of this freedom in the French Constitution itself.
Modifying the Constitution to include the right to abortion is a unique approach among the legislative techniques adopted on the topic in other legal systems. Where abortion is mentioned in constitutional charters, in fact, it is to establish its prohibition, as in the 8th Amendment of the Irish Constitution, repealed in 2018 by the 36th Amendment. The latter, however, merely allows Parliament to regulate the termination of pregnancy, without establishing any right or freedom for women, as the French amendment does. Apart from these few cases, moreover, abortion is usually not expressly mentioned, even though it can easily be detected within other guaranteed rights: in Slovenia, for instance, a right to abortion emerges where the Constitution establishes the "Freedom of Choice in Childbearing" under Article 55, while in South Africa it is derived by combining the constitutional sexual and reproductive rights with the right to autonomy. The constitutional protection of the right to abortion through a legislative initiative with constitutional effect is shared with some legal systems such as Nepal and some states of the US.
After approval by the National Assembly and the Senate on March 4th, 2024, the 25th amendment to the 1958 French Constitution – which comfortably exceeded the three-fifths majority in the joint chambers (i.e. the Congress) with 780 votes – will be signed on March 8th, 2024, International Women's Day.
It will be interesting to follow the debate at the global level and to see whether the French constitutional right might produce a domino effect.


Accountability, Transparency, and Fairness to Assess Generative AI Solutions through the Lens of Data Protection Law

At the end of March 2023, the Italian Data Protection Authority imposed on OpenAI LLC a temporary limitation of the ChatGPT services in Italy, including a ban on processing under Article 58(2)(f) of EU Regulation 2016/679, the General Data Protection Regulation (hereinafter GDPR), because of several critical aspects affecting the protection of the fundamental rights of its users, especially children.
The decision rested on the following grounds of assessment. Firstly, the lack of a privacy information policy explaining to users and data subjects how OpenAI LLC collected and processed data on the platform. Secondly, the lack of a legal basis for processing personal data for algorithm-training purposes. Thirdly, the lack of safeguards to verify users' age in order to prevent minors under 13 years old from using the software, in line with its own terms and conditions – which, in any case, were considered misleading in some parts relating to data processing activities.
That decision opened an international debate on the risks and opportunities of OpenAI applications, producing a domino effect in other EU systems: for example, on April 13th the EDPB launched a task force on ChatGPT, while on April 23rd the French CNIL published a dossier on generative AI (https://linc.cnil.fr/dossier-ia-generative-chatgpt-un-beau-parleur-bien-entraine) to explain the chatbot's mechanisms and their effects from an ethical-legal perspective.
In April 2023, the limitation was lifted after the US company implemented a series of technical and organisational measures able to mitigate the risks of compromising fundamental rights. In particular, the platform implemented a more easily accessible procedure for opting out of data processing, for both users and non-users; it published on its webpage a detailed privacy policy covering also the data processed to train the algorithm; and it developed mechanisms allowing users to erase and/or correct possible inaccuracies. These safeguards were considered the first essential measures for making the chatbot service available again.
The Italian ChatGPT affair is thus particularly interesting to analyse from several perspectives.
It can be considered a worthwhile example of collaboration with the competent data protection authority to identify proper organisational and technical measures to mitigate the impact of a given data processing activity on users' fundamental rights, even if stimulated by an investigation rather than promoted by design. Indeed, interaction between the data controller and the competent authority is provided for by Article 36 GDPR, which establishes the conditions for a "prior consultation" mechanism where the controller's own data protection impact assessment indicates a high risk to the data subjects' fundamental rights and freedoms, or where none of the implemented safeguards can ensure an appropriate level of mitigation.
Moreover, the ChatGPT case highlighted the role of data protection law in setting the standards of accountability for AI-based solutions. In fact, since the AI package has not yet been approved, the lack of a specific set of obligations for AI developers is covered by the GDPR, at least as far as personal data are concerned. Thus, the GDPR can protect only the actual or potential users of a given AI-based application, and only insofar as they can be considered data subjects of a given data processing operation. However, any further implication for fundamental rights not included in a data protection impact assessment is still not enforceable by a data protection authority at this stage. For this reason, many data protection authorities have decided to open dedicated departments and task forces in order to specifically address data protection issues related to AI-based solutions. Indeed, despite the decision to restore the ChatGPT services, the monitoring and risk assessment activities on generative AI have only just started, in light of the principles of accountability, transparency, and fairness.
In this regard, the grounds analysed by the Italian Data Protection Authority are shaping a minimum standard suitable for analysing the effects of AI solutions, including generative ones, through the lens of data protection law. In particular, it emerged that the developer acting as data controller has at least: i) to address the risks considering the different categories of data subjects/users and their specific vulnerabilities; ii) to ensure a transparent and clear information policy; iii) to ensure easy mechanisms to opt out of the data processing activities.
As regards point (i), children are per se considered vulnerable users, and specific technical and organisational measures have been required to verify their age in order to establish an informed and lawful contractual relationship between the chatbot service provider and the user. However, individual digital vulnerabilities arising from the digital divide, or the possible consequences for human oversight, are not protected by data protection law. They could be addressed through a trustworthiness assessment of the given AI-based application (as envisaged by the High-Level Expert Group on AI), but not in the context of a pure data protection impact assessment under Article 35 GDPR.
The transparency ground (ii) is limited as well. According to the GDPR, certain information on the data processing is mandatory, including information related to possible profiling activities – that is, those undertaken by the algorithms. However, the so-called privacy policy does not have to detail what may happen to data once they are anonymised, even though this could be essential for understanding the implications of using a given application. Likewise, ensuring a user-friendly mechanism to opt out (iii) of the data processing activities undertaken by the given application is a limited obligation under data protection law: it applies as long as the data are personal (including pseudonymised data), but the data subject completely loses control over the information once the data have been made sufficiently anonymous. In addition, data protection law cannot solve possible issues related to technical bias resulting in a discriminatory decision against a category of persons; it can intervene only where the decision directly impacts the data subject (see the Deliveroo case, Tribunal of Bologna, 31 December 2020). Therefore, even the fairness achieved through GDPR compliance is limited with respect to the range of possible risks of compromising fundamental rights.
This short analysis has aimed to highlight how the Italian ChatGPT affair has been essential in developing a remarkable assessment of AI applications through the lens of privacy and data protection compliance. It opened a serious debate on the urgent need to extend the analysis to all possible ethical and legal implications of a given solution through a consolidated methodology based on the principles of accountability, transparency, and fairness, while specifying the limitations that may be met in the context of a data protection investigation by the competent authority.

Acknowledgement
This contribution has been partially supported by the Programma Operativo Nazionale Ricerca e Innovazione 2014-2020, PON "Il danno alla persona e la giustizia predittiva".