Our Group has been leading the push to update the ePrivacy Regulation and bring online privacy protection and the confidentiality of online communications up to the level required in today's complex digital world. We forged a constructive and comprehensive Parliament position and have been leading the negotiations with the Council, despite substantial opposition from the member states.
In the Artificial Intelligence Act, under the leadership of our Group, we added an obligation on companies to carry out a fundamental rights impact assessment before deploying an AI system, in order to mitigate all risks to individuals' fundamental rights in the context of the use of AI. We have also prohibited the use of AI for behavioural manipulation, social scoring and real-time biometric identification, such as facial recognition, in public spaces, unless it is strictly necessary for preventing terrorist attacks, investigating serious crimes, or locating a victim of abduction, trafficking or sexual exploitation. We have also prohibited predictive policing based solely on AI, as well as the categorisation of individuals based on their sensitive personal characteristics.
To ensure that all non-high-risk AI systems present in the Union are ethical and human-centric, our Group has ensured that there is always adequate human oversight of AI systems and that AI never replaces a human in making a final decision. We have also empowered individuals by creating an obligation to inform people when they are subject to an AI system, a right to receive a meaningful explanation, and an obligation to improve the AI literacy of individuals. This is crucial for the enforcement and democratic scrutiny of these rules.
We also introduced an explicit right to lodge a complaint and the possibility of collective redress. We are also regulating generative AI systems like ChatGPT, which can accelerate the spread of fake news and deep fakes, causing serious harm to people and endangering our democratic debates. Such AI-generated content must be clearly labelled in order to accurately inform individuals.
On the e-evidence package, our Group led the negotiations for fair and balanced legislation allowing member state authorities to request potential electronic evidence directly from service providers or to ask for its preservation. We ensured that the fundamental rights of individuals remain protected while cooperation between law enforcement and service providers is made easier. We ensured that the member state of the service provider is notified of requests for particularly sensitive personal data, unless the suspect lives in the country making the request or the offence was committed there. We negotiated clear rules on when requests can be refused, and we made sure the rules are in line with data protection legislation.
With the Digital Services Act (DSA), we have made all online platforms responsible for the content they display, in order to prevent illegal and harmful activities online and the spread of mis-, dis- and mal-information. We have banned online advertising based on profiling using sensitive data. Targeted advertising aimed at minors will no longer be allowed. Social media platforms have to be more transparent, provide information on the algorithms behind their recommender systems and offer options for alternative recommender systems.
Under the S&D's leadership, the DSA includes stronger obligations on the so-called 'very large online platforms' (VLOPs), like Facebook, Instagram, Amazon and X. VLOPs will have to assess the dissemination and amplification of illegal content and societal harms, and take effective mitigation measures. If they fail to do so, the Commission can impose heavy fines. The DSA is now being used to force VLOPs to counter the spread of fake news and disinformation, for instance Russian propaganda during the war in Ukraine or disinformation around the conflict between Hamas and Israel.
With the Digital Markets Act, the EU has aimed to regulate the digital monopolies established by big tech and to establish a fair competitive environment. It affects online search engines (Google Search), video platforms (YouTube), social media (X, TikTok) and communication platforms (WhatsApp), among many others. We have banned self-preferencing, which allowed platforms to advertise their own services over those of their competitors. We have ensured the possibility of interoperability between messenger apps – making it possible to communicate between WhatsApp and Signal, for example – and we have given regulators greater powers to break up tech giants and stop them from carrying out 'killer acquisitions'.
The S&D Group supports the creation of an EU Digital Skills Certificate to validate and recognise people’s digital qualifications and credentials across Europe. A feasibility study is ongoing and results are expected by the end of the year.
We want to guarantee the right of everyone to have their personal data protected, their online privacy respected, and not to be discriminated against through or by digital applications or artificial intelligence.
We want to update the rules on privacy in the digital age, and more specifically the confidentiality of communications and the rules regarding tracking and monitoring, to ensure that our communications are not just secure, but also private, that we are not tracked online without our knowledge or consent, and that no profiles are made of us to influence or exploit our consumption habits, our likes and dislikes, or our political and social choices for economic or political gain.
We need to create rules to promote human-centric and trustworthy Artificial Intelligence in Europe. The S&D Group aims to protect people's fundamental rights, health and safety by reinforcing safeguards, empowering people, increasing AI literacy and, where risks are too high or contrary to our values, banning certain AI systems.
We need to ensure that the rules on gathering electronic evidence strike the right balance between the needs of law enforcement and the judiciary on one hand and the protection of individuals' fundamental rights on the other. We also need to guarantee that any encroachment on privacy and the protection of personal data is strictly necessary and proportionate.
We need to stop social media companies from creating algorithmic funnels that lead their users to consume ever more radicalised, conspiratorial and outrage-inducing content.
We have to create rules for 'Big Tech' to protect users, especially vulnerable ones. Large social media and online platforms (Facebook, Instagram, X, Amazon, the App Store) have to take responsibility for the content they host and take measures to counter illegal content, goods and services.
We want to ensure a fairer Digital Single Market for start-ups and small businesses by stopping Big Tech companies such as Amazon from favouring their own services over those of others.
We want to make online marketplaces liable for the illegal products sold on their platforms, and to ensure that consumers can buy products knowing they are safe and that there is someone they can contact and, if necessary, hold responsible if they are not.
We want to ensure that digital education complements and enhances in-person education and allows for education tailored to the needs of each learner, including those from disadvantaged groups. In addition, we need to develop training opportunities for teachers and adapt curricula so that digital tools are put to their best use, while ensuring that distance and blended learning complement rather than replace in-person teaching.