September 2024 data protection round up

Latest UK News

  1. Framework Convention on Artificial Intelligence. On 5th September 2024, Lord Chancellor Shabana Mahmood signed the Council of Europe Framework Convention on Artificial Intelligence, marking the UK’s commitment to the first legally binding international AI treaty. The Convention aligns with the EU AI Act and aims to bolster protections for human rights, democracy, and the rule of law by establishing a unified approach to AI systems. Participating countries must demonstrate compliance by enacting domestic measures for transparency and risk mitigation. The UK plans to update its regulations to meet these obligations, but specific details have not yet been disclosed.
  2. Report published into data controllers’ use of personal data. The ICO has released the findings of its Data Controller Study, which aims to improve its understanding of how organisations gather and use personal data, thereby informing its regulatory decisions. Among the 2,280 organisations surveyed:
    – 93% collect personal data directly from their customers
    – 64% are very or fairly familiar with data protection laws
    – 59% were aware of the ICO before participating in the study
    – 52% consider processing personal data essential to their core business functions
    – 37% process data to support service or product delivery
  3. Report published on PET adoption. A new report from the ICO outlines the challenges organisations encounter when adopting privacy-enhancing technologies (PETs). It identifies key barriers such as legal uncertainty, insufficient technical expertise, high implementation costs, and a lack of understanding of the risks and benefits. To address these issues, the report recommends:
    – Providing training to raise awareness of PETs
    – Developing cost-effective solutions
    – Promoting collaboration between developers and data-sharing organisations
    – Seeking support from funding bodies
    – Seeking detailed compliance guidance from regulators
  4. Meta resumes plans to use UK users’ data to train AI. Meta has decided to resume its plans to use Facebook and Instagram user data to train generative AI, following a pause requested by the ICO. Since the pause in June, Meta has revised its approach, making it easier for users to object to data processing and extending the time frame for objections. The ICO stated: “We have been clear that any organisation using its users’ information to train generative AI models needs to be transparent about how people’s data are being used. Organisations should put effective safeguards in place before they start using personal data for model training, including providing a clear and simple route for users to object to the processing. The ICO has not provided regulatory approval for the processing and it is for Meta to ensure and demonstrate ongoing compliance.”
  5. Memorandum of Understanding. The ICO has signed a Memorandum of Understanding (MoU) with the National Crime Agency (NCA) to enhance the UK’s cyber resilience. This collaboration aims to help organisations across the country better protect themselves from data theft and ransomware attacks. The MoU underscores the ICO’s commitment to closer cooperation with the NCA, ensuring that organisations are directed to the appropriate bodies and encouraged to report cybercrime promptly.
  6. Transport for London warns of customer data breach. Transport for London (TfL) has alerted the public that some customer data were accessed during a recent cybersecurity breach. The transport authority first noticed “suspicious activity” on 1st September 2024, and later confirmed that “certain customer data,” including names and contact details, had been compromised. Additionally, bank details of up to 5,000 passengers may have been accessed through refund data from TfL’s Oyster contactless payment system. A 17-year-old boy has been arrested as part of the National Crime Agency’s investigation into the incident.

Latest European News

  1. Guidance on DMA and GDPR interplay. The European Commission (EC) and the European Data Protection Board (EDPB) have announced their collaboration to develop guidance on the interplay between the Digital Markets Act (DMA) and the GDPR. They emphasise that creating a coherent interpretation of both regulations, while respecting each regulator’s competencies where the GDPR applies and is referenced in the DMA, is essential for effectively implementing these frameworks and achieving their complementary objectives. This enhanced cooperation will build on the existing engagement through the High-Level Action Group for the DMA.
  2. EU Data Protection Supervisor. The EC is preparing a shortlist of potential nominees to succeed Wojciech Wiewiórowski as European Data Protection Supervisor, with his five-year term ending in December. Hielke Hijmans, Chairman of the Litigation Chamber and member of the Board of Directors of the Belgian SA, is a candidate representing the Netherlands. Sources indicate that Poland’s Wiewiórowski is also seeking re-election for another term.
  3. Irish proceedings against Grok dismissed. The Irish High Court has dismissed proceedings against X concerning the company’s use of personal data for its AI tool, ‘Grok’. The Data Protection Commission (DPC) had initiated urgent action in August over concerns about X’s processing of personal data from public posts of EU/EEA users for training Grok. X subsequently agreed to suspend this data processing. Based on X’s commitment to permanently uphold this suspension, the High Court has struck out the proceedings. The DPC welcomed the decision but has also requested an opinion from the European Data Protection Board (EDPB) under Article 64 GDPR. This request asks the EDPB to consider the extent to which personal data are processed during various stages of AI model training and operation. Commissioner Dale Sunderland stated: “The DPC hopes that the resulting opinion will enable proactive, effective and consistent Europe wide regulation of this area more broadly. It will also support the handling of a number of complaints that have been lodged with/transmitted to the DPC in relation to a range of different data controllers, for purposes connected with the training and development of various AI models.”
  4. Clearview AI fined €30.5 million by Dutch regulator. The Dutch Supervisory Authority has fined US company Clearview AI €30.5 million for illegally creating a database with ‘unique biometric codes’ linked to collected photos. The company also allegedly failed to provide individuals whose faces were in the database with sufficient information about how their images and biometric data were used. The authority is investigating whether company directors can be held personally responsible for these violations, as Clearview has not changed its practices despite previous fines from other authorities, including the ICO. Clearview responded to the fine, calling the decision “unlawful, devoid of due process, and unenforceable,” arguing that it does not have a place of business, customers, or activities in the Netherlands or the EU that would subject it to the GDPR.
  5. Russian military intelligence behind cyber-attacks. Estonia has disclosed that Moscow orchestrated a series of cyber-attacks on several Estonian ministries in 2020. Four years after the attacks on the IT services of ministries, including the foreign ministry, Tallinn has identified members of Unit 29155 of Russia’s military intelligence service as the culprits. This marks the first time Estonia has attributed a cyber-attack against the state to a specific perpetrator.
  6. Inquiry launched into Google AI model. Ireland’s Data Protection Commission has initiated a statutory inquiry into Google’s Pathways Language Model 2 (PaLM 2), the core model behind Google’s text and image-generation services. The inquiry aims to determine whether Google failed to fulfil its obligation to conduct a Data Protection Impact Assessment, as required by Article 35(2) of the GDPR, before processing the personal data of EU/EEA data subjects.
  7. Germany enacts new requirements for health data processing. Germany has introduced stricter requirements for processing health-related personal data using cloud computing services, following updates to Section 393 of the German Social Code V. These updates aim to enhance security and establish uniform standards within the healthcare sector. Key highlights of the new Section 393 include:
    – Health data can only be processed in Germany, the European Economic Area (EEA), or third countries with a European Commission (EC) adequacy decision.
    – Cloud computing service providers processing data in third countries must have a residence in Germany.
    – Standard Contractual Clauses (SCCs) and Binding Corporate Rules (BCRs) are no longer considered adequate guarantees for global companies providing cloud computing services in non-adequate third countries.
    – Cloud systems must maintain a current C5 (Cloud Computing Compliance Criteria Catalogue) certificate.
  8. Companies ordered to resolve deceptive cookie banners. The Belgian Supervisory Authority has mandated that four major Belgian news sites—De Standaard, Het Nieuwsblad, Het Belang van Limburg, and Gazet van Antwerpen—update their cookie banners to comply with GDPR. These sites must add a ‘reject’ button to the first layer of their cookie banners and modify the currently misleading colour scheme of the buttons.
  9. Online Health Taskforce. On 4th September 2024, Ireland’s Minister for Health, Stephen Donnelly, launched the Online Health Taskforce. This initiative aims to address the physical and mental health issues affecting children and young people due to certain online activities. The taskforce will investigate the sources of these harms and offer strategic recommendations, which may include national guidelines, legislation, enhanced health and social care support, and further education.

Latest International News

  1. FBI’s data handling practices criticised. A recent audit by the US Department of Justice (DOJ) has criticised the Federal Bureau of Investigation (FBI) for significant weaknesses in managing and disposing of electronic storage media. The audit revealed that removable devices, such as USBs and internal hard drives, were often left unguarded and improperly labelled, posing a risk of exposing sensitive information. The DOJ emphasised the need for the FBI to enhance its data protection measures to comply with federal regulations and safeguard its information assets. The report recommends implementing stricter access controls and regular staff training to mitigate these risks.
  2. FTC settlement with security camera firm. The US Federal Trade Commission (FTC) has announced a proposed settlement with Verkada, a security camera firm, over alleged data security failures and violations of the 2003 CAN-SPAM Act (Controlling the Assault of Non-Solicited Pornography And Marketing Act). According to the FTC’s complaint, Verkada’s inadequate information security practices allowed a hacker to access internet-connected security cameras, exposing patients in sensitive locations like psychiatric hospitals and women’s health clinics. Under the proposed order, Verkada must implement a comprehensive information security programme and pay a $2.95 million penalty.
  3. New Hampshire Data Privacy Unit. New Hampshire Attorney General John M. Formella has announced the establishment of a Data Privacy Unit within the Consumer Protection and Antitrust Bureau. This unit will primarily enforce compliance with the New Hampshire Data Privacy Act, which takes effect on 1st January 2025. Additionally, the unit will develop a series of FAQs to help consumers and businesses understand their rights and responsibilities under the Act. For applicable businesses, these responsibilities include:
    – Limiting the collection of personal data to what is relevant and necessary for the disclosed purposes
    – Establishing robust security measures to protect consumers’ personal data
    – Implementing clear and accessible privacy notices
    – Obtaining informed consent from consumers and providing opt-out preference signals
    – Conducting data protection assessments for high-risk activities
  4. China publishes AI governance framework. On 9th September 2024, China’s National Cybersecurity Standardisation Technical Committee released an Artificial Intelligence Safety Governance Framework. The Framework aims to ensure the safe and ethical development and application of AI technologies. It aligns with transparency and risk management principles found in other international regulations, such as the EU AI Act, reflecting a global trend towards responsible AI governance that balances innovation with safety. The Framework identifies inherent safety risks related to algorithms, data, and AI systems, as well as additional risks associated with AI applications. It also provides technological risk mitigation measures and safety guidelines for four groups: model algorithm developers, AI service providers, users in key areas, and general users.

To find out more about Moore ClearComm and how our team of industry specialists can help your organisation, contact us today: info@mooreclear.com
