Navigating the Intersection of AI and PETs (Privacy Enhancing Technologies)

We’re not talking about robotic cats and dogs; we’re referring to two newly emerging technologies: generative artificial intelligence (AI) and privacy-enhancing technologies (PETs).

Artificial intelligence and privacy-enhancing technologies have been high on the agenda in recent months at the Information Commissioner’s Office (ICO), which discussed both in a June publication on its website and again in its July newsletter.

The ICO has called on businesses to hold back before adopting generative AI and to consider the associated privacy risks, while at the same time launching new PETs guidance. The guidance is aimed at data protection officers (DPOs) and others who use large personal data sets in finance, healthcare and research, as well as in central and local government.

The G7 Summit

During the G7 Summit, held in May 2023, global leaders highlighted AI and its governance, particularly in relation to privacy, as one of the key areas requiring a strong alliance, aligned with the fundamental values of protecting democracy and human rights. The Center for Strategic and International Studies provides an excellent overview of the outcome of the meetings in Hiroshima. In summary, the G7 Summit Communiqué made the following key points:

 

  1. AI development should align with shared values and principles, and with a shared view of the risks, mirroring the basic principles listed in the Organisation for Economic Co-operation and Development (OECD) AI Principles. The “Hiroshima AI Process” will be launched later in the year to address the importance of international collaboration.
  2. Looking forward, the G7 emphasised the importance of interoperability between AI governance frameworks and of implementing a “risk-based” approach. For example, the EU’s AI Act proposes a set of risk categories. Further to this, there was some consideration of support for international technical standards such as those from ISO/IEC.

The UK’s Information Commissioner, John Edwards, was at the G7 round table in Tokyo during the summit and discussed data protection with other G7 data privacy authorities, considering the following topics:

 

  • Data Free Flow with Trust (DFFT)
  • Enforcement cooperation
  • The risks associated with emerging tech, specifically those around generative AI from a data protection and privacy perspective.

But What Are PETs?

And how do PETs relate to the emerging risks that AI presents?

Privacy-enhancing technologies, aka PETs, enable organisations to use innovative and trustworthy applications, like generative AI, while also allowing them to share, link and analyse people’s personal information without having direct access to it.

PETs essentially help organisations demonstrate a ‘data protection by design’ approach to the processing of personal information. They can help with compliance by aligning with the data minimisation principle, and can also afford an appropriate level of security for the data being processed.
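
To make the idea concrete, here is a minimal, purely illustrative sketch of one well-known PET technique, differential privacy, in Python. The dataset, function name and epsilon value are hypothetical and are not taken from the ICO guidance; the point is simply that an analyst can publish a useful aggregate (a count) while the added noise limits what can be learned about any single individual.

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Return a differentially private count of records matching a predicate.

    A counting query has sensitivity 1, so Laplace noise with scale
    1/epsilon is enough to mask the contribution of any one person.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many patients in a small dataset are over 65?
patients = [{"age": 72}, {"age": 41}, {"age": 68}, {"age": 59}]
print(dp_count(patients, lambda p: p["age"] > 65, epsilon=0.5))
```

The analyst sees only the noisy count, never the individual records, which is the data minimisation idea in miniature.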

However, there still needs to be a significant amount of oversight when implementing PETs. They are not a silver bullet, and so should be used alongside other technical methods designed to protect sensitive and personal information.

PETs involve the processing of data, so any due diligence should include a data protection impact assessment (DPIA) to establish whether the PETs selected are appropriate to mitigate the risk.

As always, caution is advised

Not all PETs offer the same effective level of anonymisation.

It is beneficial to do your due diligence with any new technology.

Do you need help understanding how AI might impact your organisation?

Or maybe you need a virtual DPO to assist with any data privacy concerns? Please contact the team on 01926 800710 for a chat or send us a message at https://www.riskevolves.com/get-in-touch

NB: No animals, AI or otherwise, were harmed in the making of this article.

MD of Risk Evolves, Helen has worked in the IT industry since 1986. She is a leader in risk management and operational improvement, and works with companies in senior governance, risk and compliance roles. She is a member of the British Standards Institution (BSI) and sits on the BSI committee creating a new guidance standard to help organisations become cyber resilient. Helen and the team at Risk Evolves work with organisations to improve their resilience through stronger process implementation and better communication and education of staff.
