Artificial Intelligence in Education: Addressing the tensions between innovation and privacy through guidance

The Information Commissioner's Office (ICO) has just released guidance which is important to all those developing and promoting products powered by artificial intelligence (AI).

The use of AI is a driving force in the continuing development of online learning and assessment products, with many businesses looking to make their offerings as competitive as possible, and worried about being left behind in the race to deliver greater sophistication in products offered to schools and colleges.

Innovative businesses behind the development of AI applications point to the enormous advantage for our education system: the inclusion of AI-driven tools can form the basis of a much more customised learning experience for the pupil. AI-based solutions can give the pupil access not just to a teacher's critique and signposting of further tasks, but also to the experience of many others who have followed similar learning pathways and been confronted by similar challenges.

The growing credibility of AI products in assessment processes can offer an answer to the vexed issue of teacher workload. And yet the use of AI in any context where personal data is gathered, processed and aggregated with other data, with limited or even no human intervention, is viewed with suspicion and concern by many, particularly privacy campaigners.

It is against this background that the ICO has been gearing up to respond to the growing use of AI across numerous sectors, including education. A knowledgeable team with a solid understanding of the technology, working also with the Turing Institute, ensures that the ICO is already well placed to address matters of data protection compliance where AI processes are engaged.


A number of important publications are now being developed

In addition to regulatory work such as enforcement, the ICO has a responsibility to promote good practice in the protection and processing of personal data, and it is with this brief in mind that it is developing publications intended to inform the use of AI in processing personal data.

First to see the light of day was an extensive statement of the auditing tools and procedures the ICO will use in its regulatory work. Last week saw the publication of detailed guidance on AI and data protection, the subject of this briefing.

To follow is a toolkit which the ICO expects will support information governance within organisations that develop or buy in AI-based solutions, whether for their own business or that of their customers.

The guidance is not statutory in status but is to be regarded as good practice in the context of data protection compliance. There may be circumstances in which an organisation can justifiably deviate from it in its practices, but the business will need to be ready to account for the deviation and able to demonstrate its thinking at the time the deviation was decided upon.

The guidance is also intended to signpost AI developers towards data protection law, enabling a developer, and those in a governance role (also an intended audience for the guidance), to identify the all-important legislative compliance requirements which AI should not override.


Where do data protection risks lie in AI?

The guidance has particular value in its identification of the issues which should be considered when engaging with developers to bring AI into product development. A surprising number of issues can be encountered, depending, of course, on the nature of the product in development and the precise ways in which personal data may, in its processing, be tested against data already held. Some of these are less obvious to anyone with only a day-to-day working knowledge of data protection legislation. Here are some examples of the less obvious issues which AI presents.


AI can actually add to the amount of data you control

AI frequently leads to the creation of new elements of personal data. Not all AI processes lead to outcomes which fall to be regarded as personal data, but those that do fall immediately within the scope of protection, with the usual requirements of transparency around their existence and intended use.


AI can close off opportunities for an individual

Where an AI process leads to recommendations about a particular individual, then even if the processing requires an element of human intervention, the role of AI can be of huge significance.

For example, if an AI-based outcome presented to a teacher or a careers advisor leads to the elimination of options which could have been recommendations for a future learning task, or a plan to attain further qualifications, then the AI-based approach will need to be demonstrably robust. Issues around the risk of bias, particularly in the context of equalities legislation, fall within the ambit of what the ICO is likely to consider.


Particular care will be required in the area of special category data

Certain types of information, such as health, sexual orientation and religious affiliation, are treated as special category data, requiring particular care and, in some cases, conferring additional rights on the data subject.

AI applications may bring into play considerations around special category data. If an application designed to track eye movement as the data subject reads a textbook is capable of revealing cognitive issues, the data created would fall into special category status.


Buying in ready-made AI solutions does not absolve an organisation from responsibility

As is perhaps to be expected, adopting an AI solution whose data processing engine has been developed by another organisation does not absolve the purchasing organisation from responsibility. The guidance mentions at numerous points the role of Data Protection Impact Assessments, to be developed throughout the journey to implementation of a new product. The product's previous history, and how risks have been managed in its development, should be scrutinised as a matter of due diligence and reflected in the assessment undertaken.


Is the guidance written for me?

The ICO treats AI as one of its top three regulatory priorities, making the issue of great importance to those involved in introducing AI processes into learning and assessment experiences. Accessing and developing an understanding of those aspects of the guidance relevant to any particular role should be of enormous value, provided, of course, that the guidance is comprehensible to all its readers.

The guidance is well written, reflecting the authors' strong understanding of AI and of the processes involved when AI-based solutions are developed and implemented. It is likewise a valuable assessment of the data protection compliance issues relevant to AI.

An important test of the guidance's value is the bridge it should create: taking developers through the legal compliance requirements which must be satisfied, while giving data protection officers, legal counsel and others concerned with governance a basis for interpreting the implications of the technical work to which they will be required to relate any compliance advice.

In Freeths’ response to the consultation version of this guidance, we drew attention to this challenge, and we see it remaining in this extensive and complex publication. At the same time, the authors have endeavoured to present the guidance in as accessible a form as possible.

The Freeths Data Protection Team is taking a particular interest in this important area of business activity and can advise on matters of critical importance, including the updating of fair processing notices, the preparation of Data Protection Impact Assessments and responding to data subject access requests.

Click here to view the Guidance