Promising Practice

Application of AI for the early recognition of criminal offences related to hate crime

KISTRA – Einsatz von künstlicher Intelligenz zur Früherkennung von Straftaten (Use of artificial intelligence for the early detection of criminal offences)
Research project exploring the necessary ethical and legal requirements and concrete technical solutions that enable the police to use artificial intelligence tools to counter hate crime.
Country
Germany
Type
Improve recording and data collection
Category
Flagging potential hate crimes

Organisation

Central Office for Information Technology in the Security Sector (Zentrale Stelle für Informationstechnologie im Sicherheitsbereich, ZITiS).

Start and end date

Start date:

July 2020.

End date:

June 2023.

Scope

National.

Target group(s)

Federal and state police authorities.

Funding

National (Federal Ministry of Education and Research) – €3 million.

Objectives

  • Carry out research into the technical possibilities of artificial intelligence (AI) and the ethical, legal and social framework conditions for the responsible use of AI by police authorities to recognise, prevent and prosecute hate crime.
  • Classify hate crime by means of AI according to the criminal code.
  • Classify hate crime by topic.

Outputs

  • Report (to be finalised in Q3 2023, plus annual intermediate reports).
  • Research papers.
  • Technical tool (prototype, to be finalised by Q2 2023).

Description

This research project investigates, within a holistic framework, the use of AI by police organisations to detect, prevent and prosecute hate crime. It analyses the organisational, ethical and legal framework necessary for the police to adopt AI to effectively prevent and prosecute hate crime. This framework, combined with an analysis of the effects of online hate crime on society, is used to define the technical conditions necessary for employing AI to classify hate speech according to:

  • the criminal code
  • the phenomena of hate speech
  • the topics of hate speech.

The project aims to produce a series of reports outlining the conditions that would enable the police to adopt AI to counter hate crime, and to investigate the necessary technical approaches and potential solutions. The technical results will be demonstrated in a prototype that the police will evaluate. Together, these analyses can be used to implement AI as a supporting tool for the police to prevent, reduce and prosecute hate crime committed on the internet.
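The source does not describe the technical architecture of the KISTRA prototype. Purely as an illustrative sketch of the kind of multi-dimension text classification the description refers to (classifying a post according to the criminal code, the phenomenon and the topic), the following Python snippet trains one simple classifier per dimension with scikit-learn. The example posts, label sets and library choice are assumptions made for illustration only, not details of the project's tool.

# Illustrative sketch only: a simple three-dimension text classifier,
# not the KISTRA prototype. Posts and labels are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples, each annotated along the three dimensions
# named in the description (criminal code, phenomenon, topic).
posts = [
    "example post containing an incitement to violence",
    "example post with a derogatory slur against a group",
    "example post discussing a news article neutrally",
]
labels = {
    "criminal_code": ["incitement", "insult", "not_relevant"],
    "phenomenon": ["threat", "dehumanising_language", "none"],
    "topic": ["origin", "religion", "politics"],
}

# Train one classifier per classification dimension on TF-IDF features.
classifiers = {}
for dimension, y in labels.items():
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=1),
        LogisticRegression(max_iter=1000),
    )
    model.fit(posts, y)
    classifiers[dimension] = model

# Classify a new post along all three dimensions.
new_post = ["example post with a threatening statement"]
for dimension, model in classifiers.items():
    print(dimension, "->", model.predict(new_post)[0])

In practice, a tool of this kind would serve only to flag content for human review; the classification against the criminal code remains a legal assessment made by police and prosecutors.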

Critical success factors

As the project has not yet been completed, an analysis of critical success factors has not been undertaken.

Actors involved in the design and implementation of the practice

Nine funded partners from public entities, universities and private enterprise are involved in the project, and a further four associated partners participate in its design and implementation. From the security sector, the following partners are involved:

  • Federal Criminal Police Office (Bundeskriminalamt)
  • ZITiS (lead partner)
  • State Criminal Police Office of North Rhine-Westphalia (Landeskriminalamt Nordrhein-Westfalen)
  • State Criminal Police Office Berlin (Landeskriminalamt Berlin)
  • Police Presidium Munich (Polizeipräsidium München).

The following university partners are involved:

  • University of Duisburg-Essen
  • Technical University of Darmstadt
  • Ludwig Maximilian University of Munich
  • Technical University of Berlin
  • RWTH Aachen University (Rheinisch-Westfälische Technische Hochschule Aachen)
  • Ruhr University Bochum.

The private firm Munich Innovation Labs is responsible for the technical implementation of the AI solutions.

Monitoring and evaluation

No monitoring or evaluation has yet taken place.

Publicly available contact details

ZITiS Website

Project information 

Department of Big Data Analytics
Email: bd@zitis.bund.de