Gerrit Hornung

Künstliche Intelligenz zur Auswertung von Social Media Massendaten [Artificial Intelligence for the Analysis of Social Media Mass Data]

Section: Articles
Volume 147 (2022) / Issue 1, pp. 1-69
Published 7 November 2022
DOI 10.1628/aoer-2022-0002
Abstract
The development and improvement of Artificial Intelligence (AI) systems open up new opportunities for many stakeholders across a great variety of government and economic sectors. This also applies to security authorities, who hope that »intelligent« data analysis will allow them to improve situational awareness, gain insights into behavior patterns, interdependencies and causal relationships, and target preventive or repressive measures more precisely. Public social media data is particularly suitable for such AI analysis (»social media intelligence«, SOCMINT), as it can often be accessed on a large scale through technical interfaces and contains a great deal of information about actions, attitudes and social relationships, often covering very long periods of time. However, this benefit comes at the price of a significantly more intense interference with fundamental rights than targeted individual measures bring about. The public nature of the data does not deprive it of fundamental rights protection. There is also an interference with fundamental rights in so-called non-hit cases, since the authority's interest extends to all persons involved in the analysis. The intensification of data analysis through the use of AI will typically rule out the possibility of security authorities basing such measures on general clauses. Rather, there is a need for clearly defined regulations that guarantee technically adequate protection of fundamental rights and provide for specific procedural safeguards. There are very few examples of such provisions, as German law traditionally does not specifically regulate the phase of data analysis but assumes that the statutory permissions for data collection also cover the subsequent processing, interpretation and analysis. However, this is no longer justifiable if the specific weight of the interference with fundamental rights is determined by the AI-based analysis. If the legislature wants to allow such analyses in the future, it must provide for specific protective instruments. Concrete approaches and possibilities for this can be found in the provisions of Directive (EU) 2016/680 (on data protection for law enforcement purposes), which must be observed in any case but have so far been implemented only in rather general terms. Introducing more specific measures could be fruitful, particularly as regards data protection impact assessments, measures to safeguard the rights of data subjects, and enhanced transparency (»explainable AI«).