Sylvester Tremmel | Article in c’t 10/2023 p. 26

 

Controlled by others.

How prompt injections can corrupt AI search engines.

Click here for the article in c’t

Quoted from c’t:

Language models that paraphrase search results are complex computing systems that work with uncertain inputs. Simply hoping that everything will go well is naive. Fraudsters could use prompt injections to persuade AIs to make arbitrary statements without being noticed.

  • Language models that interpret external content can be forced to give unwanted responses by manipulating input.

  • More and more users are gaining access to potentially vulnerable systems.

  • Whether and how you can protect yourself against prompt injection is the subject of current research.