
I'm writing about a computer system that relies on Artificial Intelligence and the threats this may introduce. One threat vector, for example, is to seed a Bayesian AI system with content in order to skew its output.
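To make the threat concrete, here is a toy sketch of what I mean by seeding. The scenario and all numbers are invented purely to illustrate the mechanism: an attacker floods a Bayesian system with fabricated observations, and because the system can't tell fact from fiction, its posterior drifts toward whatever conclusion the attacker wants.

```python
def bayes_update(prior: float, p_obs_given_h: float, p_obs_given_not_h: float) -> float:
    """Posterior belief in hypothesis H after one observation (Bayes' theorem)."""
    numerator = p_obs_given_h * prior
    return numerator / (numerator + p_obs_given_not_h * (1.0 - prior))

# The system starts out correctly skeptical of the attacker's claim H.
belief = 0.05

# Each seeded item is only weakly suggestive of H (twice as likely under H
# as under not-H), but the system treats every one as an independent,
# trustworthy fact.
for _ in range(10):
    belief = bayes_update(belief, p_obs_given_h=0.6, p_obs_given_not_h=0.3)

print(f"posterior after seeding: {belief:.3f}")  # ~0.982, near certainty
```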

Question

Assuming that AI can't tell the difference between fact and fiction, what is the term for intentionally biasing the system so that a desired result is achieved?

  • "Hacking" is too broad

  • "phishing" is about misleading humans to disclose information.

  • "Fox-News-ing" humans seem to be the most appropriate analogy, but I need a term the already exists... or I can invent one, given that I define the term in the beginning of the paper.

My draft uses the term "Foxing A.I." for now.

makerofthings7

1 Answer


This question may be better suited for English.SE.

Disinformation is defined as the act of deliberately giving someone incorrect information, often with the purpose of changing the target's actions or behavior. This is not specific to humans: disinformation applies equally to an artificial intelligence that implicitly trusts its input and will act on it. The term carries a nefarious connotation.

An example could be telling a military tactical advisor computer that its country's adversaries have just amassed a large quantity of weapons of mass destruction. Despite being programmed well, the machine may conclude that the best course of action is a massive preemptive strike.

Garbage In, Garbage Out (GIGO) is a term from analytics and logic, very often applied to computer programs, which essentially means that the conclusion an algorithm reaches is unsound if its premises are flawed. The term was coined by analogy with the scheduling terms First In, First Out (FIFO) and Last In, First Out (LIFO). Phrased another way: no matter how sophisticated a computer is, its output is only as accurate as the information given to it.
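A minimal sketch of GIGO, using a hypothetical temperature-sensor scenario (all values invented): the averaging logic below is perfectly sound, yet a single garbage reading makes its conclusion worthless.

```python
def mean(readings: list[float]) -> float:
    """Arithmetic mean -- a correct, bug-free algorithm."""
    return sum(readings) / len(readings)

good_data = [20.1, 19.8, 20.3, 20.0]   # plausible room temperatures (Celsius)
poisoned  = good_data + [400.0]        # one bogus reading slipped in

print(mean(good_data))  # 20.05 -> sensible conclusion
print(mean(poisoned))   # 96.04 -> garbage out, despite correct code
```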

An example of this could be providing a utilitarian political advisor supercomputer with incorrect utility values, so that its felicific calculus yields an incorrect or even horrific answer despite the machine's attempt to improve the quality of human life. If you tell such a computer that people get a huge sense of thrill out of being robbed, it may very well suggest reducing the budget for fighting crime.
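A sketch of that advisor scenario, with a hypothetical policy model and invented numbers: the expected-utility calculation itself is mathematically sound, but feeding it the lie that being robbed is pleasant flips its recommendation.

```python
# utility[outcome] = assumed well-being change per affected person
true_utilities = {"no_robbery": +5.0, "robbery": -8.0}
fed_utilities  = {"no_robbery": +5.0, "robbery": +9.0}  # the seeded lie

def expected_utility(p_robbery: float, utility: dict[str, float]) -> float:
    """Felicific-calculus-style expected utility for a given robbery rate."""
    return (1 - p_robbery) * utility["no_robbery"] + p_robbery * utility["robbery"]

for name, utility in [("true values", true_utilities), ("fed values", fed_utilities)]:
    keep = expected_utility(0.05, utility)  # keep the crime-fighting budget
    cut  = expected_utility(0.40, utility)  # cut it, robberies rise
    choice = "cut the budget" if cut > keep else "keep the budget"
    print(f"{name}: {choice} (keep={keep:.2f}, cut={cut:.2f})")

# true values: keep the budget (keep=4.35, cut=-0.20)
# fed values:  cut the budget  (keep=5.20, cut=6.60)
```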

forest