Entities
Incident Stats
CSETv0 Taxonomy Classifications
Problem Nature
Specification, Robustness, Assurance
Physical System
Software only
Level of Autonomy
High
Nature of End User
Amateur
Public Sector Deployment
No
Data Inputs
Wikipedia articles, edits from other bots
CSETv1 Taxonomy Classifications
Incident Number
7
AI Tangible Harm Level Notes
It is unclear whether any of the Wikipedia bots under study rely on machine learning technology, but it is unlikely. No one experienced any harm.
Special Interest Intangible Harm
No
Date of Incident Year
2001
CSETv1_Annotator-1 Taxonomy Classifications
Incident Number
7
CSETv1_Annotator-3 Taxonomy Classifications
Incident Number
7
AI Tangible Harm Level Notes
It is unclear if any of the Wikipedia bots studied rely on machine learning technology, but it is unlikely.
Special Interest Intangible Harm
No
Date of Incident Year
2001
Estimated Date
No
Multiple AI Interaction
Yes
Incident Reports
For many it is no more than the first port of call when a niggling question raises its head. Its pages hold answers to mysteries ranging from the fate of male anglerfish and the joys of dorodango to the improbable death of Aeschylus.
But ben…
Analysis: An investigation into Wikipedia bots has confirmed the automated editing software can be just as pedantic and petty as humans are – often engaging in online spats that can continue for years.
What's interesting is that bots behave …
It turns out Wikipedia's automated edit 'bots' have been waging a cyber-war between each other for over a decade by changing each other's corrections -- and it's getting worse.
Researchers at the University of Oxford in the United Kingdom r…
Wiki Bots That Feud for Years Highlight the Troubled Future of AI
The behavior of bots is often unpredictable and sometimes leads them to produce errors over and over again in a potentially infinite feedback loop.
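The feedback loop described here is easy to reproduce in miniature. The sketch below is a hypothetical toy model, not code from the Oxford study; the function and bot names are invented for illustration. Two rule-based bots with conflicting preferred spellings keep "correcting" the same article and never converge, so the loop ends only when an edit cap is hit.

```python
# Toy model of a bot revert war (illustrative only, not from the study).
def simulate_revert_war(bot_a_value, bot_b_value, max_edits=10):
    """Alternate edits between two bots that each enforce their own value."""
    article = bot_a_value
    history = [("bot_a", article)]
    for i in range(max_edits):
        # Each bot "fixes" the article back to its own preferred value.
        bot, value = (("bot_b", bot_b_value) if i % 2 == 0
                      else ("bot_a", bot_a_value))
        if article != value:
            article = value
            history.append((bot, article))
    return history

if __name__ == "__main__":
    for editor, text in simulate_revert_war("colour", "color"):
        print(f"{editor} set the article text to {text!r}")
    # Neither bot ever yields; the loop stops only because of max_edits.
```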
No one saw the crisis coming: a coordinated vandalistic effort to insert Squidward references into articles totally unrelated to Squidward. In 2006, Wikipedia was really starting to get going, and really couldn’t afford to have…
Science fiction is lousy with tales of artificial intelligence run amok. There's HAL 9000, of course, and the nefarious Skynet system from the "Terminator" films. Last year, the sinister AI Ultron came this close to defeating the Avengers, …
Variants
Similar Incidents
All Image Captions Produced are Violent
AI Beauty Judge Did Not Like Dark Skin
Amazon Censors Gay Books