Summaries
Today’s “machine-learning” systems, trained by data, are so effective that we’ve invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem.
Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands.
The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software.
In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel. In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they—and we—succeed or fail in solving the alignment problem will be a defining human story.
The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture—and finds a story by turns harrowing and hopeful.
From the blurb of the book The Alignment Problem (2020)
This book mentions ...
People: Solon Barocas, George E. P. Box, Joy Buolamwini, Geoffrey Hinton, William MacAskill, Elon Musk, Andrew D. Selbst, Sebastian Thrun
Statements: All Models Are Wrong, Some Are Useful; Machine learning can reinforce and perpetuate existing biases and injustices
Terms: AlexNet, algorithm, Apple Watch, blind spot, e-learning, false positive rate, gender, gender bias, complexity, artificial intelligence (AI), machine learning, autonomous car, uncanny valley, word embedding
5 Mentions
- Prediction Machines - The Simple Economics of Artificial Intelligence - Updated Edition (Ajay Agrawal, Joshua Gans, Avi Goldfarb) (2022)
- Pause Giant AI Experiments - An Open Letter (Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari) (2023)
- You & AI - Alles über Künstliche Intelligenz und wie sie unser Leben prägt (Anne Scherer, Cindy Candrian) (2023)
  - 6. Leitplanken der KI - Wie wir KI auf Kurs halten
- Alles überall auf einmal - Wie Künstliche Intelligenz unsere Welt verändert und was wir dabei gewinnen können (Miriam Meckel, Léa Steinacker) (2024)
  - 9. Das ethische Spiegelkabinett - Wenn KI Werte nachahmt
- Nexus - Eine kurze Geschichte der Informationsnetzwerke von der Steinzeit bis zur künstlichen Intelligenz (Yuval Noah Harari) (2024)
Co-cited books
Full text of this document
The Alignment Problem: entire book as full text (1069 kByte)
The Alignment Problem: entire book as full text (3105 kByte)
Bibliographic information
Beat and this book
Beat added this book to the Biblionetz during his time at the Institut für Medien und Schule (IMS). Beat does not own a physical copy, but he does own a digital one (which, for copyright reasons, he may not simply pass on). So far, only a few objects in the Biblionetz cite this work.