Philosophy of Science and AI
In the fall of 2022, my co-author and I wrote a piece for the peer-reviewed journal AI Magazine. Here is the abstract:

Researchers are often encouraged to pursue nothing short of revolutionary advances, and those who work in artificial intelligence are no exception. However, an exclusive focus on revolutionary breakthroughs is often counterproductive in science. As Kuhn explained almost 50 years ago, dramatic breakthroughs usually rely on a foundation of less dramatic advances, which uncover anomalies and make marginal improvements to current efforts. Progress relies on an essential tension between convergent and divergent thinking, each being a complementary aspect of the same process. We argue that an overemphasis on, and exclusive rewarding of, divergent thinking in contemporary AI—whether in the form of rejecting funding for nontransformative research, or peer-review criteria rejecting papers for lack of novelty—is counterproductive to artificial intelligence and machine learning research, and may even be fundamentally harmful to progress in the field. To reckon with this problem, we recommend increasing funding for iterative improvement of theories, better guidance for reviewers, and more transparency in public funding.
