Transformative Research Focus Considered Harmful
Researchers are often encouraged to pursue nothing short of revolutionary advances, and those who work in artificial intelligence are no exception. However, an exclusive focus on revolutionary breakthroughs is often counterproductive in science. As Kuhn explained almost 50 years ago, dramatic breakthroughs usually rest on a foundation of less dramatic advances that uncover anomalies and make incremental improvements to current efforts. Progress depends on an essential tension between convergent and divergent thinking, two complementary aspects of the same process. We argue that an overemphasis on, and exclusive rewarding of, divergent thinking in contemporary AI, whether in the form of funding agencies rejecting proposals for nontransformative research or peer-review criteria that reject papers for lack of novelty, is counterproductive and may even be fundamentally harmful to progress in artificial intelligence and machine learning research. To address this problem, we recommend increased funding for the iterative improvement of theories, better guidance for reviewers, and greater transparency in public funding.
Copyright (c) 2022 Michael Cooper, John Licato
This work is licensed under a Creative Commons Attribution 4.0 International License.