Conversation with Bryce Wiedenbeck
A few days ago I spoke with Bryce Wiedenbeck, a CS professor at Swarthmore who teaches AI, as part of my project of assessing superintelligence risk. Bryce's views were broadly similar to Michael's: AGI is possible, it could be a serious problem, but we can't productively work on it now.