Passage

Professor Stephen Hawking has warned that the creation of powerful artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilization and our species.”

Hawking was speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. “We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

While the world-renowned physicist has often been cautious about AI, raising concerns that humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own, he was also quick to highlight the positives that AI research can bring. “The potential benefits of creating intelligence are huge,” he said. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one — industrialization. And surely we will aim to finally eradicate disease and poverty. And every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.”

Huw Price, the centre’s academic director and the Bertrand Russell Professor of Philosophy at Cambridge University, where Hawking is also an academic, said that the centre came about partially as a result of the university’s Centre for Existential Risk.
That institute examined a wider range of potential problems for humanity, while the LCFI has a narrower focus.

AI pioneer Margaret Boden, professor of cognitive science at the University of Sussex, praised the progress of such discussions. As recently as 2009, she said, the topic wasn’t taken seriously, even among AI researchers. “AI is hugely exciting,” she said, “but it has limitations, which present grave dangers given uncritical use.”

The academic community is not alone in warning about the potential dangers of AI as well as the potential benefits. A number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed their concerns about the damage that a super-intelligent AI could do to humanity.

1. What did Stephen Hawking think of artificial intelligence?

A. It would be vital to the progress of human civilization.
B. It might be a blessing or a disaster in the making.
C. It might present challenges as well as opportunities.
D. It would be a significant expansion of human intelligence.

More questions

2. What did Hawking say about the creation of the LCFI?

A. It would accelerate the process of AI research.
B. It would mark a step forward in the AI industry.
C. It was extremely important to the destiny of humankind.
D. It was an achievement of multi-disciplinary collaboration.

3. What did Hawking say was a welcome change in AI research?

A. The shift of research focus from the past to the future.
B. The shift of research from theory to implementation.
C. The greater emphasis on the negative impact of AI.
D. The increasing awareness of mankind’s past stupidity.

4. What concerns did Hawking raise about AI?

A. It may exceed human intelligence sooner or later.
B. It may ultimately over-amplify the human mind.
C. Super-intelligence may cause its own destruction.
D. Super-intelligence may eventually ruin mankind.

5. What do we learn about some entrepreneurs from the technology industry?

A. They are much influenced by the academic community.
B. They are most likely to benefit from AI development.
C. They share the same concerns about AI as academics.
D. They believe they can keep AI under human control.
