Multiple-choice question

    Professor Stephen Hawking has warned that the creation of powerful artificial intelligence (AI) will be “either the best, or the worst thing, ever to happen to humanity”, and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilization and our species.”

    Hawking was speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. “We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

    While the world-renowned physicist has often been cautious about AI, raising concerns that humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own, he was also quick to highlight the positives that AI research can bring. “The potential benefits of creating intelligence are huge,” he said. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one — industrialization. And surely we will aim to finally eradicate disease and poverty. And every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.”

    Huw Price, the centre’s academic director and the Bertrand Russell professor of philosophy at Cambridge University, where Hawking is also an academic, said that the centre came about partially as a result of the university’s Centre for Existential Risk. That institute examined a wider range of potential problems for humanity, while the LCFI has a narrower focus.

    AI pioneer Margaret Boden, professor of cognitive science at the University of Sussex, praised the progress of such discussions. As recently as 2009, she said, the topic wasn’t taken seriously, even among AI researchers. “AI is hugely exciting,” she said, “but it has limitations, which present grave dangers given uncritical use.”

    The academic community is not alone in warning about the potential dangers of AI as well as the potential benefits. A number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed their concerns about the damage that a super-intelligent AI could do to humanity.

    1. What did Stephen Hawking think of artificial intelligence?

    A. It would be vital to the progress of human civilization.
    B. It might be a blessing or a disaster in the making.
    C. It might present challenges as well as opportunities.
    D. It would be a significant expansion of human intelligence.

    Multiple-choice question

    Human memory is notoriously unreliable. Even people with the sharpest facial-recognition skills can only remember so much. It’s tough to quantify how good a person is at remembering. No one really knows how many different faces someone can recall, for example, but various estimates tend to hover in the thousands — based on the number of acquaintances a person might have.

    Machines aren’t limited this way. Give the right computer a massive database of faces, and it can process what it sees — then recognize a face it’s told to find — with remarkable speed and precision. This skill is what supports the enormous promise of facial-recognition software in the 21st century. It’s also what makes contemporary surveillance systems so scary.

    The thing is, machines still have limitations when it comes to facial recognition. And scientists are only just beginning to understand what those constraints are. To begin to figure out how computers are struggling, researchers at the University of Washington created a massive database of faces — they call it MegaFace — and tested a variety of facial-recognition algorithms as they scaled up in complexity. The idea was to test the machines on a database that included up to 1 million different images of nearly 700,000 different people — and not just a large database featuring a relatively small number of different faces, more consistent with what’s been used in other research.

    As the database grew, machine accuracy dipped across the board. Algorithms that were right 95% of the time when they were dealing with a 13,000-image database, for example, were accurate about 70% of the time when confronted with 1 million images. That’s still pretty good, says one of the researchers, Ira Kemelmacher-Shlizerman. “Much better than we expected,” she said.

    Machines also had difficulty adjusting for people who look a lot alike — either doppelgangers, whom the machine would have trouble identifying as two separate people, or the same person who appeared in different photos at different ages or in different lighting, whom the machine would incorrectly view as separate people. “Once we scale up, algorithms must be sensitive to tiny changes in identities and at the same time invariant to lighting, pose, age,” Kemelmacher-Shlizerman said.

    The trouble is, for many of the researchers who’d like to design systems to address these challenges, massive datasets for experimentation just don’t exist — at least, not in formats that are accessible to academic researchers. Training sets like the ones Google and Facebook have are private. There are no public databases that contain millions of faces. MegaFace’s creators say that it’s the largest publicly available facial-recognition dataset out there. “An ultimate face recognition algorithm should perform with billions of people in a dataset,” the researchers wrote.

    6. Compared with human memory, machines can _________.

    A. identify human faces more efficiently
    B. tell a friend from a mere acquaintance
    C. store an unlimited number of human faces
    D. perceive images invisible to the human eye
