AI - the end of humanity or the next evolutionary step?
Computers are becoming more powerful. Much more powerful. Last week, Gordon Moore, the co-founder of Intel Corporation, died. A computer industry billionaire, he came up with ‘Moore’s Law’, the observation that the power of computers doubles every couple of years. Today a microchip can contain 50 billion transistors, each narrower than a strand of human DNA.
The war of the robots has begun. OpenAI’s ‘ChatGPT’, backed by Microsoft, and its rival, Google’s ‘Bard’, allow you to have a conversation with a computer, much as you would with another person. But it’s not just talk. As well as writing essays, presentations, legal documents and sermons, artificial intelligence can also produce art. We’ve accepted that machines can beat us at chess, but might they soon also beat us at poetry, painting and music? Could they make Shakespeare look second-rate? Or will art without human input always be worthless?
Some people are impressed by the quality of what AI can create, but others are scared. It’s one thing for computers to process our knowledge, but quite another when a machine starts to teach itself. If it behaves just like a real person, will we trust it more than we should? Can machines display morality, and if not, is it safe to allow them to make decisions for us? We worry that AI might take our jobs, but should we really be worrying that it might replace humanity altogether?
Some see AI as the next evolutionary step, humanity’s latest development, with the potential to transform lives for the better. But what are the risks in asking technology, however impressive, to solve human problems? Should we be excited by AI, or could artificial intelligence mark the beginning of the end of humanity?
Producer: Jonathan Hallewell
Presenter: Michael Buerk
Editor: Tim Pemberton