Danny Hillis: The Internet could crash. We need a Plan B

Speaker

Danny Hillis is an inventor, scientist, author and engineer. While completing his doctorate at MIT, he pioneered the concept of parallel computing that now underpins most supercomputers, as well as the RAID disk array.

Summary

Danny was one of the first users of the internet – back in 1982, when everyone’s email address and contact details were printed in a thin phone book. All users trusted each other, taking only what they needed (domain names) and passing messages on each other’s behalf when bandwidth was low. He jokes that it is remarkable that such communist ideals underwrote the US defense department’s efforts during the Cold War.

Trust is much lower nowadays, and we deal with this by building smaller walled networks – VPNs and subnetworks – that imitate the internet on a smaller scale. The internet’s protocols remain vulnerable to attack and to silly mistakes: YouTube was once blocked across most of Asia because of a routing error made in Pakistan, and a mistake at a Chinese telecom recently sent a large proportion of US internet traffic (including defence networks) through China. Whether or not that was an accident, it is easy to see how someone could abuse the same mechanism intentionally. Industrial control networks can also be crippled – these systems are not thought of as part of the internet, yet they can still be attacked, as when the centrifuges in an Iranian nuclear plant destroyed themselves in a cyber-attack.

Internet security tends to focus on the target’s computers, not on the internet itself. An early bug in ARPAnet caused one router to claim it could deliver a packet in negative time, and other routers, looking for the quickest delivery, sent everything through it. To fix the bug they had to reset the whole internet – a process that would be impossible now, with so many other systems reliant on it. The internet’s protocols and building blocks are now used in ways and in systems they were never designed for: mobile phone networks, rocket communications, petrol pumps. It has become a system where people understand the individual components, but no one understands the scale of the whole or how it fits together. It was a small system built on trust, and it has expanded well beyond what was intended.
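The ARPAnet routing bug is easy to sketch. This is a hypothetical illustration (not ARPAnet’s actual code): if every router naively forwards to whichever neighbour advertises the lowest delivery cost, a single buggy router advertising a negative cost attracts all traffic.

```python
# Hypothetical sketch: naive lowest-cost route selection, and how one
# router advertising a *negative* delivery time captures all traffic.

def best_next_hop(advertised_costs):
    """Pick the neighbour advertising the lowest cost to the destination."""
    return min(advertised_costs, key=advertised_costs.get)

# Healthy network: each neighbour advertises a plausible delivery time (ms).
print(best_next_hop({"A": 30, "B": 45, "C": 20}))  # -> C

# A buggy router claims negative delivery time; every peer that blindly
# minimises cost now funnels everything through it.
print(best_next_hop({"A": 30, "B": 45, "C": 20, "buggy": -7}))  # -> buggy
```

Because every router runs the same minimisation independently, the bad advertisement propagates everywhere at once – which is why the fix required resetting the whole network rather than one machine.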

Danny proposes a separate system, independent of the internet, as a ‘backup’ in case the internet is taken down by an attack. It needn’t be as big and wouldn’t be complicated to design – just something that lets emergency services keep communicating. He calls it one of the easiest TED ideas to implement; we just need to convince people that it is worth doing.

My Thoughts

Danny’s discussion of the internet in its early days is fascinating, however I’m not entirely sure what he is asking us to do now. He mentions police need to talk to fire services – is he just advocating that phone networks or radios stay independent of the internet? How does independence work anyway: he said himself that industrial / military networks are designed to be separate from the internet but are vulnerable to attack regardless.

He mentions the technical details are easy to design – perhaps he should put a proposal forward with a ‘build it and they will come’ philosophy, so we can clearly see what he is proposing and what functionality it would give. Until then it is hard to imagine what we need from a ‘backup internet’.

Jeremy Howard: The wonderful and terrifying implications of computers that can learn

Speaker

Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco.

Summary

Traditional programming means telling a computer in absolute detail how to achieve a task. This is difficult unless the programmer is an expert in the task, and it prevents the computer from being better than the programmer. Machine learning allows a computer to learn on its own – as when Arthur Samuel programmed a computer to beat him at checkers. Machine learning has since been successfully commercialised: Google is built on it, LinkedIn and Facebook use it to recommend friends, and Amazon uses it to recommend products.
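The contrast the talk draws can be shown in a toy example (my own sketch, not from the talk): instead of hard-coding the rule, the program infers it from examples, here recovering y = 2x by a one-parameter least-squares fit.

```python
# Sketch of "learning from examples" versus hard-coding a rule.

def learn_slope(xs, ys):
    """Fit the one-parameter linear model y ~ w * x by least squares."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs, ys = [1, 2, 3, 4], [2, 4, 6, 8]   # examples of the behaviour we want
w = learn_slope(xs, ys)               # the program discovers w = 2.0 itself
print(w * 5)                          # prediction for an unseen input: 10.0
```

The rule (`w = 2`) appears nowhere in the code; it is extracted from data – the same idea, scaled up enormously, behind the recommendation systems mentioned above.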

Deep learning is an algorithm inspired by how the human brain works, so given enough computation and training time there are, in principle, no limits. For example, deep learning can:

  • drive cars
  • listen to spoken English, translate it to Chinese, and read it back in Chinese
  • recognise images, with an error rate down to 6% – better than human performance
  • look at an image and identify similar images
  • write a caption for an image
  • understand sentence structure and language

These are very human-centric tasks that computers are now able to do.

Computers are also exceeding and enhancing human performance. In cancer diagnosis, a computer analysed tumours and discovered features unknown to human doctors that help predict survival rates and treatment. Its survival predictions were more accurate than doctors’, and its discoveries improved the science of cancer treatment. Such a system can be built with no background in medicine, and it replaces the data-analysis and diagnostic parts of the medical process, leaving doctors more time to gather input data and apply treatments. The developing world has 10–20 times fewer doctors than it needs, and it would take generations to train enough. If computers can learn to fill these roles, lives will be saved.

On the flipside, computers will wipe out service jobs whose role is to read documents, drive cars, or talk – more than 80% of the jobs in the developed world. In the past (e.g. the industrial revolution) large numbers of jobs became obsolete just as new jobs came into being, but machine learning is far more disruptive, since very few people are needed to develop and roll out the algorithms. Once fully rolled out, computers will surpass humans at an exponential rate, redesigning themselves to be better and better.

Better education and incentives to work will not fix the resulting unemployment if there are no jobs to do. We need to look at the problem differently – by decoupling labour from earnings, or by moving to a craft-based economy. Jeremy asks us all to think about how to adjust to this new reality.

My Thoughts

Google emailed me a YouTube update suggesting I watch this video. As someone who watches a lot of TED talks and has an interest in artificial intelligence, Google knew I would watch it.

Jeremy makes a lot of good points. Personally I think he used too many examples, which made the talk a little disorienting to follow. Nonetheless, he pulls out the important points, and to me his final one matters most: this massive change in our economy is coming, and very few people seem prepared for it. How will we deal with a world of 80% unemployment, where all those jobs are no longer necessary for us to maintain the same standard of living?

Alex Wissner-Gross: A new equation for intelligence

Speaker: Alex Wissner-Gross

Length: 11:48

Summary

“The question of whether machines can think is about as relevant as the question of whether submarines can swim” – computer scientist Edsger Dijkstra, criticising early computer scientists’ obsession with whether machines can ‘think’.

Alex set out to develop a universal equation for intelligence. He noticed that many recent intelligent programs take actions that maximise their future options – they avoid being ‘trapped’.

His equation is F = T ∇Sτ, where:

  • F is the force of intelligence
  • T is the strength with which it acts to keep future options open
  • ∇Sτ is the gradient of the diversity of accessible futures S over the time horizon τ

Universes with more entropy are more conducive to intelligence. Alex discussed Entropica, a program that appears to set its own goals by maximising long-term entropy. Without being instructed to do so, it balances a pole upright, uses tools, forms social networks, and even plays the stock market. All of these apparently human traits emerge from this one equation.
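The idea behind Entropica can be sketched in miniature. This is my own hedged illustration, not Entropica’s actual algorithm: an agent on a bounded line picks whichever move leaves the greatest diversity of states reachable within a time horizon, and goal-like behaviour (moving away from walls, toward open space) falls out of entropy maximisation alone.

```python
# Hypothetical sketch of "maximise future options": pick the action whose
# successor state keeps the most futures reachable within a horizon.

def reachable_states(pos, steps, lo=0, hi=10):
    """All positions reachable in `steps` moves of -1/0/+1 on a bounded line."""
    frontier = {pos}
    for _ in range(steps):
        frontier = {p + d for p in frontier for d in (-1, 0, 1)
                    if lo <= p + d <= hi}
    return frontier

def entropic_move(pos, horizon=4):
    """Choose the legal move that maximises the diversity of future states."""
    moves = [d for d in (-1, 0, 1) if 0 <= pos + d <= 10]
    return max(moves, key=lambda d: len(reachable_states(pos + d, horizon)))

# An agent pressed against the wall (position 0) moves toward the middle,
# where more futures stay accessible -- no goal was programmed in.
print(entropic_move(0))  # -> 1
```

Here the count of reachable states stands in for the entropy term Sτ; the agent is literally following the gradient of its future freedom of action.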

From this experiment, the following conclusions can be drawn:

  • The ability to take control of our universe is not a result of intelligence, but a requirement for intelligence.
  • Goal seeking is important to maximise future actions, even at the cost of today’s action
  • Intelligence is a physical process that maximises future freedom, and resists future confinement

A fascinating talk, with important implications in philosophy of intelligence and computer science in addition to the fields mentioned during the experiments. Strongly recommended.