Ken Jennings: Watson, Jeopardy and me, the obsolete know-it-all


Ken Jennings is a game show star who holds the record for the longest winning streak on the U.S. game show Jeopardy! and is the second-highest-earning contestant in American game show history.


Ken Jennings loved game shows from a young age, and felt extreme satisfaction when he beat his parents at Trivial Pursuit – “knowledge is power”. In 2004 he appeared on Jeopardy! for the first time, and in 2009 he got a call from the producers asking him to play against IBM’s Jeopardy machine: Watson. He agreed partly out of love of the game, but also because he knew the state of AI at the time and thought he could win: it is extremely difficult for computers to understand language and the nuance of natural communication, so Ken was confident. As the match came closer, he saw graphs of Watson’s performance against other Jeopardy players’ skill levels, slowly creeping towards his own. He knew the AI was coming for him – not in the gunsights of a Terminator, but in a line of data slowly creeping upwards.

On the day, IBM programmers came out to support Watson, and Watson won handily. Ken remembers feeling the way a Detroit factory worker must have felt on realising his job had been made obsolete by a robot. He was one of the first knowledge workers to have this feeling, but far from the last: pharmacists, paralegals and sports journalists are also slowly being overtaken by thinking machines. In a lot of cases the machines don’t show the same creativity, but they do the job much more cheaply and quickly than a human.

As computers take over thinking jobs, do humans still need to learn anything, or know anything? Will our brains shrink as more tasks get outsourced, and computers remember more facts?

Ken believes having this knowledge in your head is still important, for two reasons: volume and time.

  • Volume, because the amount of information is doubling every 18 months, and we need to make good judgements about it. We need facts in our heads to assemble a decision; it is much harder to weigh facts while looking them up.
  • Time, because sometimes you need to make a decision immediately. Ken tells the story of a girl on a beach who remembered a fact from a geography lesson: the tide rushing out is a precursor to a tsunami. Her knowledge and quick response on Boxing Day 2004 saved the people on that beach – something that could not have happened unless she already knew it.

Shared knowledge is also an important social glue: people can bond over a shared experience or knowing something in a way that can’t be simulated by looking things up together.

Ken doesn’t want to live in a world where knowledge is obsolete, or where humanity has no shared cultural knowledge. Right now we need to decide what our future will look like: will we enter an information golden age where we make use of our expanded access to knowledge, or will we simply stop bothering to learn? Ken wants us to keep being curious, inquisitive people – to have an unquenchable curiosity.

My Thoughts

I’m unclear whether Ken is speaking out against AIs in general, or just about how we manage a transition to increasingly prevalent AIs. I agree with him that people must continue to learn and do things; however, I also feel there is no reason to force us to keep doing jobs once a computer can do them. Instead, people should be free to explore, learn and find new hobbies for themselves.

This talk is interesting, especially if you are a fan of Jeopardy or of Ken. His experience is one that I’m sure a lot of people will have over the coming years. However, it is mostly anecdotal: I don’t think it adds much insight to the topic of AI. Then again, it isn’t supposed to – know that going in and it should be great.

I for one welcome our new robot overlords

Ray Kurzweil: Get ready for hybrid thinking


Ray Kurzweil is an American author, computer scientist, inventor, futurist, and is a director of engineering at Google.


200 million years ago mammals evolved the neocortex. This allowed them to learn and think their way around problems, and to develop new behaviour within a lifetime. Earlier reptiles needed to ‘evolve’ new behaviour over thousands of years, but these early rodent-like mammals could do so almost instantly. This helped mammals survive the Cretaceous extinction event, and since then the neocortex has grown larger and larger to enable higher-level thinking.

The brain is a set of roughly 300 million modules arranged in hierarchies that work on patterns of data: each module recognises, learns and implements a pattern. For example, a low-level module might look for the crossbar of an “A”, a higher module would decide the letter is an “A”, and higher levels still would recognise the word, the sentence, and so on. The hierarchy can also work in reverse, using higher-level context (the rest of the word) to lower recognition thresholds, as if asking “could this letter possibly be an A?”. This is similar to a Hierarchical Hidden Markov Model, which is used in AI to understand language.
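This bottom-up/top-down interplay can be sketched in a few lines of Python. To be clear, this is a toy of my own making, not Kurzweil’s actual model – the module names, thresholds and numbers are all invented for illustration:

```python
# Toy sketch of hierarchical pattern recognition: a low-level module
# reports evidence for a feature, and top-down context from a higher
# level can effectively lower its recognition threshold.

class PatternModule:
    def __init__(self, name, threshold=0.7):
        self.name = name
        self.threshold = threshold

    def recognise(self, evidence, context_boost=0.0):
        # Top-down context ("the rest of the word looks like 'APPLE'")
        # lowers the bar this module's evidence has to clear.
        return evidence >= self.threshold - context_boost

# Bottom-up only: weak evidence for the crossbar of an "A" is rejected.
crossbar = PatternModule("A-crossbar", threshold=0.7)
print(crossbar.recognise(0.6))                     # False

# With top-down context from the word level, the same evidence passes.
print(crossbar.recognise(0.6, context_boost=0.2))  # True
```

The real model stacks many such modules, with each level passing expectations down and recognitions up, which is what makes the comparison to Hierarchical Hidden Markov Models apt.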

In the future, hybrid thinking will emerge, combining human and computer thinking. Google will understand language as more than just a series of keywords, and could anticipate users’ problems and keep them up to date on research of interest to them. Ray also predicts that nanobots could interface with our neocortex and connect it to ‘the cloud’, massively expanding our brainpower with an external computer network. This would expand our neocortex – and remember how powerful the neocortex was the last time it expanded. This time we will not be restricted by the architecture of our skulls: there may be no limit.

My Thoughts

His history of the neocortex is one of the better descriptions I have heard. The models he describes are easy for the layman to understand, yet concrete enough to apply to reality.

His comments on the future seem a bit too sci-fi, though. It isn’t that this won’t happen, but he doesn’t really describe how or why it will. Thoughts of the AI singularity and similar ideas have been knocking around human culture for 50 years, constantly just around the corner. We are no doubt closer now than before, but the nanobots and ‘brain extension’ he talks about are a long way away. Even if AI is ready for this advancement, medical understanding of the brain is still too limited to connect us to computers.

Andrew McAfee: Are droids taking our jobs?


Andrew McAfee is the associate director of the Center for Digital Business at the MIT Sloan School of Management, studying the ways information technology (IT) affects businesses and business as a whole.


As technology advances, the media focus is on how it will affect employment. There are clear signs that technology is decreasing the employment rate – the recovery from the last recession increased GDP and spending without a corresponding increase in jobs. Projecting forward, Andrew used GDP and productivity growth to predict that jobs will continue to decrease – and this assumes the past continues without a ‘step change’. Andrew thinks even this is optimistic: in truth there will be a step change that makes the gap far wider still. “You ain’t seen nothing yet.”

In recent years computers have encroached on tasks previously thought exclusively human – knowledge work. Translation and journalism have begun to be taken over by programs. These do the job, but are criticised as simplistic and sometimes flawed. However, if they improve at the pace of Moore’s law (which Andrew argues they will), they will be 16 times better in 6 years. In the physical world, Google’s autonomous car is doing a great job and will likely replace truck drivers. The conclusion is that computers are going to take over jobs, but that this is not necessarily a bad thing – it could lead to a utopian future rather than a dystopia.
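The “16 times better in 6 years” figure is just compound doubling: Moore’s law, as used in the talk, assumes capability doubles every 18 months, so six years gives four doublings:

```python
# Moore's-law style improvement: capability doubles every 18 months.
doubling_period_years = 1.5
years = 6
doublings = years / doubling_period_years  # 4 doublings in 6 years
improvement = 2 ** doublings
print(improvement)  # 16.0
```

Whether machine translation or automated journalism actually tracks transistor density is, of course, the contested part of the claim.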

Looking at the ‘great achievements’ of human history (religions, empires, wars, disease, the age of exploration), none had a significant impact on human population size or social development. The only one to cause a big step was the development of the steam engine and the industrial revolution, which suddenly had an exponential effect on both population and development. It overcame the limitations of human muscle, just as the AI revolution will overcome the limitations of the individual human mind.

Currently, innovation is moving away from the ‘ivory tower’ and becoming more widely distributed, merit-based and transparent. Technology is giving profound benefits to the wealthy, but also to the poor: an economic study of Indian fishing villages showed a much more efficient, fair and less wasteful economy once mobile phones were introduced.

The droids are taking our jobs, but this will free up humans to do other things. We will move on to other endeavours – reducing poverty and living more lightly on the planet. What we do with these machines will make a mockery of all human achievements before it, just as the steam engine did in its time. Ken Jennings, the ultimate Jeopardy! champion, lost to Watson by a factor of 3:1 in points; one of his answers included the line “I for one welcome our new computer overlords”.

My Thoughts

I’ve looked at Andrew’s talks before, and “Are droids taking our jobs?” covers similar material to the more recent “What will future jobs look like?”. I think I got more out of the latter – I suggest seeing that one first.

Having said that, the look at previous human achievements was fascinating, though I’m sure that on a different scale the impacts of other technologies (agriculture, the use of metals) would have been visible. I agree that AI will give a similarly clear step change – one completely different from the previous (and still massive) computing achievements of the past 70 years.

Jeremy Howard: The wonderful and terrifying implications of computers that can learn


Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco.


Traditional programming means telling a computer in absolute detail how to achieve a task. This is difficult unless the programmer is an expert in the task being taught, and it prevents the computer from ever being better than the programmer. Machine learning instead allows a computer to learn on its own – as when Arthur Samuel programmed a computer to learn checkers until it could beat him. Nowadays machine learning has been successfully commercialised: Google is built on it, LinkedIn and Facebook use it to recommend friends, and Amazon uses it to recommend products.
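The contrast can be made concrete with a deliberately tiny example of my own (the spam rule and data are invented, and real systems are vastly more sophisticated): in one case the programmer writes the rule, in the other the program derives the rule from labelled examples.

```python
# Traditional programming: the programmer writes the rule explicitly.
def is_spam_handcoded(num_exclamation_marks):
    return num_exclamation_marks > 3   # threshold chosen by the programmer

# Machine learning (crudest possible form): search for the threshold
# that best separates labelled examples, instead of hand-picking it.
def learn_threshold(examples):
    # examples: list of (num_exclamation_marks, is_spam) pairs
    best, best_errors = 0, len(examples)
    for t in range(0, 11):
        errors = sum((count > t) != label for count, label in examples)
        if errors < best_errors:
            best, best_errors = t, errors
    return best

data = [(0, False), (1, False), (2, False), (6, True), (8, True)]
t = learn_threshold(data)
print(t)  # 2 – the first threshold that classifies every example correctly
```

The learned rule is only as good as its examples, but nobody had to know the “right” threshold in advance – which is the essence of Samuel’s checkers result.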

Deep learning is an algorithm inspired by how the human brain works, so – assuming enough computation and learning time – there are arguably no limits to what it can do. For example, deep learning can:

  • drive cars
  • understand spoken English, translate it to Chinese, and read it back in Chinese
  • recognise images with an error rate down to 6% – better than human levels
  • look at an image and identify similar images
  • write a caption for an image
  • understand sentence structure and language

These are very human-centric tasks that computers are now able to do.

Computers are also exceeding and enhancing human performance. In cancer diagnosis, a computer analysed tumours and discovered features unknown to human doctors that help predict survival rate and treatment. The computer’s survival predictions were more accurate than the humans’, and its discoveries improved the science of cancer treatment. Such a system can be developed by people with no background in medicine, and it replaces the data analysis and diagnostic parts of the medical process, leaving doctors more time to gather input data and apply treatments. The developing world has 10–20 times fewer doctors than it needs, and it would take many generations to train enough; if computers can learn to fill these roles, lives will be saved.

On the flipside, computers will wipe out the service industries whose roles are to read documents, drive cars and talk – more than 80% of the jobs in the developed world. In the past (e.g. the industrial revolution), a large number of jobs became obsolete at the same time as new jobs came into being, but computer learning is much more disruptive, since it takes very few people to develop and roll out the algorithms. Once fully rolled out, computers will surpass humans at an exponential rate, especially once they can redesign themselves to be better and better.

Better education and incentives to work will not fix the resulting unemployment if there are simply no jobs to do. We need to look at this problem differently – by decoupling labour from earnings, or by moving to a craft-based economy. Jeremy asks us all to think about how to adjust to this new reality.

My Thoughts

Google emailed me a YouTube update suggesting I watch this video. As someone who watches a lot of TED talks and has an interest in artificial intelligence, Google knew I would watch it.

Jeremy makes a lot of good points. Personally I think he used too many examples, which made the talk a little disorienting to follow. Nonetheless, he pulls out the important points, and to me his final one matters most: this massive change in our economy is coming, and very few people seem prepared for it. How will we deal with a world of 80% unemployment, where all those jobs are no longer necessary for us to maintain the same standard of living?

Andrew McAfee: What will future jobs look like?


Andrew McAfee is the associate director of the Center for Digital Business at the MIT Sloan School of Management, studying the ways information technology (IT) affects businesses and business as a whole.


Prophecy is hard, but it is easy to see that the future will hold more things that sound like science fiction, and fewer jobs. Even in the near future, drivers are being replaced with automated cars, something similar to Siri and Watson will take over customer service jobs, and automatic trolleys can automate warehouses. Replacing workers with technology has been happening for 200 years, and most people still had jobs – but now machines are picking up skills they have never had before: to understand, speak and think. Jobs will be replaced by machines, but this is wonderful economic news for two reasons.

  • Technology is the reason that economies can grow, prices can come down, and quality continue to increase all at the same time.
  • Machines mean that people don’t have to do these jobs any more. No more drudgery or toil, we can evolve society in a new way – to become innovators and explorers and thinkers.

So what are the challenges in this transition?

The first challenge is economic: it is tough to sell your labour in a world full of machines. Over recent decades company profits have increased while their labour costs (and jobs) have decreased. In the future, companies will rely on a prosperous middle class to sell their wares to, but the middle class is now under threat: median income is decreasing while inequality is increasing. Andrew compares a typical white male US blue collar worker with a similar white collar professional. In the 1960s they were quite similar, at 80 and 90% employment respectively. Since then (as automation took hold), the blue collar worker’s employment rate has dropped below 60%, blue collar marriages have become much less happy (from 60% happy in the 1960s to 20% now), blue collar workers have disengaged from politics, and they are 5 times more likely to be imprisoned than 50 years ago. The white collar trends have stayed close to where they were in the 1960s.

So how do we deal with the disengagement of blue collar workers? The simple short-term solution is to build infrastructure, encourage entrepreneurs and educate people so they can be employed. But a more total replacement of workers by machines demands a deeper response, such as a guaranteed minimum income. This is often decried as socialism or as encouraging laziness, yet the US already has lower social mobility than European countries with strong social safety nets.

Education is one solution to the major issues. Primary school education is currently pitched at creating factory workers and blue collar clerks – Andrew wants to retool it towards a different goal. He is optimistic that things will improve, but the issues need to be embraced and confronted, and radical solutions devised. The facts are becoming more widely known: the machine age is coming. Abraham Lincoln stated that “if given the truth they (people) can be depended on to face any national crisis”.

My Thoughts

The talk foresees a world very different from the one we know now, and is optimistic that humanity can work out the challenges posed by massive unemployment (at least in the jobs we can now see). Personally I am disappointed it focussed only on the decline of blue collar jobs, as I am interested in what would happen if professional jobs were made redundant as well.

Regardless, his suggestion of a minimum income seems to be the only workable solution to massive unemployment. Nothing else makes sense in a world where (most) people aren’t needed to do jobs – the alternatives are insane: either compelling people to apply for jobs that don’t exist, or stifling innovation so that the jobs still exist.