Held at the Babbage Lecture Theatre, New Museums Site, Cambridge, and chaired by Dr Sarah Dillon (Leverhulme Centre for the Future of Intelligence). Speakers included:
Prof. Simon Schaffer (University of Cambridge – social history of science)
Prof. Murray Shanahan (DeepMind, Imperial College London)
Prof. Margaret A. Boden OBE ScD FBA (University of Sussex, Research Professor of Cognitive Science)
Prof. Nathan Ensmenger (School of Informatics, Computing, and Engineering, Indiana University)
Pamela McCorduck (author of Machines Who Think, an authoritative history of AI)
The event commemorates the 60th anniversary of the 1958 ‘Mechanisation of Thought Processes’ symposium held at the National Physical Laboratory in Teddington, England – the first international symposium on AI.
‘Human-level intelligence is familiar in biological hardware – it happens inside our skulls. Many researchers now take seriously the possibility that similar intelligence will be created in computers, perhaps within this century. Freed of biological constraints, such as limited memory and slow biochemical processing speeds, machines may eventually become more intelligent than we are – with profound implications for us. As Stephen Hawking has put it, “when it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”’
Notes:
Relevance of the 1958 conference
“To dream about what is possible and then create it”
Simon Schaffer –
What is the political genealogy of AI?
The National Physical Laboratory was established in 1900 to standardise scientific measurement throughout the British Empire.
In the 1800s, what we now call AI was known as ‘inanimate reason’.
Pamela McCorduck –
The model of intelligence has changed: it was once seen as exclusively human, whereas now work on primate intelligence develops following AI and machine learning. There is a recurring language of ‘automation’ in the original papers of the 1950s. Today the language around AI is focussed almost exclusively on the technological impact on work. In the 1950s computers were seen as, and designed to, replace humans – not a comfortable subject for many AI developers, then or now.
Margaret Boden –
In 1958 John McCarthy said it was too early to discuss the social implications of AI. The 1950s symposium papers discuss different kinds of AI, including cybernetics, digital computers, analogue computers, machinery… different sorts of expertise and sub-fields. In 1958 AI ideas were being used in other fields – “interdisciplinary” work with biology, neuroscience and psychology. Less so today.
Murray Shanahan –
Selfridge’s ‘Pandemonium’ architecture references Milton – demons. In 1959, O. G. Selfridge proposed an architecture, which he called Pandemonium, that could be useful for pattern recognition. In the model, different “demons” operate at different levels to identify letters. He demonstrated that it could effectively be used to distinguish dots and dashes in hand-keyed Morse code and to distinguish hand-printed characters from ten possibilities (a toy sketch of the idea follows the reference below).
Reference: Humphrey Jennings’ book “Pandaemonium, 1660–1886: The Coming of the Machine as Seen by Contemporary Observers” – a collection of articles and excerpts on the development and impact of the Industrial Revolution across society.
https://en.wikipedia.org/wiki/Pandaemonium_(history_book)
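For illustration only, a minimal toy sketch (in Python) of the Pandemonium idea – not Selfridge’s original implementation, and the feature names and letter templates below are invented for the example. Each “cognitive demon” shrieks in proportion to how well the observed features match its letter, and a “decision demon” picks the loudest.

```python
# Toy Pandemonium-style letter recogniser (illustrative sketch only).
# Feature names and letter templates are made up for the example.

# Cognitive demons: each letter demon knows which features it expects to see.
COGNITIVE_DEMONS = {
    "A": {"diagonal_stroke", "horizontal_bar"},
    "T": {"vertical_bar", "horizontal_bar"},
    "O": {"closed_loop"},
    "X": {"diagonal_stroke"},
}

def recognise(observed_features):
    """Decision demon: return the letter whose demon 'shrieks' loudest."""
    def shriek(expected):
        # A demon shouts louder for each expected feature that is present,
        # and quieter for each expected feature that is missing.
        return len(expected & observed_features) - 0.5 * len(expected - observed_features)

    return max(COGNITIVE_DEMONS, key=lambda letter: shriek(COGNITIVE_DEMONS[letter]))

if __name__ == "__main__":
    # Feature demons would normally extract these from an image; here we
    # hand the decision layer a ready-made set of observed features.
    print(recognise({"vertical_bar", "horizontal_bar"}))  # -> "T"
```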
AI Replacing humans in work
Nathan Ensmenger –
Autonomous vehicles need detailed mapping that is done by humans. The labour needed to create infrastructure for autonomous vehicles is not mentioned in the PR: drivers, road teams, coders, Google employees, etc. Many people are employed to facilitate AI. The term ‘automation’ is often conflated with ‘AI’, though the two do not always have anything to do with each other. Which term is used often depends on whose work is affected – if blue collar it’s called ‘automation’ (robots), if white collar it’s ‘AI’.
Women in AI:
Margaret Boden –
Margaret Masterman was known for her pioneering work in the field of computational linguistics and especially machine translation. She founded the Cambridge Language Research Unit and carried out important early work on dictionaries that led to a paper by her students in the 1970s which underpins all search engines today.
https://en.wikipedia.org/wiki/Margaret_Masterman
International Competition in AI:
Murray Shanahan –
Technology/AI has always been a national competition (esp from US perspective)… competition came from Russia, then Japan, now China.
Simon Schaffer –
AI’s ‘Sputnik moment’ convinced the Chinese government to invest heavily. Moved on from just being about chess. The catalyst was a game of Go between the 19-year-old champion Ke Jie and Google’s AlphaGo. ‘Believed to have been invented more than 2,500 years ago, Go’s history extends further into the past than any board game still played today. In ancient China, Go represented one of the four art forms any Chinese scholar was expected to master. The game was believed to imbue its players with a Zen-like intellectual refinement and wisdom. Where games like Western chess were crudely tactical, the game of Go is based on patient positioning and slow encirclement, which made it into an art form, a state of mind.’
https://asiasociety.org/blog/asia/chinas-sputnik-moment-and-sino-american-battle-ai-supremacy
How do we help those people/countries shut out of the AI revolution?
Simon Schaffer –
Is AI cultural imperialism? Is globalisation the new colonialism? Ref. imperialist ideas of the automaton as Eastern – a malign influence, invaders. Why the Mechanical Turk chess-playing machine, built by a Hungarian, was styled as Turkish: https://en.wikipedia.org/wiki/The_Turk
The failure of other countries to adopt/replicate Western tech is seen as their own fault – a denial of responsibility by the originators.
Margaret Boden –
Conversing with AI devices forces us to use a very limited form of English – no nuance, irony or idiosyncrasy. Worry that children’s use of language will be affected. A lot of money is going into replacing human carers (for the elderly and lonely) with AI systems (particularly in Japan). In Japanese care homes, residents without visitors receive a maximum of two minutes of human contact a day. Unethical. Rules and ethics are being developed ad hoc, in a variety of locations, with little collaboration or agreement. One common suggestion is that AI should not present itself as being human.
Hype in AI
Pamela McCorduck –
Excessive “hype” in AI conversations. Editors and investors are particularly guilty – it is more saleable to talk of the fantastical than of realistic limits and slow rates of progress. Be honest and accurate about what can be done. There is a tendency to sensationalise. Speculation is not prediction. Keep a realistic horizon: what can be achieved in 5 years, rather than what might be possible in 100 years.