SYNTHETIC ORGANISMS
About this Workshop
$50B and 30 years' worth of effort have been invested in self-driving cars, yet it is an open secret that they still require human operators in the loop. The same can be said of chatbots and contextual agents.
Why is that?
Why is it that machines still struggle with what we as humans find easy?
We believe that paradigm-level shifts in thinking - some big and some small - need to be realized across a multitude of seemingly unrelated fields.
This workshop attempts to tie them together through a series of thought-provoking questions, observations, and proposals.
Example items we will touch upon:
How does learning in natural systems differ from artificial ones?
Does it make sense to talk about artificial intelligence without artificial life first? Why or why not?
Should we bypass simulations via hybrid wetware (i.e., biological neural nets in a vat)?
Should we stop thinking about robots and start thinking about synthetic organisms?
What do we miss if we artificially replicate an intelligence without the natural lineage that intelligence came from?
Is interaction in a/the world decisive in bypassing passive big data?
What would replacing task-based paradigms with open-endedness gain us?
Is there an "atom" of intelligence/learning?
What is the MVO (minimally viable organism) of natural intelligence?
Do you believe that embodiment can be replaced with data acquired from the Internet and/or simulation?
Is statistical learning a good language to deal with world complexity?
Is it correct to say that the current struggles of AI stem from an attempt to apply "short-tailed" statistical methods to "long-tailed" data?
If you had Google/FB/Microsoft compute at your disposal, what would you do with it?
In 1980 the Neocognitron was proposed as a biologically inspired framework; it led to convolutional neural networks and, consequently, the current wave of AI. Is there anything equivalent today that might create a similar wave of research in years to come?
How do you see interaction with the environment as a way to short-circuit big data? Is "big data" a cul-de-sac for AI? Examples?
How do you see your proposal/agenda being implemented using current paradigms, if at all?
Schedule
(All times are PST.) The workshop takes place on April 4th, 2022, at the Robosoft 2022 conference and will be held virtually.
7:00 am - 7:15 am: Tarin / Filip / Todd: Intro
7:15 am - 7:40 am: Dileep George [Evolution, inductive biases, and general intelligence towards Visual Reasoning]
7:40 am - 8:05 am: Stephane Deny [Bio-inspired Deep Learning: Moving away from Pattern Recognition]
8:05 am - 8:30 am: Melanie Mitchell [Abstraction and Analogy: The Keys to Robust AI]
8:30 am - 9:00 am: Food/Coffee break
9:00 am - 9:25 am: Paul Cisek [What can the history of real organisms tell us about synthetic ones?]
9:25 am - 9:50 am: Tony Zador [A Critique of Pure Learning]
9:50 am - 10:15 am: Josh Bongard [Dissolving dichotomous thinking with living robots]
10:15 am - 10:40 am: Michael Levin [Morphogenesis as Collective Intelligence of Cells: non-neural substrates of cognition]
10:40 am - 10:55 am: Coffee break
11:00 am - 12:15 pm: PANEL DISCUSSION [ALL Speakers]