⚬ Will developments in Deep Learning completely solve the problem(s) of Artificial Intelligence?
⚬ If not, how could and/or should AI and ML continue towards such a general solution?
⚬ What is going to be "the next big thing" that lets us build beyond the recent success of Deep Learning?
Extended abstracts outlining (some of) the speakers' positions regarding the topic(s) of the symposium can be found in the pre-proceedings of CoCoSym 2018.
Chair/Host: Tarek R. Besold, City, University of London.
Co-Convenors: Artur d'Avila Garcez, City, University of London. // Michael Spranger, SONY CSL. // Michael Witbrock, IBM Research AI.
⚬ Eduardo Alonso, City, University of London.
⚬ Antoine Bordes, Facebook AI Research.
⚬ Edward Grefenstette, DeepMind.
⚬ Barbara Hammer, Bielefeld University.
⚬ Kristian Kersting, TU Darmstadt.
⚬ Alessio Lomuscio, Imperial College London.
⚬ Stephen Muggleton, Imperial College London.
⚬ Alessandra Russo, Imperial College London.
⚬ Luciano Serafini, Fondazione Bruno Kessler.
⚬ Hava Siegelmann, DARPA & University of Massachusetts Amherst.
⚬ Francesca Toni, Imperial College London.
⚬ Volker Tresp, LMU Munich & Siemens.
⚬ Frank van Harmelen, VU Amsterdam.
⚬ Geraint Wiggins, VU Brussel & Queen Mary, University of London.
⚬ Michael Wooldridge, University of Oxford.
⚬ Willem Zuidema, University of Amsterdam.
City, University of London // Tait Building // Northampton Square, EC1V 0HB, London, UK // Rooms: C307 (9am - 11am) & C312 (11am - 6pm).
Deep Learning is currently the dominant topic in Machine Learning and Artificial Intelligence, delivering newsworthy headlines on a weekly basis and keeping academia, industry, and the general public on their toes.
Still, a growing number of voices (including one of "the fathers of Deep Learning") are questioning how long this level of progress can be sustained, and how far Deep Learning as a paradigm can actually go.
In order to understand many of the corresponding worries, it is worth looking back a few decades into the history of AI and ML. While early work on knowledge representation and inference was primarily symbolic, the corresponding approaches subsequently fell out of favor and were largely supplanted by connectionist methods (eventually leading to the present dominance of Deep Learning). Symbolic methods excelled at tasks requiring reasoning and related forms of processing knowledge in a structured manner, but were much too inflexible and brittle when it came to tasks requiring adaptive behaviour based on learning from (possibly even noisy) data. The situation today seems like a mirror image thereof: while Deep Learning approaches excel at processing noisy data in numerous application domains, the corresponding systems lack top-down control and the ability to reason over learned information.
Using "Cognitive Computation" as a joint umbrella term and final aim, this symposium brings together established leaders in the fields of neural computation, logic and artificial intelligence, knowledge representation, natural language understanding, machine learning, cognitive science, and computational neuroscience. They are invited to share their views on the "Big Questions" stated above, outlining relevant parts of their recent work and engaging in a discussion with each other and the audience on the future of AI and ML.
The ticketing system for CoCoSym 2018 has been closed, as all tickets have sold out. Apologies for any inconvenience caused!
General questions concerning the workshop should be addressed to Tarek R. Besold at Tarek(hyphen)R(dot)Besold(at)city(dot)ac(dot)uk.
This symposium is conceptually related to the series of International Workshops on Neural-Symbolic Learning and Reasoning (NeSy).
Please also feel free to join the neural-symbolic integration mailing list for announcements and discussions - it's a low-traffic mailing list.