Neural-Symbolic Integration

Workshop series on Neural-Symbolic Learning and Reasoning

At Dagstuhl Seminar 14381 in Wadern, Germany, held in September 2014 and marking the tenth edition of the Workshop on Neural-Symbolic Learning and Reasoning, it was decided that Neural-Symbolic Learning and Reasoning should become an association with a constitution and a more formal membership and governance structure.

The object of the NeSy association is to promote research in neural-symbolic learning and reasoning, as well as communication and the exchange of best practice among associated research areas. To this end, the NeSy association primarily conducts the annual Workshop on Neural-Symbolic Learning and Reasoning (NeSy workshop) and, through its steering committee, the NeSy workshops, and the maintenance of the website http://www.neural-symbolic.org, promotes interaction with related associations such as IFCoLog, INNS, BCS, IEEE, ACM, AAAI, CogSci, and BICA.

The Steering Committee (2014-2018) of the Neural-Symbolic Learning and Reasoning Association, which oversees the running of the workshop series of the same name, is composed of:

We invite every researcher interested in the object of the NeSy association to become a member. Membership is currently free. To apply, please complete and submit the application form at http://staff.city.ac.uk/~aag/nesy. The NeSy Steering Committee will usually decide on your application within a few days of submission.

You are also invited to join the NeSy Google+ community at https://plus.google.com/u/0/communities/118350808562167749910 and the LinkedIn group at https://www.linkedin.com/groups?home=&gid=4006768.

With best regards,
The NeSy Steering Committee

NeSy Workshops and Seminars:

SIG Explainable AI

Demands for more explainable, interpretable, or comprehensible Artificial Intelligence (AI) and Machine Learning (ML) systems are raised with increasing frequency. The Special Interest Group on Explainable AI offers a forum for interaction and collaboration on topics relating to "explanation", "interpretability", and "comprehensibility" in the context of AI and ML. It works towards a better understanding of what an explanation is when talking about intelligent systems, what it means to interpret or comprehend a system and its behaviour, and how human-machine interaction can take these dimensions into account.

Related Dagstuhl Seminars

Other Related Events

Overview Articles

Tutorials and Courses at Summer Schools

Books

Journal Special Issues

Community

Page maintained by Pascal Hitzler.