Doina Precup

McGill University & Google DeepMind, Canada

Bio

Doina Precup is an associate professor in the School of Computer Science of McGill University, Montreal. She obtained her BEng degree from the Technical University Cluj-Napoca, Romania (1994), and her MSc (1997) and PhD (2000) from the University of Massachusetts, Amherst, where she was a Fulbright fellow. Her research interests are in the areas of artificial intelligence, machine learning and applications of these methods to real-world problems.
She is the head of the Google DeepMind Montreal lab.

Keynote topic
TBA

Alexander Clark

King’s College London, UK

Bio

After a degree in mathematics at Cambridge, Alexander Clark took his DPhil at the University of Sussex in the Department of Cognitive and Computing Sciences. He then worked as a post-doctoral researcher at the Institute for the Study of Semantics and Cognition at the University of Geneva, and from 2005 as a lecturer in the Department of Computer Science at Royal Holloway, University of London. He is now a Lecturer at King’s College London.

Dr Clark’s research is in the field of formal linguistics, broadly construed: studying language with a particular emphasis on problems of learning and acquisition and how this relates to the nature of linguistic knowledge, and the properties of syntax. Much of his technical work has been in machine learning and in particular in grammatical inference: studying the process of learning hierarchically structured representations or grammars, using generalisations of the sorts of distributional learning studied by American structuralist linguists.

Keynote topic
Weak and Strong Learning of Probabilistic Context-Free Grammars: Theoretical and Empirical Perspectives.

The problem of learning context-free grammars from a finite sample of their yields is a classic problem in grammatical inference; in this talk I will look at this problem and a modern variant: the problem of learning probabilistic context-free grammars (PCFGs) from a sample of strings drawn i.i.d. from the distribution over strings defined by such a grammar.
We will consider both the problem of learning an accurate approximation of the distribution over strings (weak learning) and the much harder problem of also learning the underlying unobserved parse trees (strong learning), which is related to the task of unsupervised parsing in the NLP community. I will discuss some theoretical results using distributional learning approaches and present some empirical results on both synthetic and natural examples, motivated by the problem of first language acquisition.
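As a concrete illustration of the learning setting (a sketch of mine, not part of the talk): below is a toy PCFG in Python from which strings are sampled i.i.d.; the learner observes only the yields (weak learning), while recovering the hidden parse trees as well is the strong learning problem. The grammar, symbols and probabilities are invented purely for illustration.

    import random

    # Toy PCFG: each nonterminal maps to (right-hand side, probability) pairs.
    # All symbols and probabilities here are made up for illustration only.
    PCFG = {
        "S":  [(("NP", "VP"), 1.0)],
        "NP": [(("det", "n"), 0.7), (("NP", "PP"), 0.3)],
        "VP": [(("v", "NP"), 0.6), (("VP", "PP"), 0.4)],
        "PP": [(("p", "NP"), 1.0)],
    }
    TERMINALS = {"det", "n", "v", "p"}

    def sample(symbol="S"):
        """Sample a parse tree and its yield (terminal string) from the toy PCFG."""
        if symbol in TERMINALS:
            return symbol, (symbol,)
        rules, probs = zip(*PCFG[symbol])
        rhs = random.choices(rules, weights=probs, k=1)[0]
        children = [sample(child) for child in rhs]
        tree = (symbol, [t for t, _ in children])
        string = tuple(w for _, ws in children for w in ws)
        return tree, string

    # The weak learner is given only the sampled strings; the trees stay hidden.
    corpus = [sample()[1] for _ in range(5)]
    for s in corpus:
        print(" ".join(s))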

Kousha Etessami

University of Edinburgh, UK

Bio

Kousha Etessami is a professor of Computer Science at the University of Edinburgh, Scotland, UK. He received his PhD from the University of Massachusetts, Amherst, in 1995. He works on theoretical computer science, in particular computational complexity theory, game theory and probabilistic systems.

Keynote topic
Algorithms for Weighted and Probabilistic Context-Free Grammars and Convex Optimization.
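No abstract is available yet. As background only (a sketch of mine, not drawn from the talk): for a probabilistic CFG, the probability that each nonterminal derives a finite string is the least fixed point of a monotone system of polynomial equations; simple value iteration approximates it from below, while Newton-type methods, which connect to convex optimization, converge much faster. The one-rule grammar below is made up.

    # Made-up PCFG with a single nonterminal X:
    #   X -> X X  with probability 0.6
    #   X -> a    with probability 0.4
    # The probability p that X derives a finite string satisfies p = 0.6*p^2 + 0.4;
    # its least fixed point in [0, 1] is 2/3 (so this toy grammar is "inconsistent").

    def termination_probability(p_branch=0.6, p_stop=0.4, iterations=1000):
        """Kleene/value iteration towards the least fixed point, starting from 0."""
        p = 0.0
        for _ in range(iterations):
            p = p_branch * p * p + p_stop
        return p

    print(termination_probability())  # approximately 2/3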

George Argyros

Columbia University, USA

Keynote topic
The Learnability of Symbolic Automata.

Symbolic automata allow transitions to carry predicates over rich alphabet theories, such as linear arithmetic, and therefore extend classic automata to operate over infinite alphabets, such as the set of rational numbers. In this talk, we study the problem of learning symbolic automata and exactly characterize under what conditions symbolic automata can be learned efficiently. First, we present an L*-style algorithm for learning symbolic automata using membership and equivalence queries, which treats the predicates appearing on transitions as their own learnable entities. The main novelty of the algorithm is that it can take as input an algorithm Λ for learning predicates in the underlying alphabet theory, and it uses Λ to infer the predicates appearing on the transitions of the target automaton. Using this idea, the algorithm is able to learn automata operating over alphabet theories in which predicates are efficiently learnable using membership and equivalence queries. Furthermore, we prove that a necessary condition for efficient learnability of a symbolic finite automaton (SFA) is that the predicates in the underlying algebra are also efficiently learnable using queries. Our results settle the learnability of a large class of SFA instances, leaving open only a subclass of SFAs over efficiently learnable alphabet theories. We implement the algorithm in an open-source library and show that it can efficiently learn automata that cannot be learned using existing algorithms, and that it significantly outperforms existing automata learning algorithms over large alphabets.
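To make the object of study concrete, here is a small Python sketch (mine, not taken from the talk or from any particular library) of a symbolic automaton over the integers whose transitions carry simple predicates; a membership query just runs the word through those predicates. The example language is invented, and the learning algorithm itself is not sketched here.

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Set, Tuple

    @dataclass
    class SFA:
        """A symbolic finite automaton: transitions carry predicates over an
        infinite alphabet (here the integers) instead of concrete symbols."""
        initial: int
        accepting: Set[int]
        # transitions[state] is a list of (predicate, target) pairs; the
        # predicates leaving a state are assumed to be mutually exclusive.
        transitions: Dict[int, List[Tuple[Callable[[int], bool], int]]]

        def accepts(self, word: List[int]) -> bool:
            state = self.initial
            for symbol in word:
                for predicate, target in self.transitions.get(state, []):
                    if predicate(symbol):
                        state = target
                        break
                else:
                    return False  # no transition fires on this symbol
            return state in self.accepting

    # Example: accept exactly the words in which every integer is positive.
    sfa = SFA(
        initial=0,
        accepting={0},
        transitions={
            0: [(lambda x: x > 0, 0), (lambda x: x <= 0, 1)],
            1: [(lambda x: True, 1)],  # rejecting sink state
        },
    )

    print(sfa.accepts([3, 7, 42]))   # True
    print(sfa.accepts([3, -1, 42]))  # False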
