PhD AI Seminars At BeCentral

The past seminars have shown how proud Belgium can be of its academics and researchers in the domain of AI. Please keep registering for and attending the exciting upcoming Mondays. Unfortunately, this initiative will stop after the last Monday of January 2020, since no one, no politician, no university, agrees to cover the cost of the organisation. AI4Belgium, a federal initiative launched by Alexander De Croo that involved many AI actors in the country, not surprisingly, did not result in anything concrete. It is quite depressing for our country, when you know that Quebec (8 million inhabitants) is today investing hundreds of millions of dollars in AI, while we are not even able to find 5,000 euros to finance an initiative like this one.

  • Marco Dorigo - ULB

    Swarm Robotics

    Swarm robotics is about designing, constructing and controlling swarms of autonomous robots that cooperate to perform tasks that go beyond the capabilities of the single robots in the swarm. In this seminar, I will first present results obtained with homogeneous and heterogeneous swarms of robots that cooperate both physically and logically to perform a number of different tasks. I will then discuss ongoing research in three directions: collective decision making, mergeable nervous systems, and stigmergy-guided construction. In collective decision making, we study how a swarm of robots can collectively choose the best among n possible options. In mergeable nervous systems, we study how self-organisation can be made more powerful as a tool to coordinate the activities of a robot swarm by injecting some components of hierarchical control. Finally, in stigmergy-guided construction, we study how a swarm of robots can coordinate their actions to construct a structure by exploiting stigmergic communication.
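
    To make the best-of-n collective decision problem concrete, here is a deliberately simplified, hypothetical sketch (not the algorithms studied in the speaker's group): robots hold an opinion about which option is best, and a quality-weighted voter rule creates the positive feedback that typically lets the swarm converge on the best option.

    ```python
    import random

    def best_of_n_sim(n_options=3, n_robots=100, steps=20000, seed=0):
        """Toy best-of-n collective decision: robots advertise their current option,
        and a robot copies the opinion of an advertiser sampled with probability
        proportional to the (unknown to the robots) quality of the advertised option."""
        rng = random.Random(seed)
        quality = [rng.uniform(0.5, 1.0) for _ in range(n_options)]    # option qualities
        opinion = [rng.randrange(n_options) for _ in range(n_robots)]  # random initial opinions
        for _ in range(steps):
            i = rng.randrange(n_robots)
            # quality-weighted voter model: better options are advertised more effectively
            weights = [quality[opinion[j]] for j in range(n_robots)]
            j = rng.choices(range(n_robots), weights=weights, k=1)[0]
            opinion[i] = opinion[j]
        counts = [opinion.count(k) for k in range(n_options)]
        return quality, counts

    quality, counts = best_of_n_sim()
    print("option qualities:", [round(q, 2) for q in quality])
    print("final opinion counts:", counts)  # the best option typically gains a clear majority
    ```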

  • Luc De Raedt - KUL

    Introduction to Probabilistic (Logic) Programming

    The tutorial will provide a motivation for, an overview of, and an introduction to the fields of statistical relational learning and probabilistic programming. These combine rich expressive relational representations with the ability to learn, represent and reason about uncertainty. The tutorial will introduce a number of core concepts concerning representation and inference. It will focus on probabilistic extensions of logic programming languages, such as CLP(BN), BLPs, ICL, PRISM, ProbLog, LPADs, CP-logic, SLPs and DYNA, but will also discuss relations to alternative probabilistic programming languages such as Church, IBAL and BLOG and, to some extent, to statistical relational learning models such as RBNs, MLNs, and PRMs. The concepts will be illustrated on a wide variety of tasks, including models representing Bayesian networks, probabilistic graphs, stochastic grammars, etc. This should allow participants to start writing their own probabilistic programs. We further provide an overview of the different inference mechanisms developed in the field, and discuss their suitability for the different concepts. We also touch upon approaches to learn the parameters of probabilistic programs, show how deep learning and probabilistic programming can be combined, and mention a number of applications in areas such as robotics, vision, natural language processing, web mining, and bioinformatics.
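
    As a minimal illustration of what a probabilistic logic program computes, the sketch below obtains ProbLog-style success probabilities by brute-force enumeration of possible worlds on a toy alarm program; the facts and probabilities are invented, and real engines use knowledge compilation rather than enumeration.

    ```python
    from itertools import product

    # Probabilistic facts (ProbLog-style): each is independently true with its probability.
    prob_facts = {"burglary": 0.1, "earthquake": 0.2}

    def alarm(world):
        # Deterministic rules: alarm :- burglary.   alarm :- earthquake.
        return world["burglary"] or world["earthquake"]

    def query_probability(query):
        """Success probability of `query` = sum of the probabilities of all possible
        worlds (truth assignments of the probabilistic facts) in which it succeeds.
        Exhaustive enumeration, fine for toy programs only."""
        total = 0.0
        names = list(prob_facts)
        for values in product([True, False], repeat=len(names)):
            world = dict(zip(names, values))
            p = 1.0
            for name in names:
                p *= prob_facts[name] if world[name] else 1.0 - prob_facts[name]
            if query(world):
                total += p
        return total

    print(query_probability(alarm))  # 1 - 0.9 * 0.8 = 0.28
    ```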

  • Geraint Wiggins - VUB

    Statistical learning and spectral representation of meaning in the Information Dynamics of Thinking

    We present a theory of cognitive architecture called Information Dynamics of Thinking. It posits a statistical model of cognitive function (at a level somewhat more abstract than the neural level) that is intended to explicate the control loop of perception and action in humans. Crucially, unlike many cognitive architectures, the approach is predictive, in the sense that it aims to predict the world, rather than merely react to it. In this way, the architecture aims to optimise its function by being prepared for events in the world; the approach also admits synchronisation where appropriate. The model is driven by a domain-independent statistical heuristic, that of information-efficiency - in other words, it aims to conserve cognitive activity for deployment only when necessary. The architecture learns implicitly, and attempts to find the structure in the perceptual sequences that it learns, using information-theoretic principles. While doing so, it constructs representations of temporal trajectories in the representation space of its input, as spectra, using Fourier transforms. Thus, it becomes possible to consider complex temporal structures as points in high-dimensional spaces with well-understood mathematical properties. In this talk, I will explain the architecture and present some recent results that supply evidence for the ideas behind it.
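
    A toy numpy sketch of the spectral idea mentioned above, not the speaker's implementation: a temporal trajectory in a (here one-dimensional) representation space is mapped to a fixed-length point via its Fourier magnitude spectrum, so that structurally similar trajectories end up close together in a space with well-understood properties.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def spectral_point(trajectory, n_coeffs=16):
        """Represent a 1-D temporal trajectory as a point in a fixed-dimensional space:
        the magnitudes of its first few Fourier coefficients."""
        return np.abs(np.fft.rfft(trajectory)[:n_coeffs])

    t = np.linspace(0, 1, 128)
    slow = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)   # slowly varying sequence
    fast = np.sin(2 * np.pi * 12 * t) + 0.1 * rng.normal(size=t.size)  # rapidly varying sequence

    p_slow, p_fast = spectral_point(slow), spectral_point(fast)
    p_slow_clean = spectral_point(np.sin(2 * np.pi * 2 * t))
    print("distance between two slow sequences:  ", round(np.linalg.norm(p_slow_clean - p_slow), 2))
    print("distance between a slow and a fast one:", round(np.linalg.norm(p_fast - p_slow), 2))
    ```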

  • Benoit Macq - UCL

    Trusted coalitions and efficient distributed deep learning for improved image-based decisions

    Images are the reference modality for computer-assisted decision making in medicine, in urban surveillance, and in many live interactive user experiences such as indoor amusement rides. These decision systems require shared and trusted models built from supervised learning among the community of users. In many cases, the learning can be accelerated by constituting coalitions of partners, each working on similar data. In this case, distributed learning allows the members of the coalition to train and share a model without sharing the data used to optimize this model. A security architecture will be presented and analyzed from an information-theoretic point of view; it guarantees the preservation of data privacy for each member of the coalition, and a fair usage of the shared model, by using adequate encryption, watermarking and blockchain mechanisms. We will demonstrate its effectiveness in the case of the distributed optimization of a deep convolutional neural network trained on medical images. This architecture can be extended by sharing not only data within the coalition but also analysis procedures. Concrete examples will be given from projects in the fields of image-guided radiotherapy, urban surveillance and interactive dark rides.
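
    A toy sketch of the distributed-learning idea behind the talk, assuming a plain federated-averaging scheme (the encryption, watermarking and blockchain layers of the presented architecture are not shown): each coalition member trains locally on its private data and only the model weights are shared and averaged.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def local_sgd(w, X, y, lr=0.1, epochs=5):
        """A few epochs of logistic-regression SGD on one member's private data."""
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    # Three coalition members, each with its own private dataset (never exchanged).
    d = 5
    w_true = rng.normal(size=d)
    datasets = []
    for _ in range(3):
        X = rng.normal(size=(200, d))
        y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)
        datasets.append((X, y))

    w_global = np.zeros(d)
    for _ in range(20):
        # Each member refines the shared model locally; only the weights are averaged.
        local_models = [local_sgd(w_global.copy(), X, y) for X, y in datasets]
        w_global = np.mean(local_models, axis=0)

    acc = np.mean([((X @ w_global > 0) == (y > 0.5)).mean() for X, y in datasets])
    print(f"average training accuracy after federated averaging: {acc:.2f}")
    ```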

  • Johan Suykens - KU Leuven, ESAT & leuven.ai

    Deep Learning, Neural Networks and Kernel Machines: towards a Unifying Framework

    Deep learning and neural networks, beyond the use of one-hidden-layer architectures, have become powerful and popular models including e.g. convolutional neural networks (CNN), stacked autoencoders, deep Boltzmann machines (DBM), deep generative models and generative adversarial networks (GAN). However, the existence of many local minima in training remains a drawback. Support vector machines (SVM) and kernel methods, on the other hand, often rely on convex optimization and are very suitable for handling high-dimensional input data. In this talk several synergies between neural networks, deep learning, least squares support vector machines and kernel methods will be explained. Primal and dual model representations play a key role at this point in supervised, unsupervised and semi-supervised learning problems. The methods will be illustrated with application examples e.g. in electricity load forecasting, weather forecasting, pollution modelling and community detection in large-scale networks. Furthermore, restricted kernel machine (RKM) representations will be discussed, which connect least squares support vector machines and related kernel machines to restricted Boltzmann machines (RBM). New generative RKM models will be shown. The use of tensor-based models is also very natural within the new RKM framework. Deep restricted kernel machines (Deep RKM), which consist of restricted kernel machines taken in a deep architecture, are explained. In these models a distinction is made between depth in a layer sense and depth in a level sense. Links and differences with stacked autoencoders and deep Boltzmann machines are given. The framework makes it possible to conceive both deep feedforward neural networks (DNN) and deep kernel machines, through primal and dual model representations. Future perspectives and challenges will be outlined.
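
    As a small illustration of the kernel-machine side of the story, here is a hedged numpy sketch of a least squares SVM classifier: training reduces to solving a single linear system in the dual with an RBF kernel. Parameter values are arbitrary and the example is not taken from the talk.

    ```python
    import numpy as np

    def rbf(A, B, sigma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        """LS-SVM classifier: the dual problem is the linear system
        [[0, y^T], [y, Omega + I/gamma]] [b, alpha]^T = [0, 1]^T,
        with Omega_ij = y_i y_j K(x_i, x_j)."""
        n = len(y)
        Omega = np.outer(y, y) * rbf(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = y
        A[1:, 0] = y
        A[1:, 1:] = Omega + np.eye(n) / gamma
        rhs = np.concatenate([[0.0], np.ones(n)])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]          # bias b, dual variables alpha

    def lssvm_predict(X_train, y, b, alpha, X_new, sigma=1.0):
        return np.sign(rbf(X_new, X_train, sigma) @ (alpha * y) + b)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 2))
    y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # XOR-like, not linearly separable
    b, alpha = lssvm_train(X, y)
    print("train accuracy:", (lssvm_predict(X, y, b, alpha, X) == y).mean())
    ```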

  • Gianluca Bontempi - ULB

    From supervised learning to causal inference

    "We are drowning in data and starving for knowledge" is an old adage of data scientists that nowadays should be rephrased into "we are drowning in associations and starving for causality". The democratization of machine learning software and big data platforms is increasing the risk of ascribing causal meaning to simple and sometimes brittle associations. This risk is particularly evident in settings (like bioinformatics [2], social sciences, economics) characterised by high dimension, multivariate interactions and dynamic behaviour, where direct manipulation is not only unethical but also impractical. The conventional ways to recover a causal structure from observational data are score-based and constraint-based algorithms. Their limitations, mainly in high dimension, opened the way to alternative learning algorithms which pose the problem of causal inference as the classification of probability distributions. The rationale of those algorithms is that the existence of a causal relationship induces a constraint on the observational multivariate distribution. In other words, causality leaves footprints in the data distribution that can hopefully be used to reduce the uncertainty about the causal structure. This presentation will introduce some basics of causal inference and will discuss the state of the art on machine learning for causality. In particular, we will focus on the D2C approach [1], which featurizes observed data by means of asymmetric information-theoretic measures [3,4] to extract meaningful hints about the causal structure. The D2C algorithm performs three steps to predict the existence of a directed causal link between two variables in a multivariate setting: (i) it estimates the Markov Blankets of the two variables of interest and ranks their components in terms of their causal nature, (ii) it computes a number of asymmetric descriptors, and (iii) it learns a classifier (e.g. a Random Forest) returning the probability of a causal link given the descriptor values.
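
    The sketch below is not the D2C implementation, but a toy version of its underlying idea under simplifying assumptions: simulate cause-effect pairs from an additive-noise model, compute a few crude asymmetric descriptors of the joint distribution, and train a Random Forest to classify the causal direction.

    ```python
    import numpy as np
    from scipy.stats import skew
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def make_pair():
        """Generate (x, y, label): label 1 if x causes y, 0 if y causes x."""
        cause = rng.normal(size=500)
        effect = np.tanh(2 * cause) + 0.3 * rng.uniform(-1, 1, size=500)
        if rng.random() < 0.5:
            return cause, effect, 1
        return effect, cause, 0

    def descriptors(x, y):
        """A few crude asymmetric descriptors of the joint distribution of (x, y)."""
        res_xy = y - np.polyval(np.polyfit(x, y, 3), x)   # residuals of y regressed on x
        res_yx = x - np.polyval(np.polyfit(y, x, 3), y)   # residuals of x regressed on y
        return [res_xy.var() - res_yx.var(),
                abs(np.corrcoef(res_xy, x)[0, 1]) - abs(np.corrcoef(res_yx, y)[0, 1]),
                skew(x) - skew(y)]

    pairs = [make_pair() for _ in range(400)]
    Z = np.array([descriptors(x, y) for x, y, _ in pairs])
    labels = np.array([l for _, _, l in pairs])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z[:300], labels[:300])
    print("held-out accuracy on causal direction:", clf.score(Z[300:], labels[300:]))
    ```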

  • Marco Saerens - UCL

    The bag-of-paths framework for network data and link analysis: a short survey with applications

    Today, network data (e.g., online social networks, document citation networks, protein interaction networks, etc.) are studied in almost all areas of science. In this context, we will discuss the bag-of-paths (BoP) framework and its various applications to link prediction, community detection, node classification, node centrality computation, optimal transport on a graph, and Markov decision processes, among others. The BoP is a versatile framework that defines a Gibbs-Boltzmann probability distribution over all paths of the network. The spread of the distribution is controlled by a temperature parameter, monitoring the balance between exploration (a random walk on the graph) and exploitation (following least-cost paths). Therefore, this model usually extends well-known measures of interest by interpolating between an optimal and a completely random behavior.
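
    A minimal numpy sketch of the bag-of-paths construction on a tiny unweighted graph, assuming unit edge costs and the natural random walk as reference distribution (and, as a simplification, also counting zero-length paths): single-step weights are the reference probabilities times exp(-theta), and the fundamental matrix sums the weights of all paths, with theta interpolating between random-walk and least-cost behaviour.

    ```python
    import numpy as np

    def bag_of_paths(A, theta):
        """Toy bag-of-paths sketch on an unweighted graph with adjacency matrix A."""
        P_ref = A / A.sum(axis=1, keepdims=True)   # random-walk reference probabilities
        W = P_ref * np.exp(-theta)                 # Gibbs-Boltzmann weight of one unit-cost step
        Z = np.linalg.inv(np.eye(len(A)) - W)      # fundamental matrix: sums weights of all paths
        return Z / Z.sum()                         # probability of drawing a path from i to j

    # Path graph 0 - 1 - 2 - 3
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    for theta in (0.1, 5.0):
        P = bag_of_paths(A, theta)
        # Low theta: long exploratory paths keep weight; high theta: least-cost paths dominate.
        print(f"theta={theta}: P(start=0, end=3) = {P[0, 3]:.4f}")
    ```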

  • Gilles Louppe - ULiège

    Neural likelihood-free inference

    In many scientific fields such as particle physics, climatology or epidemiology, simulators often provide the best description of real-world phenomena. However, they also lead to challenging inverse problems because the density they implicitly define is often intractable. In this course, we will present a suite of simulation-based inference techniques (frequentist and Bayesian) that go beyond the traditional Approximate Bayesian Computation approach, which typically struggles in a high-dimensional setting. We will cover inference methods that use surrogate models based on modern neural networks, including variants of likelihood-ratio estimation algorithms, MCMC sampling techniques or probabilistic programming inference engines. We will also show that additional information, such as the joint likelihood ratio and the joint score, can often be extracted from simulators and used to augment the training data for these surrogate models. We will demonstrate that these new techniques are more sample-efficient and provide higher-fidelity inference than traditional methods.
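
    A minimal sketch of one ingredient mentioned above, the classifier-based likelihood-ratio trick, on a stand-in "simulator" whose density is actually known so the estimate can be checked; real applications use far richer simulators and neural network classifiers.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def simulator(theta, n):
        """Stand-in simulator: draws from N(theta, 1). In real applications the
        density is intractable and we can only sample from the simulator."""
        return rng.normal(theta, 1.0, size=(n, 1))

    theta0, theta1 = 0.0, 1.0
    x0 = simulator(theta0, 20000)
    x1 = simulator(theta1, 20000)
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

    # Likelihood-ratio trick: a classifier with output s(x) = p(theta1 | x) gives
    # r(x) = p(x | theta1) / p(x | theta0) = s(x) / (1 - s(x)) for balanced classes.
    clf = LogisticRegression().fit(X, y)

    x_val = 0.5
    s = clf.predict_proba([[x_val]])[0, 1]
    ratio_est = s / (1 - s)
    ratio_true = np.exp(-(x_val - theta1) ** 2 / 2) / np.exp(-(x_val - theta0) ** 2 / 2)
    print(f"estimated ratio {ratio_est:.2f} vs true ratio {ratio_true:.2f}")
    ```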

  • Walter Daelemans - University of Antwerp

    From Text to Knowledge

    In this part of the course, natural language understanding, the automatic extraction of knowledge from text, is addressed. Text is the main medium of human knowledge representation. However, because of its unstructured nature and the inherent problems of ambiguity and implicitness of text, automatic language understanding is still one of the central unsolved problems of Artificial Intelligence. We start with a brief history of this subfield and a characterization of the two main current approaches: machine reading approaches of the IBM Watson type and end-to-end deep neural network approaches. We also explain how the absence of context representation and background knowledge (encyclopedic and common-sense knowledge) is the main obstacle to full automatic language understanding. Then we will introduce three levels of knowledge extraction from text. At the first level, we try to get objective, factual knowledge from text types such as news stories and scientific articles using concept and relation extraction techniques. I will illustrate this with current work in our lab on information extraction from clinical notes. A second level of knowledge extraction concerns subjectivity analysis (emotion and sentiment), especially from social media text. This will be illustrated with current projects on political analysis. The third level concerns the extraction of knowledge about the author of the text (metadata) on the basis of linguistic characteristics of the text. For example, demographic and psychological knowledge about the author (age, gender, personality, education level, native language, mental health, etc.). This third level will be illustrated with current work in our lab on personality profiling from text.

  • Chris Develder - T2K team

    Neural network applications in NLP

    In these lectures, we will introduce the main neural network approaches that are currently adopted in state-of-the-art natural language processing (NLP) solutions. Starting from our recent research, we will highlight the main neural models to deal with written texts, treating them as sequences of symbols. We will thus discuss recurrent neural network models and how they are adopted in various NLP tasks, ranging from classification tasks, over sequence labeling tasks, to sequence-to-sequence models (seq2seq). We will also touch upon the use of word embeddings, i.e., continuous vector representations, to address the sparse-feature issues that arise when modeling language in terms of discrete symbols. These topics will be framed in the context of our recent work, e.g., on joint entity recognition and relation extraction. Other examples may include lyric annotation, as an illustration of applying a seq2seq model. Character-level neural models can also be illustrated, e.g., in the context of word-level predictions such as morphological tagging, while giving insight into the characters supporting the prediction. The efficiency of neural models, and techniques to sparsify them (in terms of model parameters), will also be discussed. Finally, recent work on solving math word problems will be shown.
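
    As an illustration of the kind of recurrent sequence-labeling model discussed here, the following hedged PyTorch sketch overfits a single invented sentence with a BiLSTM tagger; the vocabulary, tags and hyperparameters are made up for the example and are not taken from the speaker's work.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical toy task: tag each token as ENTITY (1) or OTHER (0).
    vocab = {"<pad>": 0, "alice": 1, "works": 2, "at": 3, "ghent": 4, "university": 5}
    sent = torch.tensor([[1, 2, 3, 4, 5]])   # "alice works at ghent university"
    tags = torch.tensor([[1, 0, 0, 1, 1]])   # entity / other labels

    class BiLSTMTagger(nn.Module):
        def __init__(self, vocab_size, emb_dim=32, hidden=64, n_tags=2):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_tags)   # one tag score per token

        def forward(self, x):
            h, _ = self.lstm(self.emb(x))              # (batch, seq_len, 2*hidden)
            return self.out(h)                         # (batch, seq_len, n_tags)

    model = BiLSTMTagger(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):                            # overfit the single toy sentence
        logits = model(sent)
        loss = loss_fn(logits.view(-1, 2), tags.view(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    print(model(sent).argmax(-1))                      # should recover the tag sequence
    ```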

  • Thomas Francois - Cental-UCL

    AI readability formulas: how AI helped revive a one-hundred-year-old field

    The field of readability aims at automatically assessing the difficulty of texts for a given population. A readability model uses some of the linguistic characteristics of the texts and combines them with a statistical algorithm, traditionally linear regression. The first attempts date back to the 1920s, and the field has since seen the development of famous formulas such as Flesch's (1948) or Gunning's (1952). They have been widely used in the Anglo-Saxon world, for instance to control the difficulty of articles in mainstream newspapers. However, the limitations of readability models were stressed as early as the end of the 1970s (Kintsch and Vipond, 1979) and the field somewhat went dormant. In the early 2000s, the combined use of natural language processing (NLP) and machine learning models led to the improvement of the traditional approaches (Collins-Thompson and Callan, 2005; Schwarm and Ostendorf, 2005) and the revival of the field. Very recently, word embeddings and neural networks have also been applied to automatic text assessment (Jiang et al., 2018; Le et al., 2018). In this presentation, we will first outline the main tendencies in the field, with a focus on recent work that applies AI techniques to readability. We will then cover our own work in the field, first based on standard machine learning algorithms (e.g. SVM), before moving to our latest experiments with deep learning models. To conclude, we will discuss the contributions and limitations of NLP and AI approaches to a notoriously complex task such as readability.
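
    For reference, the classic Flesch (1948) Reading Ease score mentioned above can be computed in a few lines; the syllable counter below is a crude heuristic, so treat the output as illustrative only.

    ```python
    import re

    def count_syllables(word):
        """Very crude syllable estimate: count groups of vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """Flesch Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
        Higher scores mean easier text."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 206.835 - 1.015 * len(words) / len(sentences) - 84.6 * syllables / len(words)

    easy = "The cat sat on the mat. The dog ran. It was fun."
    hard = ("The notwithstanding clause necessitates a comprehensive reconsideration "
            "of intergovernmental constitutional responsibilities.")
    print(round(flesch_reading_ease(easy), 1), round(flesch_reading_ease(hard), 1))
    ```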

  • Paul Van Eecke - Artificial Intelligence Laboratory Vrije Universiteit Brussel

    Modelling the Emergence and Evolution of Language

    During this seminar, we will discuss a long-term research programme that aims to unravel the mechanisms underlying the emergence and evolution of language. The main hypothesis of this research programme is that language is an evolutionary system that is shaped by processes of variation, selection, self-organisation and emergent functionality. These processes and the mechanisms that are involved are studied through agent-based simulations, in which a population of autonomous agents evolves a language that is adequate for achieving its communicative goals. I will start by discussing the concepts of variation, selection, self-organisation and emergent functionality, and how they apply to language. Then, I will introduce the 'language game paradigm' (Steels 1995) as a methodology to investigate the emergence and evolution of language. I will discuss earlier experiments on the emergence and evolution of concepts and words, and then move to currently ongoing experiments on the emergence and evolution of conceptual systems and grammatical structures. Finally, I will demonstrate the relevance of this research programme for AI applications such as visual question answering.
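
    A minimal, heavily simplified sketch of a language game in the spirit of the naming game: agents repeatedly interact in pairs and, through invention, adoption and alignment, the population converges on a single name for an object. This illustrates the paradigm only, not the experiments presented in the seminar.

    ```python
    import random

    def naming_game(n_agents=50, n_interactions=20000, seed=1):
        """Minimal naming game: agents try to agree on a name for one object."""
        rng = random.Random(seed)
        vocab = [set() for _ in range(n_agents)]   # each agent's known names for the object
        next_name = 0
        for _ in range(n_interactions):
            s, h = rng.sample(range(n_agents), 2)  # speaker and hearer
            if not vocab[s]:                       # speaker invents a name if it has none
                vocab[s].add(next_name)
                next_name += 1
            name = rng.choice(sorted(vocab[s]))
            if name in vocab[h]:                   # success: both keep only that name
                vocab[s] = {name}
                vocab[h] = {name}
            else:                                  # failure: hearer adopts the name
                vocab[h].add(name)
        return len({n for v in vocab for n in v})

    print("distinct names left in the population:", naming_game())  # typically converges to 1
    ```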

  • Hugues Bersini - IRIDIA/ULB

    How to twist AI algorithms to favour public goods over private ones

    The world has become too complex to entrust its management to flesh-and-blood human governance alone, which such complexification disarms. Facing this multiplication of threatening complexities, we increasingly accept to be helped by ubiquitous algorithmic assistance. Often these algorithms treat their user through a dedicated focus, in a privileged way, as if they were the only ones in the world. While we might accept such algorithmic orientation and very focused targeting for some specific domains of our life, decisions that impact our public goods are of a completely different nature (as has long been well known in economics). In this talk, mainly to convey the idea and for the sake of pedagogy, I will use the example of GPS and automatic navigation systems, which make important use of AI shortest-path algorithms to connect the departure and destination points in complex road networks in a way that is supposed to maximally satisfy the users. Taking for granted that most of these algorithms run in an individualistic manner, I will show how, departing from such an individualistic version of them, it is possible, through a succession of iterations and the definition of a cost function that takes into account the cumulated collective impact of the previous iterations, to gradually reach a much more satisfactory solution for the collective whole. I will finally discuss who should be in charge of writing these algorithms once they are dedicated to public goods and by definition escape the private sector.
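
    A toy sketch of the idea, assuming a made-up two-route network and a congestion-dependent cost (free-flow time scaled by the load already assigned): each driver still receives a shortest path, but the cost function accumulates the collective impact of earlier assignments, which spreads the traffic instead of piling everyone onto the same route.

    ```python
    import networkx as nx

    def build_graph(load):
        """Edge cost = free-flow time * (1 + load on that edge); `load` counts earlier drivers."""
        G = nx.DiGraph()
        base = {("A", "B"): 1.0, ("B", "D"): 1.0, ("A", "C"): 1.5, ("C", "D"): 1.5}
        for (u, v), t in base.items():
            G.add_edge(u, v, weight=t * (1.0 + load.get((u, v), 0)))
        return G

    load = {}
    routes = []
    for driver in range(10):
        # Each driver gets the currently cheapest path, but the cost already reflects
        # the cumulative load left by the previously routed drivers.
        G = build_graph(load)
        path = nx.shortest_path(G, "A", "D", weight="weight")
        routes.append(path)
        for u, v in zip(path, path[1:]):
            load[(u, v)] = load.get((u, v), 0) + 1

    print("drivers on A-B-D:", sum(r == ["A", "B", "D"] for r in routes))
    print("drivers on A-C-D:", sum(r == ["A", "C", "D"] for r in routes))
    ```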

  • Tarik Roukny - KUL

    The economics of AI

    In this part of the course, we will look into the organization and governance of AI. The rise of industrial AI has triggered several promises and challenges. Using a simple economic framework, we will discuss the determinants and implications of wide AI adoption across several industries, from the reallocation of resources such as human capital to the emergence of new economic activities such as digital platforms. Finally, we will explore the benefits and threats of several current and future policies.

  • Tom Lenaerts - ULB/VUB

    Social learning and the question of cooperation

    Classical AI focuses strongly on machine learning methods that employ supervised, unsupervised or reinforcement learning with the aim of creating an isolated intelligent system capable of optimally solving a highly specific task. Recent breakthroughs with deep RL in image classification or games like Go and Poker are but a few examples. It is no surprise that at some point methods would be developed that outperform humans in those tasks, simply because computational power has been growing immensely, much of the data underlying decision processes is just too difficult for humans to understand (see genetics for instance), and humans are not optimally designed individual learners focused on one highly specific task. What has been missing in the current hype is that human intelligence is social. We acquire many insights and skills from observing and imitating others. Learning thus happens in a social context, and learning outcomes change depending on the state of this context. In this session we focus on this type of learning and its underlying theory. Specifically, we focus on evolutionary mechanisms of learning in populations where the social context is defined by the presence of different, competing behaviors. The plan is to first introduce you to the basic principles of evolutionary game theory, playing with some simple tools to determine the equilibria in some classical games. Afterwards the evolutionary approach to social learning will be explained, and there will be an opportunity to run some Python notebooks to better grasp the mechanisms. This session will be illustrated with many cases in which the goal is to achieve cooperation among competing agents. As a setting we use social dilemmas like the prisoner's dilemma and public goods games. Some of the classic mechanisms that promote cooperation, such as reciprocity, network reciprocity and punishment, will be discussed. Some of our recent work on emotions, as well as on decision making in the climate change problem, will also be discussed, where the latter also involved real experiments with humans and hybrid experiments with agents.
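
    To accompany the evolutionary game theory part, here is a minimal numpy sketch of the replicator dynamics on the prisoner's dilemma, showing why defection takes over in the absence of additional mechanisms such as reciprocity or punishment; the payoff values are the textbook ones, not data from the session.

    ```python
    import numpy as np

    # Prisoner's dilemma payoff matrix: rows = my strategy (C, D), columns = opponent's.
    R, S, T, P = 3.0, 0.0, 5.0, 1.0          # reward, sucker, temptation, punishment
    A = np.array([[R, S],
                  [T, P]])

    def replicator_step(x, dt=0.01):
        """One Euler step of the replicator dynamics x_i' = x_i * (f_i - f_bar)."""
        f = A @ x                            # expected payoff of C and D against the population
        f_bar = x @ f                        # average payoff in the population
        return x + dt * x * (f - f_bar)

    x = np.array([0.9, 0.1])                 # start with 90% cooperators
    for _ in range(2000):
        x = replicator_step(x)
        x = x / x.sum()                      # numerical safety
    print("final (cooperators, defectors):", np.round(x, 3))  # defection takes over
    ```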

  • Milan Van den Heuvel - UGent

    Augmenting economic decisions with machine learning

    Digitalisation of our economic activities has produced an avalanche of data which has progressively been trickling down to researchers. With access to novel variables (e.g. geolocation, political sentiment, text messaging, etc.) and extreme granularity, these new data sets present huge opportunities for economic research. Their size and covariates are, however, hard to treat with traditional, theory-driven econometric techniques. As such, a recent strand of economic research explores how machine learning (ML) techniques can augment traditional econometric models when applied to large and unstructured data sets. In this lecture, we will cover the latest progress in this area and give a conceptual overview of how methods at the intersection of ML and econometrics provide great potential to augment economic decisions.

  • Gregory Lewkowicz - ULB

    Like a bull in a china shop: Computer scientists' solutions as legal problems

    The number of devices relying on information technologies and artificial intelligence to implement or enforce laws and regulations, as well as to help legal professionals, is increasing rapidly. The number of judicial cases in which computer scientists' solutions provided in this context are challenged is also increasing. More broadly, the use of AI in law raises ethical and legal questions in most countries. In this seminar, I will argue that the idea of a law that is Scientific, Mathematical, Algorithmic, Risk- and Technology-driven (SMART) is becoming a concrete reality that needs interdisciplinary scholarly attention to prevent computer scientists from acting like bulls in a china shop and to protect the rule of law. Relying heavily on case studies, I will show what types of transformations SMART devices instigate in the practice of law and why SMART law overhauls the way legal rules are enacted, implemented and enforced. I will conclude by proposing milestones regarding how an overly mathematized and computerized law can be held accountable and how lawyers and computer scientists can collaborate to implement checks and balances and guarantee the rule of law and fundamental rights in the transition to SMART law.

  • Mireille Hildebrandt - VUB

    Distant reading of legal text and access to law

    In this lecture I will explore the question of whether the use of analytics, such as NLP, on legal texts will increase or inhibit access to law. First, we will investigate the difference between (1) machine-readable access to legal texts and (2) human access to the law by those subject to its binding force. Under (1), the work of Franco Moretti will be employed to better understand what distant reading entails. Under (2), we will inquire what access to law means in a constitutional democracy. Second, the lecture will examine the use of NLP to predict judgements of the European Court of Human Rights, and assess the conclusions that can be drawn from this. Third and finally, we will assess how the use of this type of legal tech may affect access to justice, depending on what kinds of dependencies it creates.
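
    Purely as an illustration of the kind of NLP pipeline used in judgment-prediction studies (not the actual ECtHR experiments discussed in the lecture), here is a tiny bag-of-words classifier trained on invented case snippets with invented labels.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical miniature corpus standing in for case texts; labels indicate whether
    # a violation was found (1) or not (0). Real studies use thousands of ECtHR cases.
    texts = [
        "the applicant was detained without judicial review for months",
        "the court finds the detention lacked any legal basis",
        "domestic courts examined the complaint thoroughly and promptly",
        "the applicant received a fair hearing before an impartial tribunal",
        "prolonged detention without access to a lawyer",
        "the proceedings respected the rights of the defence",
    ]
    labels = [1, 1, 0, 0, 1, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    print(model.predict(["detention without legal basis or judicial review"]))     # likely 1
    print(model.predict(["a fair and impartial hearing by the domestic courts"]))  # likely 0
    ```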

  • Alexandre De Streel - Université de Namur

    AI to better enforce law: Use cases and legal issues

    This session will review the different possible uses of AI to improve the enforcement of the law, as well as the emerging legal issues. It will review AI for (i) better detecting violations of the law (such as Connect, used by the UK tax authority), (ii) better predicting future facts and behaviours (such as COMPAS, used by US criminal justice authorities to predict the probability of recidivism), (iii) better predicting legal outcomes (such as the many new tools offered by legal tech), (iv) adjudicating disputes, and (v) directly executing the law (such as smart contracts). For each use case, the session will analyse the risks and opportunities as well as the applicable existing legal rules and the ones that may emerge in the future.

  • Geert Van Calster - KUL

    Computer told No: Multi-jurisdictional insights into regulation of AI

    As is often the case with emerging technologies, after an initial hesitation, various jurisdictions have started to regulate AI, often in widely diverging directions. In this talk I aim to provide some structure to the global responses to regulating AI.

  • Nathalie Smuha - KUL

    Towards an EU ethical framework for Artificial Intelligence

    In its Communication of 25 April 2018, the European Commission set out a European strategy for Artificial Intelligence (AI). A core pillar of that strategy was the establishment of a sound ethical and legal framework for AI. Since April 2018, the Commission has undertaken a number of steps in that direction, which will be the focus of this seminar. After a short introduction to the ethics of AI and its role in EU policy-making, the seminar will particularly focus on the Commission's initiative to establish a High-Level Expert Group on Artificial Intelligence (AI HLEG) and on the work of this group. The AI HLEG was tasked with the drafting of two documents: AI Ethics Guidelines and Policy and Investment Recommendations for AI. The former document offers guidance to AI practitioners on how to ensure their AI is trustworthy, i.e. lawful, ethical and robust, building on a framework of fundamental rights. The latter document proposes recommendations to prepare Europe for the age of AI, as well as to enact regulatory measures that can safeguard individuals from the adverse impacts of AI. These documents, amongst other initiatives, are expected to constitute an important source of input to the EU's policy- and law-making processes on AI, and to shape the ethical framework for AI that the European Union is aiming to work towards, also at the global level.

  • Antoinette Rouvroy - Université de Namur

    Could algorithms be FAT (fair, accountable, transparent) and data protection compliant enough to preempt the rule of law?

    Fair (unbiased), accountable (justiciable), transparent (interpretable, explainable) and data-protection-compliant algorithms have become the holy grail for a growing community of computer scientists and designers eager to ensure that the intervention of algorithms in decision-making processes sustains the values of fundamental rights, democracy, and the rule of law. In this talk, I would like to reflect on the virtues, but also the difficulties and the limitations, of this "value by design" approach to the societal challenges raised by the rise of algorithmic governmentality. Among other issues, the talk will evoke the challenges facing whoever intends to "model" values with contested meanings like fairness, accountability and transparency (which, in most societies, depend on incompletely theorised agreements), and the tensions between concepts of optimisation and fairness; between algorithmic accountability and due process; between transparency (allowing, for example, traceability of data processing) and publicity (allowing deliberation in the public space); and between data protection and the pursuit of FAT algorithms. The talk will also raise a series of questions ensuing from the perception, dominant in the "FAT algorithms community", of fairness, accountability and transparency as properties of the "algorithmic black box" rather than as properties of the whole socio-technical assemblage (including tech designers, data, algorithms, users, ...) whose "behaviour" (and not the algorithm's "behaviour" only) is what, in the end, "produces" situations that may or may not be sustainable in terms of fundamental rights, democracy and the rule of law.

  • Gianluca Bontempi - ULB

    Embodying ethics into AI code: a data science perspective

    The growing adoption of AI (Artificial Intelligence) and Machine Learning (ML) technology in industry and society has led to an explosion of national and international initiatives to define ethical guidelines. But if AI/ML is a technology, and as such supposed to be neutral, why such an intense debate? Can a philosophical principle (like ethics) be coded into an AI/ML application? And how exactly? Most manifestos about ethics in AI are more a long list of vague statements about fairness and trustworthiness than formal specifications to be adopted by designers. It follows that such efforts more often deliver a set of moral principles than useful inputs for computer scientists and engineers. This talk will take a data science perspective to discuss how ethical principles may be embodied in actual AI/ML technology. We will first introduce the two major functionalities of an AI/ML agent, notably prediction and decision making. We will then discuss the relation between the most common ethical approaches and the computing principles underlying AI/ML. Finally, we will discuss some practical examples of AI/ML technologies (e.g. self-driving cars) and the related ethical concerns, and we will offer some recommendations on the basis of our experience as data scientists.

  • Gloria Gonzalez Fuster - VUB-LSTS

    Gender and AI: Issues, challenges & responses

    Gender-based discrimination is one of the most visible and most commented-upon negative impacts of Artificial Intelligence (AI). Algorithm-based solutions have been denounced, and occasionally scrapped, for taking biased decisions that appear to favour men to the detriment of women - be it due to the use of inappropriate or 'partial' training data, wrongly designed software, unconscious prejudice, persistent societal inequality, or a combination of all these factors. AI gender-related issues are in fact manifold, and also encompass other questions such as the reproduction or amplification of gender stereotypes through technology design (cf. softly spoken, obedient female-sounding smart assistants and home appliances), and, more generally, the possibilities for women in a digital world (still) predominantly designed by men. Importantly, gender-related challenges also concern the recognition and rights of non-binary and non-gender-conforming individuals in a reality based on pervasive data classification. This talk will provide an overview of the main challenges in this area and consider possible responses - in and beyond the law - that could contribute to the development of gender-responsive AI, supporting a future of digital gender agency.

  • Jean-Marie Gregoire M.D. and Cedric Gilon - ULB

    Paroxysmal atrial fibrillation diagnosis and forecast with Deep Neural Networks

    The number of patients suffering from atrial fibrillation (AF) is now estimated at 50 million worldwide, and AF prevalence is still increasing. In this context, AF screening is becoming a challenge. Machine learning algorithms are increasingly used for automatic AF diagnosis in medical and commercially available devices. The usefulness of diagnostic wearable devices should be evaluated. We conducted a retrospective study on a dataset of 10,484 ECG Holter monitorings. 209 patients presented paroxysmal AF (464 recorded AF episodes). The entirety of the raw data was analysed by a qualified nurse and a cardiologist. AF was considered from a validated ECG recording of more than 30 seconds. We used a digital sampling rate of 200 Hz. The data was anonymized and only RR intervals were used. We tested two different types of machine learning algorithms: a static one and a dynamic one. Results will be shown and discussed.
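
    The study's actual models are not reproduced here, so the following is only a hypothetical sketch of a "static" approach: summary features computed over a window of (simulated) RR intervals fed to a Random Forest, with AF crudely mimicked by a much higher beat-to-beat irregularity.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def make_window(af):
        """Simulated 60-beat window of RR intervals (seconds)."""
        if af:
            return rng.uniform(0.4, 1.1, size=60)         # irregularly irregular rhythm
        return 0.8 + 0.03 * rng.normal(size=60)           # regular sinus rhythm

    def features(rr):
        diff = np.diff(rr)
        return [rr.mean(), rr.std(),
                np.sqrt(np.mean(diff ** 2)),              # RMSSD-like irregularity measure
                np.mean(np.abs(diff) > 0.05)]             # fraction of large RR changes

    X, y = [], []
    for _ in range(1000):
        label = int(rng.integers(0, 2))
        X.append(features(make_window(label)))
        y.append(label)

    X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("toy AF-window detection accuracy:", clf.score(X_te, y_te))
    ```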

  • Yves Moreau - KUL

    Bayesian matrix factorization and deep learning for drug discovery and precision medicine

    Matrix factorization/completion methods provide an attractive framework to handle sparsely observed data, also called scarce data. A typical setting for scarce data is clinical diagnosis in a real-world setting. Not all possible symptoms (phenotype/biomarker/etc.) will have been checked for every patient. Deciding which symptom to check based on the already available information is at the heart of the diagnostic process. If genetic information about the patient is also available, it can serve as side information (covariates) to predict symptoms (phenotypes) for this patient. While a classification/regression setting is appropriate for this problem, it will typically ignore the dependencies between different tasks (i.e., symptoms). We have recently focused on a problem sharing many similarities with the diagnostic task: the prediction of biological activity of chemical compounds against drug targets, where only 0.1% to 1% of all compound-target pairs are measured. Matrix factorization searches for latent representations of compounds and targets that allow an optimal reconstruction of the observed measurements. These methods can be further combined with linear regression models to create multitask prediction models. In our case, fingerprints of chemical compounds are used as side information to predict target activity. By contrast with classical Quantitative Structure-Activity Relationship (QSAR) models, matrix factorization with side information naturally accommodates the multitask character of compound-target activity prediction. This methodology can be further extended to a fully Bayesian setting to handle uncertainty optimally, and our reformulation allows scaling up this MCMC scheme to millions of compounds, thousands of targets, and tens of millions of measurements, as demonstrated on a large industrial data set from a pharmaceutical company. We further extend these methods into deep learning architectures. We also show applications of this methodology to the prioritization of candidate disease genes and to the modeling of longitudinal patient trajectories. We have implemented our method as an open source Python/C++ library, called Macau, which can be applied to many modeling tasks, well beyond our original pharmaceutical setting. https://github.com/jaak-s/macau/tree/master/python/macau.
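
    As a simplified, non-Bayesian stand-in for the matrix completion setting described above (no side information and no MCMC, unlike Macau), here is an alternating least squares sketch on a simulated compound-by-target matrix with only a fraction of the entries observed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated compound-by-target activity matrix, 15% of entries observed.
    n_comp, n_targ, k = 200, 100, 5
    Y = rng.normal(size=(n_comp, k)) @ rng.normal(size=(n_targ, k)).T
    mask = rng.random(Y.shape) < 0.15

    def als(Y, mask, k=5, lam=0.1, iters=30):
        """Plain alternating least squares matrix completion."""
        U = rng.normal(scale=0.1, size=(Y.shape[0], k))
        V = rng.normal(scale=0.1, size=(Y.shape[1], k))
        for _ in range(iters):
            for i in range(Y.shape[0]):                  # update compound factors
                obs = mask[i]
                if obs.any():
                    Vo = V[obs]
                    U[i] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(k), Vo.T @ Y[i, obs])
            for j in range(Y.shape[1]):                  # update target factors
                obs = mask[:, j]
                if obs.any():
                    Uo = U[obs]
                    V[j] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(k), Uo.T @ Y[obs, j])
        return U, V

    U, V = als(Y, mask)
    rmse = np.sqrt(np.mean((Y[~mask] - (U @ V.T)[~mask]) ** 2))
    baseline = np.sqrt(np.mean(Y[~mask] ** 2))
    print("RMSE on unobserved entries:", round(rmse, 3), "vs. predict-zero baseline:", round(baseline, 3))
    ```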

  • Yvan Saeys - UGent

    Trajectory inference: a novel class of machine learning models to study cellular dynamical processes

    Recent advances in biomedical research allow measuring the immune system in unprecedented detail. Several single-cell omics technologies nowadays enable researchers to profile millions of individual cells at the genome, proteome and transcriptome level, resulting in large and high-dimensional datasets. In addition to applying standard machine learning techniques to get more insight into these data, novel machine learning approaches have also been developed to study the underlying molecular mechanisms in greater depth, for example revealing new insights into the underlying dynamics of cellular processes [1]. In this talk, I will introduce the novel field of trajectory inference methods, a new type of unsupervised learning techniques that aim to model the underlying dynamics of cellular developmental processes. I will also present our recent efforts in constructing a modular and reproducible framework to benchmark trajectory inference methods [2]. Next to introducing the algorithmic challenges, I will also give examples of how these methods are being increasingly used in health applications of artificial intelligence. In addition, I will also give an overview of the different types of computational challenges that result from the novel and expanding area of single-cell bioinformatics.
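
    A deliberately minimal sketch of the trajectory-inference idea for the simplest possible case, a single linear trajectory: order simulated cells along the first principal component and use the rank as pseudotime. Real trajectory inference methods handle branching and far more complex topologies; this is only an illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)

    # Simulated single-cell data: 300 cells along a differentiation process with latent
    # "progress" t; each of 50 genes responds linearly to t, plus noise.
    n_cells, n_genes = 300, 50
    t_true = rng.uniform(0, 1, n_cells)
    loadings = rng.normal(size=n_genes)
    X = np.outer(t_true, loadings) + 0.3 * rng.normal(size=(n_cells, n_genes))

    # Project cells on the first principal component and use the rank as pseudotime.
    pc1 = PCA(n_components=1).fit_transform(X).ravel()
    pseudotime = np.argsort(np.argsort(pc1)) / (n_cells - 1)

    corr = abs(np.corrcoef(pseudotime, t_true)[0, 1])
    print(f"correlation between recovered pseudotime and true progress: {corr:.2f}")
    ```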

  • Olivier Debeir - ULB

    Bigger networks need bigger data, but what if data quantity impairs data quality?

    Deep learning approaches have shown high efficacy for a wide range of applications, in particular in image processing, where until recently human vision was always more effective than machine approaches. This is perhaps changing, at least for some (not so) limited applications. These techniques benefit on the one hand from cheap, widely available computing power and on the other hand from the huge image databases that are collected everywhere. However, to achieve good performance, these deep learning-based algorithms need a huge amount of supervised data. In the medical domain, supervision may be costly in terms of expertise and time, resulting in incomplete or partially wrong supervision. We therefore discuss here the trade-off that exists between the quality of the available training data and their size. We will show on a practical example how incomplete or wrong supervision can degrade network performance and what correcting strategies can be implemented.
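
    A small, hedged experiment in the spirit of the talk (on synthetic tabular data rather than medical images, and with a Random Forest rather than a deep network): flip an increasing fraction of training labels to mimic partially wrong supervision and watch test accuracy degrade.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    X, y = make_classification(n_samples=4000, n_features=20, n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    for noise in (0.0, 0.1, 0.2, 0.4):
        y_noisy = y_tr.copy()
        flip = rng.random(len(y_noisy)) < noise            # randomly corrupt a fraction of labels
        y_noisy[flip] = 1 - y_noisy[flip]
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_noisy)
        print(f"label noise {noise:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
    ```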

  • Matvei Tsishyn - ULB-IRIDIA

    AI Shortest Path Algorithms for Intermodal Mobility

    In response to the exploding amount of data and the increasing complexity of the systems we are challenged to manage, numerous variations and specialisations of shortest-path algorithms were proposed to increase their efficiency. However, because these techniques are mostly based on preprocessing, they often fall apart when applied to intermodal and user-adapted search. In the first part of this lecture we will look at the basics of intermodal search and mention some ideas and limitations of the current speed-up techniques in this context. In the second part of this lecture, we will discuss how selfish routing can affect global welfare and propose some ideas about how to make shortest-path optimisation more altruistic.
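
    A toy sketch of intermodal search on a made-up network: nodes are (mode, location) pairs, transfers between modes are explicit edges with their own cost, and a standard shortest-path query then returns a route that mixes walking and bike sharing. Speed-up techniques and user preferences are deliberately left out.

    ```python
    import networkx as nx

    # Toy intermodal network: nodes are (mode, location) pairs; edges are either travel
    # within a mode or a transfer between modes at the same location.
    G = nx.DiGraph()
    walk = [("A", "B", 10), ("B", "C", 10), ("C", "D", 10)]    # walking times in minutes
    bike = [("A", "B", 4), ("B", "C", 4)]                      # bike sharing only covers A-C
    for u, v, t in walk:
        G.add_edge(("walk", u), ("walk", v), weight=t)
    for u, v, t in bike:
        G.add_edge(("bike", u), ("bike", v), weight=t)
    for loc in "ABCD":
        G.add_edge(("walk", loc), ("bike", loc), weight=2)     # unlock a bike: 2 min
        G.add_edge(("bike", loc), ("walk", loc), weight=1)     # drop the bike: 1 min

    path = nx.shortest_path(G, ("walk", "A"), ("walk", "D"), weight="weight")
    time = nx.shortest_path_length(G, ("walk", "A"), ("walk", "D"), weight="weight")
    print("best intermodal route:", path)
    print("total travel time:", time, "minutes")
    ```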

  • Tias Guns - VUB

    Data-driven approaches to mobility planning and transport planning

    The increasing availability of data on the movements of people and goods is creating exciting new opportunities for data mining and machine learning. In the first part of this seminar, I will talk about approaches to analysing large-scale GPS and ANPR data with the purpose of identifying mobility patterns that can support policy making; this has challenges both on the algorithmic side and on the visualisation and interpretation side. The second part is about transport planning, and more specifically the ill-defined and subjective process of formulating all the hard and soft requirements that transportation companies have in their vehicle routing planning. We investigate a hybrid machine learning + combinatorial optimisation approach where we first learn the preferences of the planners from historic solutions, and then find the most preferred VRP solution. This too poses interesting challenges on the algorithmic as well as the 'current practice' side.

  • Thomas Stuetzle and Federico Pagnozzi - ULB

    Automated Design of Metaheuristic Algorithms: Methods, Applications and Perspectives

    The design and development of metaheuristic algorithms can be time-consuming and difficult for a number of reasons, including the complexity of the problems being tackled, the large number of degrees of freedom when designing an algorithm and setting its numerical parameters, and the difficulties of algorithm analysis due to heuristic biases and stochasticity. Commonly, this design and development is done manually, mainly guided by the expertise and intuition of the algorithm designer. However, the advancement of automatic algorithm configuration methods offers new possibilities to make this process more automatic, avoid some methodological issues, and at the same time improve the performance of metaheuristic algorithms. In this talk we review recent advances in automatic metaheuristic algorithm configuration and design. We describe the main existing automatic algorithm configuration techniques and discuss some of the main uses of such techniques, ranging from the mere optimization of the performance of already developed algorithms to their pivotal role in modifying the way metaheuristic algorithms are designed and developed.
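
    A minimal sketch of the automatic-configuration idea, with a stand-in "algorithm" whose cost is simply a noisy function of its two parameters: sample random configurations, evaluate them on a set of training instances, and keep the best on average. Real configurators such as irace add racing, statistical testing and iterated refinement; none of that is shown here.

    ```python
    import random

    random.seed(0)

    def run_algorithm(config, instance_seed):
        """Stand-in for running a metaheuristic with a given configuration on one training
        instance and returning its solution cost (lower is better)."""
        rng = random.Random(instance_seed)
        temp, alpha = config["temperature"], config["cooling"]
        return (temp - 5.0) ** 2 + 20 * (alpha - 0.9) ** 2 + rng.gauss(0, 1)

    def random_configurator(n_configs=50, n_instances=10):
        """Minimal automatic configurator based on pure random search."""
        best, best_cost = None, float("inf")
        for _ in range(n_configs):
            config = {"temperature": random.uniform(0.1, 20.0),
                      "cooling": random.uniform(0.5, 0.999)}
            cost = sum(run_algorithm(config, i) for i in range(n_instances)) / n_instances
            if cost < best_cost:
                best, best_cost = config, cost
        return best, best_cost

    best, cost = random_configurator()
    print("best configuration:", {k: round(v, 3) for k, v in best.items()}, "cost:", round(cost, 2))
    ```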

  • Pierre Schaus - UCL

    Discrete and Combinatorial Optimization for Mining and Learning Problems over Structured Data

    In this talk I will present an overview of discrete optimization techniques for solving various data-science problems, such as the discovery of regions of interest from trajectory data, matrix mining and bi-clustering, optimal decision-tree inference, optimal rule-list inference, frequent sequence mining, frequent itemset mining, and block modelling. Some of the optimization techniques we will use are constraint programming, branch and bound, integer linear programming, and greedy or local search algorithms.

  • Kenneth Sorensen - University of Antwerp

    Solving very large vehicle routing problems using a knowledge-based approach

    We take a fresh look at the development of vehicle routing algorithms. First, we use data mining to gain knowledge on what distinguishes good solutions from not-so-good solutions. We then build an algorithm that is entirely based on a small set of well-chosen, complementary, and efficiently implemented local search operators and apply that knowledge to make them work effectively. Finally, the efficiency of the algorithm is improved by heavy heuristic pruning. The result is an algorithm that is both very fast and very effective, and can be scaled to solve instances that are orders of magnitude larger than those considered "large" in the literature.
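
    As an example of the kind of simple, efficiently implemented local search operator such an algorithm can build on, here is a classic 2-opt improvement loop on a random single-vehicle tour; it is an illustration only (no data mining, no pruning), not the presented algorithm.

    ```python
    import math
    import random

    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(60)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def route_length(route):
        return sum(dist(points[route[i]], points[route[(i + 1) % len(route)]])
                   for i in range(len(route)))

    def two_opt(route):
        """Classic 2-opt local search: reverse a segment whenever that shortens the tour."""
        improved = True
        while improved:
            improved = False
            for i in range(1, len(route) - 1):
                for j in range(i + 1, len(route)):
                    a, b = route[i - 1], route[i]
                    c, d = route[j], route[(j + 1) % len(route)]
                    # gain of replacing edges (a,b) and (c,d) by (a,c) and (b,d)
                    if dist(points[a], points[c]) + dist(points[b], points[d]) < \
                       dist(points[a], points[b]) + dist(points[c], points[d]) - 1e-12:
                        route[i:j + 1] = reversed(route[i:j + 1])
                        improved = True
        return route

    route = list(range(len(points)))
    print("random tour length:", round(route_length(route), 2))
    print("after 2-opt:       ", round(route_length(two_opt(route)), 2))
    ```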

  • Patrick De Causmaecker - KULeuven

    Local search based on approximate solutions and polynomially solvable subproblems

    We present the results of a study on the Hamiltonian Completion Problem, where approximate solutions allowed us to generate polynomially solvable subproblems, which could then be used in an efficient local search approach that can handle larger instances of the problem than are normally tackled in the literature. We discuss how this approach generalises, based on an analysis of the instance space of the problem. As a second case study, we present recent results on nurse rostering, where polynomially solvable instances play a similar role.

  • Mauro Birattari - ULB

    Automatic Design of Collective Behaviors for Robot Swarms: A Modular Approach

    Automatic design is a promising approach to the design of control software for robot swarms. In an automatic design method, the design problem is cast into an optimization problem and is addressed using an optimization algorithm. Most of the research devoted so far to the automatic design of robot swarms is based on so-called neuro-evolutionary robotics: robots are controlled by neural networks that are obtained via artificial evolution performed on computer-based simulations. Although notable results have been obtained using this approach, neuro-evolutionary robotics is unfortunately prone to produce behaviors that suffer from the reality gap: the possibly subtle but unavoidable differences between simulation models and reality. Indeed, a noticeable performance drop is typically observed when behaviors developed in simulation are ported to reality. In my seminar, I will discuss a novel approach that produces control software for robot swarms by combining preexisting behavioral modules and fine-tuning their parameters. This approach appears to be intrinsically more robust to the reality gap than neuro-evolution.

  • Bram Vanderborght - VUB

    Merging embodied intelligence with artificial intelligence for Human-Robot collaboration

    One of the primary aims in robotics is to make robots follow the rules that we want, so the robot has to have a kind of intelligence. The field of artificial intelligence is richly studied and has already led to important applications, but it still has many shortcomings. Recently an increasing number of researchers have realized that in animals it is not only the brain that creates the intelligence of the body, but that morphology and biomechanics have a great impact on the way animals and humans think and move. Different projects focus on exploring new forms of actuators and bodies able to interact with unknown dynamic and social environments. The idea is that the embodiment of the system allows a behavior that is not strictly programmed, but robustly emerges from the interaction of the various components of the body, its appropriate control algorithm and the environment. So, by a smart design of the body and the actuators, part of the computational intelligence can be outsourced to the embodied intelligence. An equilibrium between those two needs to be found. Embodied intelligence alone is good for demonstrating principles, but not enough to make an autonomous robot: passive walkers, for example, are highly energy efficient but of no practical use since they can only walk at one speed. Manipulators with compliant actuators are necessary for safe behaviour, but can become very unsafe due to the energy-storing capabilities of the elastic elements. So in both examples a controller is required to take care of this. Although nature provides a rich source of inspiration, the way robots and humans work is still very different, with complementary strengths. So the vision of Brubotics is to collaborate with robots for applications in health and manufacturing, such as exoskeletons, cobots, social robots, prostheses, etc.

  • Tony Belpaeme - UGent-imec

    Social robots: quirky gadgets, scientific tools, life changers

    Social robots are robots which manipulate the social world, rather than the physical world of traditional robotics. They are designed to tap into our desire for social interaction. Because social interaction comes naturally to all of us, there is a misconception that building artificial social interaction is easy. This talk will argue that social robotics is on the one hand one of the hardest challenges in robotics, but will at the same time show that relatively simple designs and behaviours can go a long way in achieving social human-robot interaction. To illustrate the power of social robots, the talk will give examples of social robots being used in education, therapy and entertainment.

  • Ann Nowé - VUB

    Reinforcement Learning: the basics and beyond

    In this lecture the basics of Reinforcement Learning (RL) will be introduced. More advanced topics, including Deep RL, multi-criteria RL and multi-agent settings, and their main challenges from a Reinforcement Learning point of view, will also be covered. The lecture therefore addresses students with no prior knowledge of RL as well as those who know the basics.
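
    For readers new to RL, here is a minimal tabular Q-learning sketch on a made-up five-state corridor task; deep RL replaces the table with a neural network, and the multi-criteria and multi-agent settings discussed in the lecture go well beyond this.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny deterministic corridor: states 0..4, actions 0 (left) and 1 (right);
    # reaching state 4 gives reward 1 and ends the episode.
    n_states, n_actions = 5, 2

    def step(s, a):
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        done = s_next == n_states - 1
        return s_next, reward, done

    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.5, 0.9, 0.1

    for episode in range(500):
        s, done = 0, False
        while not done:
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
            s_next, r, done = step(s, a)
            # Q-learning update: move Q(s,a) towards r + gamma * max_a' Q(s',a')
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() * (not done) - Q[s, a])
            s = s_next

    # Greedy action per state (1 = right); the terminal state is never updated.
    print(Q.argmax(axis=1))
    ```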

  • Thierry Dutoit - Université de Mons - Numediart

    Is creativity soluble in artificial intelligence?

    Digital creation has expanded considerably in recent years, reaching art in all its forms and involving various media. The web, smartphones, digital and 3D media, augmented universes and networked human-machine systems intertwine, share and multiply the designer's palette of colours. Artificial intelligence has a special place in this panel, for the creation, interpretation and dissemination of works alike. What impact do these technologies have on our culture, on art, on the way we consume it and express ourselves? A jazz student, professor at UMONS and president of NUMEDIART, the research institute of the University of Mons whose mission is to provide training and research activities in the field of creative technologies, Thierry Dutoit analyses with us the visible parts of this cultural iceberg, and seeks to reveal its underwater parts.

  • Yves-Alexandre de Montjoye - UCL

    The search for anonymous data: Balancing privacy and utility in the modern age

    In this talk, I will first show how historical anonymization methods fail on modern large-scale datasets, including how to quantify the risk of re-identification, how noise addition does not fundamentally help, and, finally, recent work on how the incompleteness of datasets or sampling methods can be overcome. This has led to the development of online anonymization systems, which are becoming a growing area of interest in industry and research. Second, I will discuss the limits of these systems and, more specifically, new research attacking a dynamic anonymization system called Diffix. I will describe the system, our noise-exploitation attacks, and their efficiency against real-world datasets. I will finally conclude by discussing the potential of online anonymization systems moving forward.
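
    A toy sketch of the kind of re-identification ("unicity") measurement behind the first part of the talk, on simulated pseudonymized traces: how often do a few known points single out one individual? The data and numbers are purely illustrative.

    ```python
    import random

    random.seed(0)

    # Toy "mobility" dataset: each of 2000 users visits 20 of 300 possible
    # location-hour cells; no names are attached, only the traces.
    n_users, n_cells, visits_per_user = 2000, 300, 20
    traces = [frozenset(random.sample(range(n_cells), visits_per_user)) for _ in range(n_users)]

    def unicity(traces, p, trials=500):
        """Fraction of sampled users whose trace is the only one containing p randomly
        chosen points from it (the re-identification risk from p known points)."""
        unique = 0
        for _ in range(trials):
            trace = random.choice(traces)
            known_points = set(random.sample(sorted(trace), p))
            matches = sum(1 for t in traces if known_points <= t)
            unique += (matches == 1)
        return unique / trials

    for p in (1, 2, 3, 4):
        print(f"knowing {p} points re-identifies ~{unicity(traces, p):.0%} of users")
    ```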

  • Souhaib Ben Taieb - Université de Mons & Monash University (Australia)

    Hierarchical Time Series Forecasting

    We are in the era of big data, where massive amounts of time series data are continuously produced and stored in many areas of scientific, industrial and economic activity. Producing accurate forecasts from these big time series data is essential for optimal decision making. We will consider the problem of hierarchical time series forecasting, where time series data is represented in a hierarchical or grouped structure; e.g., time series representing a quantity for a whole country disaggregated by states, cities and homes. Hierarchical forecasting requires not only good prediction accuracy at each level of the hierarchy, but also coherency between the different levels: the property that forecasts add up appropriately across the hierarchy. We will present learning algorithms to generate both point and probabilistic predictions for hierarchical time series data. Finally, we will discuss experimental results using both simulated and real-world data.
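
    A small numpy sketch of the coherency requirement, assuming the simplest possible hierarchy (one total, two bottom series) and plain OLS reconciliation: incoherent base forecasts are projected onto the subspace of forecasts that add up correctly. More sophisticated reconciliation methods weight this projection differently.

    ```python
    import numpy as np

    # Two-level hierarchy: a total series that is the sum of two bottom series A and B.
    # S maps bottom-level values to the full vector [total, A, B].
    S = np.array([[1, 1],
                  [1, 0],
                  [0, 1]], dtype=float)

    # Incoherent base forecasts produced independently at each level (10.0 != 4.2 + 5.1).
    y_hat = np.array([10.0, 4.2, 5.1])

    # OLS reconciliation: y_tilde = S (S'S)^{-1} S' y_hat, i.e. an orthogonal projection
    # of the base forecasts onto the coherent subspace.
    P = S @ np.linalg.inv(S.T @ S) @ S.T
    y_tilde = P @ y_hat

    print("reconciled forecasts [total, A, B]:", np.round(y_tilde, 3))
    print("coherent?", bool(np.isclose(y_tilde[0], y_tilde[1] + y_tilde[2])))
    ```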

  • Hendrik Blockeel - KUL

    Multi-directional ensembles of decision trees

    Ensembles of decision trees are among the most frequently used predictive models, with Random Forests and Gradient Boosted Trees as their most successful representatives. In this talk, I will discuss another instantiation: multi-directional ensembles of decision trees, as implemented in the MERCS approach. In a multi-directional ensemble, different trees may have different targets, so that the ensemble as a whole can be used to predict any variable from any other variable. This makes MERCS models very flexible in terms of the tasks they can perform. At the same time, they inherit the efficiency, accuracy, versatility and ease-of-use of decision tree ensembles. I will discuss the advantages and challenges of learning MERCS models and illustrate how they can be used for a variety of tasks.
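
    A simplified sketch of the multi-directional idea (not the MERCS system itself): train one decision tree per target variable, each using the remaining variables as inputs, so that any variable can be predicted from the others with the same collection of trees.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)

    # Toy tabular data with three related variables.
    n = 1000
    a = rng.normal(size=n)
    b = 2 * a + 0.1 * rng.normal(size=n)
    c = a - b + 0.1 * rng.normal(size=n)
    data = np.column_stack([a, b, c])
    names = ["a", "b", "c"]

    # One tree per target, each using all other variables as inputs.
    models = {}
    for t in range(data.shape[1]):
        inputs = [j for j in range(data.shape[1]) if j != t]
        tree = DecisionTreeRegressor(max_depth=6).fit(data[:, inputs], data[:, t])
        models[t] = (inputs, tree)

    def predict(target_name, known_values):
        """Predict any variable from the others, e.g. predict('b', {'a': 1.0, 'c': -1.0})."""
        t = names.index(target_name)
        inputs, tree = models[t]
        x = np.array([[known_values[names[j]] for j in inputs]])
        return tree.predict(x)[0]

    print("predicted b given a=1, c=-1:", round(predict("b", {"a": 1.0, "c": -1.0}), 2))
    print("predicted a given b=2, c=-1:", round(predict("a", {"b": 2.0, "c": -1.0}), 2))
    ```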