The modern world is full of artificial, abstract environments that challenge people's natural intelligence. The goal of our research is to develop artificial intelligence that helps people master these challenges. We are currently active in the following research topics:
LIA has been active in this topic for many years, including work on preference-based search and truthful reputation systems. See here for an overview of earlier research. We are currently exploring how to aggregate ratings and rankings obtained from multiple sources. We are also developing systems for generating history-dependent recommendations, in particular for news.
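One standard way to aggregate rankings from multiple sources is the Borda count, sketched below on toy data; this is an illustration of the general problem, not necessarily the method used in our projects.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate several best-first rankings via Borda count.

    In each ranking of n items, the item at position p (0-based)
    earns n - p points; items are returned best-first by total score.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - pos
    return sorted(scores, key=lambda item: -scores[item])

# Three hypothetical sources rank the same four items differently.
sources = [
    ["a", "b", "c", "d"],
    ["b", "a", "d", "c"],
    ["a", "c", "b", "d"],
]
print(borda_aggregate(sources))  # -> ['a', 'b', 'c', 'd']
```

Borda count is one point in a large design space; other aggregation rules (e.g. median ranks or pairwise-majority methods) trade off robustness and manipulability differently.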
Another project aims at personalizing the rankings constructed by review sites. By analyzing review texts, we discover the subjective sentiment expressed about different facets of a product, which allows us to personalize the ranking according to the relative importance a user assigns to those facets.
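The core idea of facet-based personalization can be sketched as a weighted sum of per-facet sentiment scores; the products, facets, and weights below are invented for illustration.

```python
def personalized_ranking(products, weights):
    """Rank products by a weighted sum of per-facet sentiment scores.

    `products` maps product name -> {facet: sentiment score in [0, 1]};
    `weights` expresses the user's relative importance of each facet.
    """
    def utility(name):
        return sum(weights.get(facet, 0.0) * score
                   for facet, score in products[name].items())
    return sorted(products, key=utility, reverse=True)

# Hypothetical sentiment scores mined from hotel reviews.
hotels = {
    "Alpha": {"location": 0.9, "service": 0.4},
    "Beta":  {"location": 0.5, "service": 0.9},
}
# A user who cares mostly about service sees Beta ranked first.
print(personalized_ranking(hotels, {"location": 0.2, "service": 0.8}))
```

The same data yields a different ranking for a location-focused user, which is the point of personalizing over facets rather than over a single aggregate score.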
We develop techniques for automatically analyzing sentiment in natural language texts, taking into account context and multi-faceted emotions. We use such analysis to better understand interactions in social networks, and to develop new forms of recommendation systems based on review texts.
Information about the world is often distributed among multiple sources, such as different experts or sensors in a sensor network. We study large-scale open systems for aggregating such information. In the Opensense project, we consider aggregating information from mobile and fixed air quality sensors into a single coherent air quality map. For sentiment analysis, we develop games that extract human judgement on the sentiment features in texts. This provides data for active learning algorithms as well as benchmarks for evaluating learning results. We thus hope to replace the massive hand-labeled datasets commonly used in machine-learning approaches.
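A minimal way to fuse scattered point measurements into a map estimate is inverse-distance weighting, sketched below; this is a simple baseline for illustration, not the model used in Opensense, and the coordinates and readings are invented.

```python
import math

def idw_estimate(readings, query, power=2.0, eps=1e-9):
    """Inverse-distance-weighted estimate at `query`.

    `readings` is a list of ((x, y), value) pairs from fixed or mobile
    sensors; nearer sensors receive larger weights, so the estimate
    interpolates smoothly between measurement points.
    """
    num = den = 0.0
    for (x, y), value in readings:
        d = math.hypot(x - query[0], y - query[1]) + eps
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Two hypothetical air-quality sensors; query a point near the first.
readings = [((0, 0), 30.0), ((10, 0), 50.0)]
print(idw_estimate(readings, (1, 0)))  # close to the nearer sensor's 30
```

Evaluating the estimator on a grid of query points yields a coherent map from pointwise sensor data, which is the aggregation problem in its simplest form.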
Another important issue is providing incentives for contributing high-quality, truthful information. We have developed game-theoretic schemes for sensor networks and for human-computation platforms such as Amazon Mechanical Turk.
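A classical building block for truthful elicitation is a strictly proper scoring rule, such as the quadratic (Brier) rule sketched below; this illustrates the incentive principle with an invented example, not one of our published schemes.

```python
def quadratic_score(report, outcome):
    """Quadratic (Brier-style) scoring rule over discrete outcomes.

    Paying a forecaster this score makes reporting one's true belief
    the strategy that maximizes expected payment (strict properness).
    """
    return 2 * report[outcome] - sum(p * p for p in report.values())

def expected_score(report, belief):
    """Expected payment for `report` when outcomes follow `belief`."""
    return sum(belief[o] * quadratic_score(report, o) for o in belief)

# A hypothetical forecaster who believes rain has probability 0.7.
belief = {"rain": 0.7, "sun": 0.3}
truthful = expected_score(belief, belief)
shaded = expected_score({"rain": 0.5, "sun": 0.5}, belief)
print(truthful > shaded)  # truthful reporting scores higher in expectation
```

The harder setting we study is when no ground-truth outcome is ever observed, which is where game-theoretic peer-based schemes come in.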
Autonomous agents are becoming ubiquitous as aids for managing the complexities of the modern world. Our goal is to make such agents behave intelligently. We have a long-standing line of research in distributed constraint optimization for solving coordination problems among distributed agents, and we have recently developed more efficient techniques based on sampling, as well as a decentralized protocol for learning to coordinate the use of resources.
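The flavor of a constraint-optimization coordination problem can be shown on a tiny invented instance, solved here by centralized brute force purely for illustration; DCOP algorithms compute the same optimum via message passing among the agents themselves.

```python
from itertools import product

def solve_instance(domains, constraints):
    """Find the joint assignment minimizing total constraint cost.

    `domains` maps variable -> list of values; each constraint maps a
    full assignment (dict) to a cost. Exhaustive search, so only
    suitable for toy instances.
    """
    variables = list(domains)
    best, best_cost = None, float("inf")
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        cost = sum(c(assignment) for c in constraints)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

# Two agents pick meeting slots: cost 1 unless they agree,
# and agent "a" pays 1 extra for the late slot.
domains = {"a": ["early", "late"], "b": ["early", "late"]}
constraints = [
    lambda s: 0 if s["a"] == s["b"] else 1,
    lambda s: 1 if s["a"] == "late" else 0,
]
print(solve_instance(domains, constraints))  # both choose "early", cost 0
```

In the distributed setting each agent knows only its own domain and the constraints it participates in, which is what makes the problem hard and sampling-based methods attractive.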
Reinforcement learning is the problem of learning how to act in an unknown environment through interaction and limited reinforcement. It is one of the most general problems in AI, with applications such as game playing, resource management, optimization and optimal control. We are interested in deriving computationally efficient algorithms for reinforcement learning, particularly for large environments or problems with multiple agents.
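The basic interaction loop can be sketched with tabular Q-learning on a toy chain world (a standard textbook algorithm, invented here for illustration; our research concerns settings far larger than this).

```python
import random

def q_learning(n_states=4, episodes=3000, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a toy chain: states 0..n_states-1,
    actions 0 (left) and 1 (right); reaching the rightmost state
    yields reward 1 and ends the episode."""
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] >= q[s][1] else 1
            s2 = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard Q-learning update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = q_learning()
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(len(q) - 1)]
print(policy)  # the learned greedy policy heads right toward the reward
```

The computational-efficiency question we study is precisely what happens when the state table above is replaced by an environment too large to enumerate, or shared by multiple learning agents.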
Using computational game theory, we analyze selfish behavior of intelligent agents in strategic environments, and design mechanisms that incentivize truthful behavior in various settings, including auctions and truthful information elicitation.
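The canonical example of a truthful mechanism is the second-price (Vickrey) auction, sketched below on invented bids: the highest bidder wins but pays only the second-highest bid, which makes bidding one's true value a dominant strategy.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction.

    `bids` maps bidder -> bid; returns (winner, price), where the
    winner is the highest bidder and the price is the second-highest
    bid. Decoupling the price from the winner's own bid removes any
    incentive to misreport one's value.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

print(vickrey_auction({"alice": 10, "bob": 7, "carol": 4}))  # ('alice', 7)
```

Mechanism design generalizes this idea: choose payment rules so that, in equilibrium, selfish agents find truthful behavior optimal, whether they are bidding in an auction or reporting information.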
Service-oriented computing is a way to implement systems of multiple agents. In the SOSOA project, we are developing new techniques for automated service composition. See here for earlier work in the area of web services and agents.
Research in Natural Language Processing at LIA focuses on text mining (knowledge extraction from textual data), automatic production of syntactic tools, and evaluation of NLP tools. More information can be found here.