The modern world is full of artificial, abstract environments that challenge people's natural intelligence. The goal of our research is to develop artificial intelligence that helps people master these challenges.
We are currently active in the following research topics:
Earlier, we also worked extensively in case-based reasoning and in model-based and qualitative reasoning.
- Distributed constraint satisfaction and optimization: algorithms for solving constraint satisfaction and optimization problems that are distributed among multiple agents, possibly with self-interest and privacy considerations.
- Technologies for search, recommendation and reputation: intelligent methods for applications on the WWW
- Software agents and web services
- Natural language processing
Constraint satisfaction and optimization is a general framework for expressing many combinatorial problems, and is widely used, particularly for resource allocation, scheduling and planning applications. Often, such problems are distributed over multiple agents that control parts of the variables and constraints, and might want to apply different methods to solve them. Over the past decade, we have developed a series of methods for distributed constraint satisfaction and optimization.
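To make the framework concrete, here is a minimal illustrative sketch (not taken from any LIA system) of how a small scheduling problem can be cast as a constraint satisfaction problem and solved by plain chronological backtracking. The meeting names, time slots and conflict structure are invented for the example.

```python
# Three meetings must each be assigned a time slot; meetings that share
# a participant (listed in `conflicts`) may not be held at the same hour.
# Plain chronological backtracking over the variables.

def backtrack(assignment, variables, domains, conflicts):
    """Assign a value to every variable so that no conflicting pair
    shares a value; return a complete assignment or None."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(other) != value
               for other in conflicts.get(var, [])):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, conflicts)
            if result is not None:
                return result
            del assignment[var]          # undo and try the next value
    return None                          # no consistent assignment exists

variables = ["m1", "m2", "m3"]
domains = {v: [9, 10, 11] for v in variables}    # candidate hours
conflicts = {"m1": ["m2"], "m2": ["m1", "m3"], "m3": ["m2"]}
solution = backtrack({}, variables, domains, conflicts)
print(solution)  # e.g. {'m1': 9, 'm2': 10, 'm3': 9}
```

In a distributed setting, each meeting organizer would control its own variable and only the conflict constraints would be shared between agents, which is precisely what makes privacy and self-interest relevant.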
- Asynchronous Aggregation Search (AAS) was our first distributed CSP algorithm. It forms aggregates of values and thus dramatically reduces the complexity of earlier methods such as asynchronous backtracking (ABT). By the same token, it also allows constraints to be controlled by single agents.
- In the context of AAS, we also developed new techniques for variable reordering and for maintaining arc consistency. A cleaned-up overview appears here.
- We considered coordination problems in large projects involving multiple agents, such as the construction of the LHC at CERN. We developed new techniques based on local search and validated them on real planning data from CERN. Carlos Eisenberg has started a company, Synctec, based largely on this work.
- We have considered the question of how to minimize the amount of information that must be collected from agents to compute a consistent or optimal solution to a constraint satisfaction problem. We have defined the framework of open constraint programming to characterize this situation: variable domains are not known a priori and are only discovered through queries to agents. In most cases, even optimal solutions require only partial knowledge of the domains. For optimization, we have a provably optimal algorithm, as well as a distributed version of it.
- We have developed a series of algorithms based on dynamic programming. They exploit the fact that distributed problems often have relatively low treewidth and are thus well suited to dynamic programming methods. Starting from the basic algorithm, DPOP, we have developed a series of algorithms that build on it and address various shortcomings of dynamic programming. This framework has turned out to be many orders of magnitude more efficient than search-based approaches and has allowed us to solve much larger problems than any earlier algorithm. An overview of the extensive work on this topic can be gained from the publications of Adrian Petcu.
- In ongoing work, we are developing new techniques for dealing with uncertainty and problem changes within the DPOP framework, as well as techniques for running DPOP asynchronously.
Additionally, we have developed and publicly released FRODO, a "FRamework for Open/Distributed Optimization" that simulates, in a single Java virtual machine, a multiagent platform geared towards the implementation and testing of (distributed) optimization algorithms. The framework comes with two implemented algorithms (the Distributed Breakout Algorithm, DBA, and DPOP). Two testbeds are also available: one for resource allocation in a sensor network, and one for meeting scheduling problems. Both have random problem generators and GUIs to display problem instances. More details, papers, and source downloads are available on the FRODO page.
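The dynamic-programming idea behind DPOP can be illustrated with a toy sketch. On a tree-shaped problem, each leaf sends its parent a UTIL message reporting, for every value of the parent, the best utility the leaf can achieve; the root aggregates these messages, picks its own value, and the choices propagate back down in a VALUE phase. The two-value domain and the utility function below are invented for the illustration and do not come from the actual DPOP implementation.

```python
# Toy UTIL/VALUE propagation on a star-shaped problem: root x1 with
# leaves x2 and x3, and a pairwise utility that rewards disagreement.

values = [0, 1]                      # domain shared by all variables

def util(parent_val, child_val):     # illustrative pairwise utility
    return 1 if parent_val != child_val else 0

# UTIL phase: each leaf sends max-marginalized utilities to the root.
def util_message():
    return {pv: max(util(pv, cv) for cv in values) for pv in values}

children = ["x2", "x3"]
messages = {c: util_message() for c in children}

# Root aggregates the messages and picks its optimal value.
root_val = max(values, key=lambda v: sum(m[v] for m in messages.values()))

# VALUE phase: each child best-responds to the root's choice.
assignment = {"x1": root_val}
for c in children:
    assignment[c] = max(values, key=lambda cv: util(root_val, cv))
print(assignment)
```

The key point is that the number of messages is linear in the number of agents, while their size depends on the treewidth, which explains why low-treewidth distributed problems suit this approach so well.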
- JCL - Java constraint library
- COCONUT - COntinuous COnstraints Updating the Technology
- Constraint Satisfaction for Case Adaptation
- Constraint Satisfaction Techniques for Centralized and Distributed numerical problems
- SpaceSolver - Approximate solution spaces using an Internet-based environment
- CO2/3/4 - Constraint based configuration
- Global consistency with continuous constraints
- Abstractions in CSP
- IDIOM - Interactive Design using Intelligent Objects and Models
- CADRE - Case-based Architectural Design
- DISA - Distributed Interactive Scheduling with Abstractions
- Qualitative spatial reasoning based on algebraic topology
One of the most important tasks on the WWW is to find items that best satisfy a set of preferences, a problem we call preference-based search. Most sites require the user to specify a fixed set of criteria and then retrieve the most preferred items from a database. However, due to various shortcomings of human decision-making, people are generally unable to state their preferences accurately up front. Studies show that only a small portion of users manage to find their most preferred options; the rest end up with suboptimal results. We have developed new mixed-initiative tools using example-critiquing and suggestions, and shown through user studies that they dramatically increase decision accuracy. For more information, see the page on preference-based search with suggestions by Paolo Viappiani.
Often, products or other choices cannot be characterized in a vocabulary that would allow people to express their preferences, or people might not even be aware of their preferences. In such cases, they would like to use a recommender system that suggests items they might like, based on their ratings of other items. A major problem in existing recommendation systems is the cold-start problem: a relatively large number of ratings is required before accurate recommendations can be made. We have developed (and patented) a new technique called ontology filtering that reduces the required information to a practically manageable amount. We are in the process of commercializing it through a startup company, Prediggo.
Another use of ratings of products and services is to indicate their reputation for quality. Reputation feedback systems are becoming increasingly important in helping people deal with the anonymity of interactions over the internet. Reputation mechanisms can play two roles: signaling the true quality of products or services, and sanctioning bad quality or dishonest behavior.
We are addressing the design of trustworthy reputation mechanisms that are:
- incentive-compatible: explicit rewards make it in the best interest of rational agents to report the truth, so that honest reporting becomes a Nash equilibrium of the system;
- collusion-resistant: small coalitions of rational agents have no interest in coordinating on lying strategies;
- robust: small amounts of noise and error do not perturb the properties of the mechanism.
Here is an overview of some of our results!
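The incentive-compatibility property can be illustrated with a proper scoring rule, a standard building block for truthful-reporting rewards (the concrete rule and numbers below are illustrative, not our mechanism). Under the quadratic (Brier) scoring rule, an agent who believes a binary outcome occurs with probability p maximizes its expected reward by reporting exactly p, so truth-telling is a best response.

```python
# Demonstrates that the quadratic (Brier) scoring rule is proper: the
# expected score is maximized by reporting one's true belief.

def quadratic_score(report, outcome):
    """Brier-style score for reporting probability `report` of a
    binary outcome (0 or 1); higher is better."""
    return 1.0 - (outcome - report) ** 2

def expected_score(report, belief):
    """Expected score when the outcome truly occurs with probability
    `belief`."""
    return (belief * quadratic_score(report, 1)
            + (1 - belief) * quadratic_score(report, 0))

belief = 0.7
reports = [i / 100 for i in range(101)]        # candidate reports
best = max(reports, key=lambda r: expected_score(r, belief))
print(best)  # the truthful report 0.7 maximizes expected score
```

Peer-prediction style mechanisms build on this idea by scoring an agent's report against reports from other agents rather than against a directly observable outcome.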
In many software systems, it is advantageous to decompose the processing into relatively independent software agents. In some cases, the decomposition is imposed because different users participate in a decision process; in others, it is required to balance the computational load across several processors. At the LIA, research into software agents focuses above all on the decomposition aspect: how can a problem-solving process be decomposed so that the agents become as independent as possible?
- Reliable Quality of Service Monitoring Based on Client Feedback
- CASCOM (Context-aware Business Application Service Co-ordination in Mobile Computing Environments)
- Knowledge Web
- Data, Information, and Process Integration with Semantic Web Services (DIP)
- AgentCities @ LIA (our contribution to the AgentCities project)
- CCL - Choice Constraint Language
- JATlite-ACL - agent development package
- MACH - MArket meCHanism for selling on-demand IP bandwidth
- IMMuNe - Resource Allocation in Telecom Networks
- ICCS - Information, communication and collaboration environment for the Swiss AEC industry.
- Resource allocation in communication networks
Industrial use of natural language processing techniques is subject to specific constraints (real-time operation, short development cycles, little locally available linguistic expertise, etc.) that are not particularly compatible with the methods usually applied in classical approaches to computational linguistics. However, recent advances in the field of corpus-based linguistics open up a whole set of new possibilities. In particular, research in Natural Language Processing at the LIA focuses on text mining (knowledge extraction from textual data), automatic production of syntactic tools, and evaluation of NLP tools.
Our text-mining methods are based on techniques developed for information retrieval using a distributional semantics approach. In such methods, semantic proximities are derived from co-occurrence frequency matrices computed on large textual corpora. Different similarity measures are used to characterize the proximity between queries and documents, which are represented in a unified way as projections into a high-dimensional vector space of pertinent terms.
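The vector-space idea can be sketched in a few lines: documents and queries are mapped to term-count vectors and ranked by cosine similarity. The two-document corpus below is a toy stand-in, not the actual LIA setup, and raw counts replace the weighted projections used in practice.

```python
# Rank documents against a query by cosine similarity in a term space.

import math
from collections import Counter

docs = {
    "d1": "constraint satisfaction and distributed optimization",
    "d2": "speech recognition and part of speech tagging",
}

def vectorize(text):
    """Map a text to a sparse term-count vector."""
    return Counter(text.split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u)    # Counter returns 0 for absent terms
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

query = vectorize("distributed constraint optimization")
ranked = sorted(docs, key=lambda d: cosine(query, vectorize(docs[d])),
                reverse=True)
print(ranked)  # ['d1', 'd2']: d1 shares the most terms with the query
```

Real systems replace raw counts with co-occurrence statistics and weighting schemes, but the geometry of the ranking step is the same.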
Methods for the automatic production of syntactic tools aim to implement probabilistic techniques and models operating on textual corpora (raw or annotated texts) in order to adapt various generic algorithms to specific applications: part-of-speech tagging, speech recognition, information retrieval, etc.
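As an illustration of corpus-driven tagging, here is a sketch of a bigram hidden Markov model decoded with the Viterbi algorithm. In practice the transition and emission probabilities would be estimated from an annotated corpus; the tiny probability tables below are invented for the example.

```python
# Bigram HMM part-of-speech tagging with Viterbi decoding.

tags = ["DET", "NOUN", "VERB"]
start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans = {"DET": {"NOUN": 0.9, "VERB": 0.05, "DET": 0.05},
         "NOUN": {"VERB": 0.6, "NOUN": 0.3, "DET": 0.1},
         "VERB": {"DET": 0.5, "NOUN": 0.4, "VERB": 0.1}}
emit = {"DET": {"the": 0.9},
        "NOUN": {"dog": 0.5, "barks": 0.1},
        "VERB": {"barks": 0.6, "dog": 0.05}}

def viterbi(words):
    """Return the most probable tag sequence for `words`; unseen words
    get a small floor probability instead of zero."""
    best = {t: (start[t] * emit[t].get(words[0], 1e-6), [t]) for t in tags}
    for w in words[1:]:
        best = {t: max(((p * trans[s][t] * emit[t].get(w, 1e-6), path + [t])
                        for s, (p, path) in best.items()),
                       key=lambda x: x[0])
                for t in tags}
    return max(best.values(), key=lambda x: x[0])[1]

print(viterbi(["the", "dog", "barks"]))  # ['DET', 'NOUN', 'VERB']
```

Estimating the tables from a domain-specific corpus is exactly the adaptation step mentioned above: the same generic decoder serves tagging, speech recognition lattices, and similar sequence problems.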
- Automatic Structuring of Textual Data: Applications to Text-Mining
- INSPECT - Integration of acoustic and advanced linguistic models into speech understanding systems
- STING: Evaluation of scientific & technological innovation and progress in Europe through patents
- NLP-FPGA - Hardware NLP coprocessor
- INFOVOX - Interactive Voice Servers for Advanced Computer Telephony Applications
- EXTRACT: Automated Information Extraction from Classified Newspaper Advertisements
- GRACE - part-of-speech tagging evaluation
- ELSE - Evaluation of Language and Speech engineering
- Data-oriented probabilistic syntactic analysis
- Distributional Semantics: application to retrieval from large textual bases
- ISIS - Design of Advanced Vocal Information Servers
- Industrial tools for Natural Language Processing
Humans do not solve problems by constructing elaborate theories, but by adapting solutions that worked earlier on similar problems. Case-based reasoning supports a similar function in computer programs. It is especially useful when there is no analytical model of the problem domain. At the LIA, we have applied the case-based reasoning approach to design and prediction problems.
One important form of intelligent behavior is problem-solving: finding a solution that satisfies certain criteria. It requires abductive reasoning, whereas computer programs are only capable of deductive reasoning. In classical programs, as well as in expert systems, abductive inference must therefore be simulated by deductive procedures or rules. This makes them difficult to construct and maintain.
In model-based reasoning, abduction is performed explicitly on a model of the domain in which problem-solving takes place. Abduction is often feasible only when the model allows a finite number of states. Qualitative reasoning is concerned with modeling and inference techniques in which continuous phenomena are discretized into a finite number of qualitative categories. It is often a prerequisite for problem solving, especially in physical domains.
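The discretization idea can be sketched as follows: a continuous quantity is mapped to a finite set of qualitative states, over which a finite rule base can then reason. The thresholds, labels and diagnostic rules below are invented for the illustration (loosely inspired by the HVAC diagnosis application listed below), not taken from any LIA model.

```python
# Discretize a continuous temperature into qualitative states and apply
# a trivial finite rule base for diagnosis.

def qualitative(temp_celsius):
    """Map a continuous temperature onto qualitative categories."""
    if temp_celsius < 18:
        return "cold"
    if temp_celsius <= 24:
        return "comfortable"
    return "hot"

def diagnose(state, heater_on):
    """A finite rule base over the qualitative states."""
    if state == "cold" and heater_on:
        return "heater possibly faulty"
    if state == "hot" and heater_on:
        return "thermostat possibly faulty"
    return "no fault indicated"

print(diagnose(qualitative(15.2), heater_on=True))  # heater possibly faulty
print(diagnose(qualitative(26.0), heater_on=True))  # thermostat possibly faulty
```

Because the state space is finite, the abductive step (finding which fault hypothesis explains the observation) reduces to a finite search, which is exactly what makes qualitative abstraction a prerequisite for problem solving in physical domains.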
- Qualitative spatial reasoning based on algebraic topology
- Diagnosing HVAC systems
- Nestec - Qualitative Models as Indices for Memory-based Prediction
- Model-based control
- Qualitative Modelling and Simulation of Spatially Distributed Parameter Systems
In addition, the LIA is involved in the European projects SpaceNet and MONET (QRnet).