Currently, the opacity of learning processes such as deep neural networks makes their results difficult for humans to interpret. The search for explainable artificial intelligence has therefore become a key objective.
We argue that explainability can be achieved most effectively through the use of appropriate data representations, such as heterogeneous multi-attribute representations, which preserve the original nature of the results.
In this context, this project focuses on two fundamental aspects. First, we undertake theoretical research to define, analyse and construct fusion functions that select representative data in problems involving real-valued and heterogeneous multi-attribute data. Second, we seek to apply this research to general machine learning and deep learning problems.
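To make the idea of a fusion function that selects representative data more concrete, the following is a minimal illustrative sketch, not the project's actual method: it picks the record that minimizes the total dissimilarity to all others (a medoid), using a hypothetical Gower-style distance that mixes a normalized numeric attribute with a categorical one.

```python
import math

def fuse_representative(items, distance):
    """Selection-style fusion: return the input item that minimizes the
    total distance to all other items (a medoid). Illustrative sketch."""
    best, best_cost = None, math.inf
    for x in items:
        cost = sum(distance(x, y) for y in items)
        if cost < best_cost:
            best, best_cost = x, cost
    return best

def mixed_distance(a, b):
    """Hypothetical dissimilarity for heterogeneous records of the form
    (numeric age, categorical colour): a normalized numeric gap plus a
    0/1 categorical mismatch, in the spirit of Gower's coefficient."""
    return abs(a[0] - b[0]) / 100 + (0 if a[1] == b[1] else 1)

data = [(25, "red"), (30, "red"), (70, "blue")]
print(fuse_representative(data, mixed_distance))  # → (30, 'red')
```

Because the fused value is always one of the original records, this kind of selection-based fusion keeps the output in the same heterogeneous representation as the input, which is the property the project exploits for explainability.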