RESEARCH LINES

In our research team, curiosity never runs out. Each of us brings a unique perspective, a special skill, and passion to this team. We complement one another and constantly challenge each other to reach new heights. No matter how much we have achieved so far, we know there is always more to discover, explore, and create. AI is a field in constant evolution, and we are here to lead the way.

RL1 / Deep learning for vision and language

New theories and methods to keep unlocking the potential of Deep Learning for building advanced cognitive systems, with a focus on vision and language.

Principal Investigators: Domingo Mery – Felipe Bravo-Márquez

RL2 / Neuro-symbolic AI

Integration of logical-probabilistic AI and deep-learning-based AI, with each side mutually invoking the other's solutions, and injecting and exploiting semantics in deep learning.

Principal Investigators: Pablo Barceló – Cristóbal Rojas

RL3 / Brain-inspired AI

Bringing together scientists from neuroscience, cognitive psychology, and AI to exploit knowledge of the anatomical and cognitive operation of biological brains to inform AI researchers.

Principal Investigators: Marcela Peña – Pedro Maldonado

RL4 / Physics-based machine learning

Bringing together mathematicians, physicists, and AI scientists to exploit insights from the physical sciences in order to develop machine learning models based on causal relationships.

Principal Investigators: Paula Aguirre – Carlos Sing-Long

RL5 / Human-centered AI

New technologies for the fair, safe, and transparent use of AI in society, as well as methodologies to assess its societal impact. Promoting new tools for interpretable and explainable AI.

Principal Investigators: Marcelo Mendoza – Claudia López

Principal Investigators

Cenia Researchers

The goal of this group of permanent researchers is to produce frontier AI research through internal collaborations with the other research lines, as well as national and international ones. We bring together experts in vision, natural language processing, multimodal integration, adaptive learning, graph networks, and neuroscience. They also work with the technology transfer team to develop AI applications in smart cities, sustainability, satellite-data analysis for agriculture, fire detection and prevention, and health.

Cenia Scientific Committee

Associate Researchers

Young Researchers

Postdoctoral Researchers

Associate Collaborators

International Collaborators

Publisher: IEEE Access

ABSTRACT

Continuous learning occurs naturally in human beings. However, Deep Learning methods suffer from a problem known as Catastrophic Forgetting (CF) that consists of a model drastically decreasing its performance on previously learned tasks when it is sequentially trained on new tasks. This situation, known as task interference, occurs when a network modifies relevant weight values as it learns a new task. In this work, we propose two main strategies to face the problem of task interference in convolutional neural networks. First, we use a sparse coding technique to adaptively allocate model capacity to different tasks, avoiding interference between them. Specifically, we use a strategy based on group sparse regularization to specialize groups of parameters to learn each task. Afterward, by adding binary masks, we can freeze these groups of parameters, using the rest of the network to learn new tasks. Second, we use a meta-learning technique to foster knowledge transfer among tasks, encouraging weight reusability instead of overwriting. Specifically, we use an optimization strategy based on episodic training to foster learning weights that are expected to be useful to solve future tasks. Together, these two strategies help us to avoid interference by preserving compatibility with previous and future weight values. Using this approach, we achieve state-of-the-art results on popular benchmarks used to test techniques to avoid CF. In particular, we conduct an ablation study to identify the contribution of each component of the proposed method, demonstrating its ability to avoid retroactive interference with previous tasks and to promote knowledge transfer to future tasks.
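
As a rough illustration of the two strategies described above, the sketch below (PyTorch-style Python; all names are hypothetical and not taken from the paper) shows a group-lasso penalty that drives whole convolutional filters to specialize or switch off, and a gradient mask that freezes the filter groups claimed by earlier tasks:

```python
import torch
import torch.nn as nn

def group_sparsity_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Group-lasso term: one L2 norm per output filter, summed, so whole
    filters (groups of parameters) are driven to zero or kept together."""
    w = conv.weight                        # shape (out_ch, in_ch, kH, kW)
    return w.flatten(1).norm(dim=1).sum()  # per-filter norm, then sum

def freeze_used_filters(conv: nn.Conv2d, used: torch.Tensor) -> None:
    """Zero the gradients of filters claimed by earlier tasks (used[i] == 1),
    emulating the binary-mask freezing described in the abstract."""
    keep = (1.0 - used.float()).view(-1, 1, 1, 1)
    conv.weight.register_hook(lambda grad: grad * keep)
```

After a task is trained, filters whose group norm survives the regularization would be marked as used, leaving the remaining capacity free for future tasks.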


Publisher: ACM Computing Surveys

ABSTRACT

Every year, physicians face an increasing demand for image-based diagnosis from patients, a problem that can be addressed with recent artificial intelligence methods. In this context, we survey works in the area of automatic report generation from medical images, with emphasis on methods using deep neural networks, with respect to: (1) Datasets, (2) Architecture Design, (3) Explainability, and (4) Evaluation Metrics. Our survey identifies interesting developments, but also remaining challenges. Among them, the current evaluation of generated reports is especially weak, since it mostly relies on traditional Natural Language Processing (NLP) metrics, which do not accurately capture medical correctness.


Publisher: CLEF2021 Working Notes, CEUR Workshop Proceedings

ABSTRACT

This article describes the participation and results of the PUC Chile team in the Tuberculosis task of the ImageCLEFmedical 2021 challenge. We were ranked 7th based on the kappa metric and 4th in terms of accuracy. We describe the three approaches we tried to address the task. Our best approach used 2D images visually encoded with a DenseNet neural network, whose representations were concatenated and finally passed to a softmax layer to output the classification. We describe this and the other two approaches in detail, and we conclude by discussing some ideas for future work.
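
A minimal sketch of what such a pipeline could look like, assuming torchvision's DenseNet-121 as the visual encoder and invented view and class counts (the actual competition model may differ):

```python
import torch
import torch.nn as nn
from torchvision import models

class TBClassifier(nn.Module):
    """Encode several 2D views with a shared DenseNet backbone, concatenate
    the per-view features, and classify with a softmax layer."""
    def __init__(self, n_views: int = 3, n_classes: int = 2):
        super().__init__()
        backbone = models.densenet121(weights=None)
        backbone.classifier = nn.Identity()          # expose 1024-d features
        self.encoder = backbone
        self.head = nn.Linear(1024 * n_views, n_classes)

    def forward(self, views):                        # list of (B, 3, H, W) tensors
        feats = [self.encoder(v) for v in views]     # one 1024-d vector per view
        return torch.softmax(self.head(torch.cat(feats, dim=1)), dim=1)
```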


Publisher: Logical Methods in Computer Science

ABSTRACT

We investigate the application of the Shapley value to quantifying the contribution of a tuple to a query answer. The Shapley value is a widely known numerical measure in cooperative game theory and in many applications of game theory for assessing the contribution of a player to a coalition game. It was established as early as the 1950s and is theoretically justified as the unique wealth-distribution measure that satisfies certain natural axioms. While this value has been investigated in several areas, it has received little attention in data management. We study this measure in the context of conjunctive and aggregate queries by defining corresponding coalition games. We provide algorithmic and complexity-theoretic results on the computation of Shapley-based contributions to query answers, and for the hard cases we present approximation algorithms.
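
For reference, the Shapley value the abstract builds on has the standard closed form below, where N is the set of players (here, database tuples), p is the tuple being scored, and v(S) is the value the coalition game assigns to the sub-instance S (e.g., the query answer over S):

```latex
\phi(p) \;=\; \sum_{S \subseteq N \setminus \{p\}}
\frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\,
\Bigl( v\bigl(S \cup \{p\}\bigr) - v(S) \Bigr)
```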


Publisher: arXiv

ABSTRACT

Word embeddings are vital descriptors of words in unigram representations of documents for many tasks in natural language processing and information retrieval. The representation of queries has been one of the most critical challenges in this area because it consists of a few terms and has little descriptive capacity. Strategies such as average word embeddings can enrich the queries’ descriptive capacity since they favor the identification of related terms from the continuous vector representations that characterize these approaches. We propose a data-driven strategy to combine word embeddings. We use IDF combinations of embeddings to represent queries, showing that these representations outperform the average word embeddings recently proposed in the literature. Experimental results on benchmark data show that our proposal performs well, suggesting that data-driven combinations of word embeddings are a promising line of research in ad-hoc information retrieval.
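
A minimal sketch of the IDF-weighted combination idea (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def idf_query_embedding(query_terms, embeddings, idf):
    """IDF-weighted average of word embeddings: each query term's vector is
    scaled by its IDF, so rare, discriminative terms dominate the query
    representation instead of a plain average."""
    vecs, weights = [], []
    for t in query_terms:
        if t in embeddings and t in idf:
            vecs.append(embeddings[t] * idf[t])
            weights.append(idf[t])
    if not vecs:
        return None                     # no known terms in the query
    return np.sum(vecs, axis=0) / np.sum(weights)
```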


Publisher: Elsevier, Expert Systems with Applications

ABSTRACT

We present a study of an artificial neural architecture that predicts human ocular scanpaths during free viewing of different image types. The analysis compares different metrics that encompass scanpath patterns and aim to measure spatial and temporal errors, such as MSE, ScanMatch, cross-correlogram peaks, and MultiMatch. Our methodology begins by choosing one architecture and training different parametric models per subject and image type, which allows the models to be adjusted to each person and a given set of images. We find that there is a clear difference in prediction when people free-view images with high visual content (high-frequency content) and low visual content (no-frequency content). The input features selected for predicting the scanpath are saliency maps calculated from foveated images, together with the past of the subject's ocular scanpath, modeled by our architecture called FovSOS-FSD (Foveated Saliency and Ocular Scanpath with Feature Selection and Direct Prediction).

The results of this study could be used to improve the design of gaze-controlled interfaces and virtual reality, as well as to better understand how humans visually explore their surroundings, and they pave the way for future research.
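
Of the metrics listed above, MSE is the simplest to reproduce. A minimal sketch, assuming equal-length, temporally aligned scanpaths given as (x, y) fixation sequences:

```python
import numpy as np

def scanpath_mse(pred, true):
    """Mean squared Euclidean distance between two equal-length scanpaths,
    each an (N, 2) array of (x, y) fixation coordinates."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    return float(np.mean(np.sum((pred - true) ** 2, axis=1)))
```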


Publisher: Advances in Neural Information Processing Systems

ABSTRACT

When learning tasks over time, artificial neural networks suffer from a problem known as Catastrophic Forgetting (CF). This happens when the weights of a network are overwritten during the training of a new task, causing forgetting of old information. To address this issue, we propose MetA Reusable Knowledge, or MARK, a new method that fosters weight reusability instead of overwriting when learning a new task. Specifically, MARK keeps a set of shared weights among tasks. We envision these shared weights as a common Knowledge Base (KB) that is not only used to learn new tasks, but is also enriched with new knowledge as the model learns new tasks. The key components behind MARK are two-fold. On the one hand, a meta-learning approach provides the key mechanism to incrementally enrich the KB with new knowledge and to foster weight reusability among tasks. On the other hand, a set of trainable masks provides the key mechanism to selectively choose from the KB the weights relevant to solving each task. By using MARK, we achieve state-of-the-art results on several popular benchmarks, surpassing the best-performing methods in terms of average accuracy by over 10% on the 20-Split-MiniImageNet dataset, while achieving almost zero forgetting using 55% of the number of parameters. Furthermore, an ablation study provides evidence that MARK is indeed learning reusable knowledge that is selectively used by each task.
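
A toy sketch of the two key components, a shared knowledge base and per-task trainable masks (layer shapes and names are invented for illustration; the paper's KB is richer than a single linear layer):

```python
import torch
import torch.nn as nn

class MaskedSharedLayer(nn.Module):
    """A shared weight matrix (the knowledge base, KB) modulated by a
    per-task trainable mask that selects which KB units each task uses."""
    def __init__(self, dim: int, n_tasks: int):
        super().__init__()
        self.kb = nn.Linear(dim, dim)                    # weights shared across tasks
        self.mask_logits = nn.Parameter(torch.zeros(n_tasks, dim))

    def forward(self, x, task_id: int):
        mask = torch.sigmoid(self.mask_logits[task_id])  # soft per-task mask in (0, 1)
        return torch.relu(self.kb(x)) * mask             # keep only the selected units
```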

Publisher:

ABSTRACT

Embeddings are core components of modern model-based Collaborative Filtering (CF) methods, such as Matrix Factorization (MF) and its Deep Learning variations. In essence, embeddings are mappings of the original sparse representation of categorical features (e.g., users and items) to dense low-dimensional representations. A well-known limitation of such methods is that the learned embeddings are opaque and hard to explain to the users. On the other hand, a key feature of simpler KNN-based CF models (a.k.a. user/item-based CF) is that they naturally yield similarity-based explanations, i.e., similar users/items as evidence to support model recommendations. Unlike related works that try to attribute explicit meaning (via metadata) to the learned embeddings, in this paper, we propose to equip the learned embeddings of MF with meaningful similarity-based explanations. First, we show that the learned user/item …
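
One way to read the similarity-based explanation idea is sketched below (hypothetical code, not the paper's method): a recommendation from an MF model is justified by the nearest neighbours of the target user in the learned embedding space, mirroring the evidence a user-based KNN recommender would give.

```python
import numpy as np

def similar_user_explanation(user_id, user_emb, k=5):
    """Return the k users most similar (by cosine) to `user_id` in the
    learned MF embedding space, to serve as explanation evidence."""
    u = user_emb[user_id]
    norms = np.linalg.norm(user_emb, axis=1) * np.linalg.norm(u) + 1e-12
    sims = user_emb @ u / norms
    sims[user_id] = -np.inf           # exclude the user themself
    return np.argsort(-sims)[:k]      # indices of the nearest neighbours
```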


ABSTRACT

In X-ray testing, the aim is to inspect those inner parts of an object that cannot be detected by the naked eye. Typical applications are the detection of targets such as blow holes in casting inspection, cracks in welding inspection, and prohibited objects in baggage inspection. A straightforward solution today is the use of object detection methods based on deep learning models. Nevertheless, this strategy is not effective when the number of available X-ray images for training is low. Unfortunately, the databases in X-ray testing are rather limited. To overcome this problem, we propose a strategy for deep learning training that is performed with a low number of target-free X-ray images with the superimposition of many simulated targets. The simulation is based on the Beer–Lambert law, which allows different layers to be superimposed. Using this method, it is very simple to generate training data. The proposed method was used to train known object detection models (e.g., YOLO, RetinaNet, EfficientDet, and SSD) in casting inspection, welding inspection, and baggage inspection. The learned models were tested on real X-ray images. In our experiments, we show that the proposed solution is simple (the training can be implemented with a few lines of code using open source libraries), effective (average precision was 0.91, 0.60, and 0.88 for casting, welding, and baggage inspection, respectively), and fast (training was done in a couple of hours, and testing can be performed in 11 ms per image). We believe that this strategy makes a contribution to the implementation of practical solutions to the problem of target detection in X-ray testing.
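
The Beer–Lambert law says that an absorbing layer with attenuation coefficient mu and thickness d scales the transmitted intensity by exp(-mu * d), so superimposing a simulated target reduces to a pixel-wise multiplication. A minimal sketch (parameter names are illustrative):

```python
import numpy as np

def superimpose_target(xray, mu, thickness):
    """Superimpose a simulated target onto a target-free X-ray image.

    xray      : (H, W) transmitted-intensity image with values in [0, 1]
    mu        : linear attenuation coefficient of the target material (1/mm)
    thickness : (H, W) thickness map of the target in mm (0 outside it)
    """
    # Beer-Lambert: each added absorbing layer multiplies the transmission.
    return xray * np.exp(-mu * thickness)
```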

Publisher: Computers in Biology and Medicine

ABSTRACT

Recent advances in medical imaging have confirmed the presence of altered hemodynamics in bicuspid aortic valve (BAV) patients. Therefore, there is a need for new hemodynamic biomarkers to refine disease monitoring and improve patient risk stratification. This research aims to analyze and extract multiple correlation patterns of hemodynamic parameters from 4D Flow MRI data and to find which parameters allow an accurate classification between healthy volunteers (HV) and BAV patients with dilated and non-dilated ascending aorta using machine learning. Sixteen hemodynamic parameters were calculated in the ascending aorta (AAo) and aortic arch (AArch) at peak systole from 4D Flow MRI. We used sequential forward selection (SFS) and principal component analysis (PCA) as feature selection algorithms. Then, eleven machine-learning classifiers were implemented to separate HV and BAV patients (non- and dilated ascending aorta). Multiple correlation patterns among hemodynamic parameters were extracted using hierarchical clustering. Linear discriminant analysis and random forest are the best-performing classifiers; using five hemodynamic parameters selected with SFS (velocity angle, forward velocity, vorticity, and backward velocity in the AAo; and helicity density in the AArch), they reach 96.31 ± 1.76% and 96.00 ± 0.83% accuracy, respectively. Hierarchical clustering revealed three groups of correlated features. According to this analysis, the features selected by SFS perform better than those selected by PCA because the five selected parameters were distributed across the 3 different clusters. Based on the proposed method, we conclude that the feature selection method found five potential hemodynamic biomarkers related to this disease.
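
A minimal sketch of the SFS-plus-classifier setup using scikit-learn (fold counts and estimator settings are assumptions, not taken from the paper):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline

def build_sfs_lda():
    """Forward SFS keeps the 5 most discriminative of the 16 hemodynamic
    parameters, then an LDA classifier separates HV from BAV subjects."""
    lda = LinearDiscriminantAnalysis()
    sfs = SequentialFeatureSelector(lda, n_features_to_select=5,
                                    direction="forward", cv=5)
    return make_pipeline(sfs, LinearDiscriminantAnalysis())

# Usage: model = build_sfs_lda(); model.fit(X, y); model.score(X_val, y_val)
# where X is an (n_subjects, 16) matrix of parameters and y labels HV vs. BAV.
```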