RESEARCH LINES

In our research team, curiosity never runs out. Each of us brings a unique perspective, a special skill, and passion to this team. We complement and constantly challenge each other to reach new heights. No matter how much we have achieved so far, we know there is always more to discover, explore, and create. AI is a constantly evolving field, and we are here to lead the way.

RL1 / Deep Learning for Vision and Language


New theories and methods to continue unlocking the potential of Deep Learning to create advanced cognitive systems, with a focus on vision and language.

Principal Investigators: Felipe Bravo-Márquez – Iván Sipirán

RL2 / Neuro-Symbolic AI

Integration of logic-probabilistic AI and deep-learning-based AI, mutually invoking the solutions of each side, and injecting and exploiting semantics in deep learning.

Principal Investigators: Pablo Barceló – Cristóbal Rojas

RL3 / Brain-Inspired AI


Bringing together scientists from neuroscience, cognitive psychology, and AI to exploit insights into the anatomical and cognitive operations of biological brains, illuminating the work of AI researchers.

Principal Investigators: Marcela Peña – Pedro Maldonado

RL4 / Physics-Based Machine Learning

Bringing together mathematicians, physicists, and AI scientists to exploit insights from the physical sciences and develop machine learning models based on causal relationships.

Principal Investigators: Paula Aguirre – Carlos Sing Long

RL5 / Human-Centered AI


New technologies for a fair, safe, and transparent use of AI in society, as well as methodologies to assess its societal impact. Promoting new tools for interpretable and explainable AI.

Principal Investigators: Marcelo Mendoza – Claudia López

Principal Investigators

Cenia Researchers

The goal of this group of permanent researchers is to produce frontier AI research through collaborations within the center, with the other research lines, and with national and international partners. We bring together experts in vision, natural language processing, multimodal integration, adaptive learning, graph networks, and neuroscience. They also work with the technology transfer team to develop AI applications in smart cities, sustainability, analytics, agriculture with satellite data, fire detection and prevention, and health.

Cenia Scientific Committee

Associate Researchers

Young Researchers

Postdoctoral Researchers

Associate Collaborators

International Collaborators

Publisher: IEEE Access

ABSTRACT

Continuous learning occurs naturally in human beings. However, Deep Learning methods suffer from a problem known as Catastrophic Forgetting (CF) that consists of a model drastically decreasing its performance on previously learned tasks when it is sequentially trained on new tasks. This situation, known as task interference, occurs when a network modifies relevant weight values as it learns a new task. In this work, we propose two main strategies to face the problem of task interference in convolutional neural networks. First, we use a sparse coding technique to adaptively allocate model capacity to different tasks avoiding interference between them. Specifically, we use a strategy based on group sparse regularization to specialize groups of parameters to learn each task. Afterward, by adding binary masks, we can freeze these groups of parameters, using the rest of the network to learn new tasks. Second, we use a meta learning technique to foster knowledge transfer among tasks, encouraging weight reusability instead of overwriting. Specifically, we use an optimization strategy based on episodic training to foster learning weights that are expected to be useful to solve future tasks. Together, these two strategies help us to avoid interference by preserving compatibility with previous and future weight values. Using this approach, we achieve state-of-the-art results on popular benchmarks used to test techniques to avoid CF. In particular, we conduct an ablation study to identify the contribution of each component of the proposed method, demonstrating its ability to avoid retroactive interference with previous tasks and to promote knowledge transfer to future tasks.
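The group-sparsity idea described above — specializing groups of parameters per task, then freezing them with binary masks — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation; the array shapes, threshold, and toy layer are assumptions:

```python
import numpy as np

def group_sparse_penalty(weights, group_axis=0):
    """L2,1 penalty: sum of L2 norms over parameter groups.

    Driving entire group norms to zero frees those filters for
    later tasks, while surviving groups specialize on the current task.
    """
    # Flatten every group (e.g. one conv filter) into a vector.
    groups = weights.reshape(weights.shape[group_axis], -1)
    return np.linalg.norm(groups, axis=1).sum()

def freeze_mask(weights, threshold=1e-3, group_axis=0):
    """Binary mask marking groups whose norm exceeds a threshold.

    Masked (frozen) groups are preserved for old tasks; the rest of
    the network remains free to learn new tasks.
    """
    groups = weights.reshape(weights.shape[group_axis], -1)
    return (np.linalg.norm(groups, axis=1) > threshold).astype(np.float32)

# Toy conv layer: 4 filters, two driven near zero by the penalty.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
w[1] *= 1e-5
w[3] *= 1e-5
mask = freeze_mask(w)
print(group_sparse_penalty(w))  # total L2,1 penalty (positive scalar)
print(mask)                     # filters 0 and 2 kept, 1 and 3 free
```

In training, the penalty would be added to the task loss; after convergence the mask is fixed and gradients for masked groups are zeroed on subsequent tasks.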


Publisher: ACM Computing Surveys

ABSTRACT

Every year, physicians face an increasing demand for image-based diagnoses from patients, a problem that can be addressed with recent artificial intelligence methods. In this context, we survey work in the area of automatic report generation from medical images, with emphasis on methods using deep neural networks, with respect to: (1) Datasets, (2) Architecture Design, (3) Explainability, and (4) Evaluation Metrics. Our survey identifies interesting developments, but also remaining challenges. Among them, the current evaluation of generated reports is especially weak, since it mostly relies on traditional Natural Language Processing (NLP) metrics, which do not accurately capture medical correctness.


Publisher: CLEF2021 Working Notes, CEUR Workshop Proceedings

ABSTRACT

This article describes the participation and results of the PUC Chile team in the Tuberculosis task of the ImageCLEFmedical 2021 challenge. We were ranked 7th based on the kappa metric and 4th in terms of accuracy. We describe three approaches we tried to address the task. Our best approach used 2D images visually encoded with a DenseNet neural network, whose representations were concatenated to finally output the classification with a softmax layer. We describe this and the other two approaches in detail, and we conclude by discussing ideas for future work.


Publisher: Logical Methods in Computer Science

ABSTRACT

We investigate the application of the Shapley value to quantifying the contribution of a tuple to a query answer. The Shapley value is a widely known numerical measure in cooperative game theory and in many applications of game theory for assessing the contribution of a player to a coalition game. It was established in the 1950s and is theoretically justified as the unique wealth-distribution measure that satisfies certain natural axioms. While this value has been investigated in several areas, it has received little attention in data management. We study this measure in the context of conjunctive and aggregate queries by defining corresponding coalition games. We provide algorithmic and complexity-theoretic results on the computation of Shapley-based contributions to query answers; for the hard cases, we present approximation algorithms.
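To make the coalition-game framing concrete, here is a brute-force Shapley computation on a toy database game. The tuple names, query, and value function are illustrative assumptions; exact enumeration is exponential, which is why the paper studies complexity and approximation:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (exponential; illustration only)."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy game: the Boolean query is satisfied iff the chosen set of
# tuples contains both t1 and t2 (a join witness); t3 is irrelevant.
def query_answered(tuples):
    return 1.0 if {"t1", "t2"} <= tuples else 0.0

print(shapley_values(["t1", "t2", "t3"], query_answered))
# t1 and t2 split the credit equally; t3 gets 0.
```

The axioms mentioned in the abstract (symmetry, null player, efficiency) are visible here: the two symmetric join tuples get equal credit, the irrelevant tuple gets none, and the values sum to the query's worth.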


Publisher: arXiv

ABSTRACT

Word embeddings are vital descriptors of words in unigram representations of documents for many tasks in natural language processing and information retrieval. The representation of queries has been one of the most critical challenges in this area because it consists of a few terms and has little descriptive capacity. Strategies such as average word embeddings can enrich the queries’ descriptive capacity since they favor the identification of related terms from the continuous vector representations that characterize these approaches. We propose a data-driven strategy to combine word embeddings. We use Idf combinations of embeddings to represent queries, showing that these representations outperform the average word embeddings recently proposed in the literature. Experimental results on benchmark data show that our proposal performs well, suggesting that data-driven combinations of word embeddings are a promising line of research in ad-hoc information retrieval.


Publisher: Elsevier, Expert Systems with Applications

ABSTRACT

We present a study of an artificial neural architecture that predicts human ocular scanpaths during free viewing of different image types. The analysis compares metrics that capture scanpath patterns and aim to measure spatial and temporal errors, such as MSE, ScanMatch, cross-correlogram peaks, and MultiMatch. Our methodology begins by choosing one architecture and training different parametric models per subject and image type, which allows the models to be adjusted to each person and a given set of images. We find a clear difference in prediction between free viewing of images with high visual content (high-frequency content) and low visual content (no-frequency content). The input features selected for predicting the scanpath are saliency maps computed from foveated images, together with the past of each subject's ocular scanpath, modeled by our architecture, FovSOS-FSD (Foveated Saliency and Ocular Scanpath with Feature Selection and Direct Prediction).

The results of this study could be used to improve the design of gaze-controlled interfaces and virtual reality, to better understand how humans visually explore their surroundings, and to pave the way for future research.


Publisher: Journal of Research in Science Teaching

ABSTRACT

Artificial intelligence (AI) technologies generate increasingly sophisticated non-human cognition; however, foundational learning theories only contemplate human cognition, and current research conceptualizes AI as a pedagogical tool. We argue that the incipient abilities of AI for mutual engagement with people could allow AI to participate as a legitimate member in social constructivist learning environments, and we suggest some potential structures and activities to explore AI's capabilities for full participation. “Participation is an active process, but I will reserve the term for actors who are members of social communities. For instance, I will not say that a computer ‘participates’ in a community of practice…” (Wenger, 1998, p. 56). Twenty-five years ago, Etienne Wenger published his influential book Communities of practice: Learning, meaning, and identity (Wenger, 1998), in which he specifically discounted computers as potential members of a community of practice (CoP). Recently, however, the abilities of computational systems like generative artificial intelligence (AI) oblige us to reconsider the roles non-human cognition could play in communities of practice centered on learning. The editorial article “Artificial Intelligence and the Journal of Research in Science Teaching” (Sadler et al., 2024) describes the potential for AI technology to transform science education, but notes that “the science education research community is not as far along as it needs to be in terms of understanding, theorizing, and studying the intersections of AI and science education” (p. 742). In response, this commentary presents our theorization and conceptualization of AI in science education. We apply the lens of social constructivism (Wenger, 1998) to theorize about this question, and we argue that the nature of generative AI allows it to transcend an instrumental role and achieve full participation in a CoP.
We are convinced that socio-constructivist theory in general, and CoP specifically, can provide conceptual tools and theoretical underpinnings to guide the use of AI in education. In this commentary, we synthesize ideas from current literature to construct a theoretical framework and offer suggestions for the transformative use of generative AI.

Publisher: Journal of Engineering Education

ABSTRACT

Background

We examine the efficacy of an online collaborative problem-solving (CPS) teaching approach on academic performance and students' connections with their peers among first-year engineering calculus students at a Latin American university. Our research uses communities of practice (CoP) to emphasize the social nature of learning and the importance of participation and interaction within a community.

Methods

The work applies a quasi-experimental design and social network analysis (SNA). A total of 202 engineering students were instructed using CPS methodology (experimental group), while 380 students received traditional online teaching methods (control group) during one semester in the first calculus class for engineers.

Results

Results show no significant difference in the grades obtained between the experimental and control groups. However, students exposed to CPS showed a statistically significantly higher passing rate, as well as larger and stronger academic and social connections. Additionally, SNA results suggest that CPS facilitated stronger peer connections and promoted a more equitable distribution of participation among students, particularly women, compared to students taught under traditional online teaching methods.

Conclusions

The study underscores the importance of fostering collaborative learning environments and highlights CPS as a strategy to enhance student performance and network formation. Findings suggest that CPS can improve academic outcomes and promote more equitable learning practices, potentially reducing dropout rates among women engineering students. These findings contribute to the ongoing efforts to address systematic biases and enhance learning experiences in engineering education.

Automatic Short Answer Grading (ASAG) refers to the automated scoring of open-ended textual responses to specific questions, both in natural language form. In this paper, we propose a method to tackle this task in a setting where annotated data is unavailable. Crucially, our method is competitive with the state-of-the-art while being lighter and interpretable. We crafted a unique dataset containing a highly diverse set of questions and a small number of answers to these questions, making it more challenging than previous tasks. Our method uses weak labels generated by other methods proven effective for this task, which are then used to train a white-box (linear) regression based on a few interpretable features. The latter are extracted expert features and learned representations that are interpretable per se and aligned with manual labeling. We show the potential of our method by evaluating it on a small annotated portion of the dataset, demonstrating that its performance compares with that of strong baselines and state-of-the-art methods, including an LLM that, in contrast to our method, comes with a high computational cost and an opaque reasoning process. We further validate our model on a public Automatic Essay Scoring dataset in English, obtaining competitive results compared to other unsupervised baselines and outperforming the LLM. To gain further insight into our method, we conducted an interpretability analysis revealing sparse weights in our linear regression model, as well as alignment between our features and human ratings.
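The weak-label-to-white-box pipeline can be sketched as a ridge-regularized linear fit. This is a minimal sketch, not the paper's method: the feature names, toy data, and ridge constant are assumptions, and the weak labels here stand in for the outputs of the other graders:

```python
import numpy as np

def fit_whitebox_grader(features, weak_labels, ridge=1e-3):
    """Fit an interpretable linear grader on weak labels.

    `features` is (n_answers, n_features): a few interpretable
    signals (e.g. keyword overlap, length ratio) standing in for
    the expert features. `weak_labels` come from other, noisier
    graders, so no manual annotation is needed. Ridge-regularized
    least squares keeps the weights small and inspectable.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # bias column
    A = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ weak_labels)

def grade(weights, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ weights

# Toy data: the weak score grows with keyword overlap (feature 0).
X = np.array([[0.1, 0.5], [0.4, 0.5], [0.9, 0.5]])
y = np.array([1.0, 2.0, 4.0])  # weak labels from another grader
w = fit_whitebox_grader(X, y)
print(grade(w, X))  # tracks the weak labels; weights stay inspectable
```

Because the model is a plain linear map, the learned weights can be read directly, which is the interpretability analysis the abstract refers to (sparse weights, feature-to-score alignment).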

Deep neural networks (DNNs) struggle at systematic generalization (SG). Several studies have evaluated the possibility of promoting SG through the proposal of novel architectures, loss functions, or training methodologies. Few studies, however, have focused on the role of training data properties in promoting SG. In this work, we investigate the impact of certain data distributional properties, as inductive biases, on the SG ability of a multi-modal language model. To this end, we study three different properties. First, data diversity, instantiated as an increase in the possible values a latent property in the training distribution may take. Second, burstiness, where we probabilistically restrict the number of possible values of latent factors on particular inputs during training. Third, latent intervention, where a particular latent factor is altered randomly during training. We find that all three factors significantly enhance SG, with diversity contributing an 89% absolute increase in accuracy in the most affected property. Through a series of experiments, we test various hypotheses to understand why these properties promote SG. Finally, we find that Normalized Mutual Information (NMI) between latent attributes in the training distribution is strongly predictive of out-of-distribution generalization. One mechanism by which lower NMI induces SG lies in the geometry of representations. In particular, we find that NMI induces more parallelism in the neural representations of the model (i.e., input features coded in parallel neural vectors), a property related to the capacity for reasoning by analogy.
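For reference, the NMI statistic between two discrete latent attributes can be computed from co-occurrence counts as below. This is a from-scratch sketch of the standard definition (MI normalized by the geometric mean of entropies), not the paper's code; the toy attribute sequences are illustrative:

```python
import math
from collections import Counter

def normalized_mutual_information(xs, ys):
    """NMI between two discrete attributes observed jointly:
    MI(X; Y) / sqrt(H(X) * H(Y)), estimated from counts."""
    n = len(xs)
    px = Counter(xs)
    py = Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
             for (x, y), c in pxy.items())
    hx = -sum(c / n * math.log(c / n) for c in px.values())
    hy = -sum(c / n * math.log(c / n) for c in py.values())
    if hx == 0 or hy == 0:
        return 0.0
    return mi / math.sqrt(hx * hy)

# Perfectly coupled latent attributes -> NMI 1; independent -> 0.
print(normalized_mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
print(normalized_mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
```

Under the paper's finding, training distributions whose latent attributes score low on this statistic (closer to the second case) should generalize better out of distribution.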