Publications

Denis Parra

Publisher: Frontiers in Robotics and AI

AUTHORS

Manuel Cartagena, Rodrigo Cádiz, Agustín Macaya, Denis Parra

ABSTRACT

Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most relevant research and development areas of recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This spectacular development has not been alien to the world of the arts, as recent advances in generative networks have made possible the artificial creation of high-quality content such as images, movies or music. We believe that these novel generative models pose a great challenge to our current understanding of computational creativity. If a robot can now create music that an expert cannot distinguish from music composed by a human, or create novel musical entities that were not known at training time, or exhibit conceptual leaps, does it mean that the machine is creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational auto-encoder network trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts despite having been trained on images rather than musical data. We discuss and assess these generative models in terms of their creativity, showing that they are in practice capable of learning musical concepts that are not obvious from the training data. Based on our current understanding of creativity in robots and machines, we hypothesize that these deep models can, in fact, be considered creative.



Publisher: CEUR Workshop Proceedings

AUTHORS

Hans Löbel, Gregory Schuit, Vicente Castro, Pablo Pino, Denis Parra

ABSTRACT

This article describes PUC Chile team’s participation in the Concept Detection task of the ImageCLEFmedical 2021 challenge, in which the team earned fourth place. We made two submissions: the first, based on a naive approach, obtained an F1 score of 0.141; an improved version, which leveraged the Perceptual Similarity among images, obtained a final F1 score of 0.360. We describe our data analysis and our different approaches in detail, and conclude by discussing some ideas for future work.



Publisher: CLEF2021 Working Notes, CEUR Workshop Proceedings

AUTHORS

Vicente Castro, Hans Löbel, Pablo Pino, Denis Parra

ABSTRACT

This article describes PUC Chile team’s participation in the Caption Prediction task of the ImageCLEFmedical 2021 challenge, which resulted in the team winning this task. We first show how a very simple approach based on a statistical analysis of captions, without relying on images, yields a competitive baseline score. Then, we describe how to improve on this preliminary submission by encoding the medical images with a ResNet CNN, pre-trained on ImageNet and later fine-tuned with the challenge dataset. Finally, we use this visual encoding as the input to a multi-label classification approach for caption prediction.
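The multi-label formulation in this abstract can be sketched in a few lines: each caption concept gets an independent sigmoid score over the visual encoding, and every concept above a threshold is predicted. The labels, weights, and feature vector below are purely illustrative stand-ins, not the fine-tuned ResNet encoding or challenge vocabulary from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_concepts(features, weights, biases, labels, threshold=0.5):
    """Multi-label prediction: every concept whose sigmoid score meets
    the threshold is included in the predicted set (concepts are not
    mutually exclusive, unlike softmax classification)."""
    predicted = []
    for j, label in enumerate(labels):
        logit = sum(f * w[j] for f, w in zip(features, weights)) + biases[j]
        if sigmoid(logit) >= threshold:
            predicted.append(label)
    return predicted

# Toy stand-ins: a 4-dim "visual encoding" and 3 candidate concepts.
labels = ["chest x-ray", "fracture", "effusion"]
weights = [[ 2.0, -1.0,  0.0],   # one row per feature dimension
           [ 1.5,  0.5, -2.0],
           [-1.0,  2.0,  0.0],
           [ 0.0,  0.0,  1.0]]
biases = [-1.0, -1.0, -1.0]
features = [1.0, 1.0, 0.0, 0.0]

print(predict_concepts(features, weights, biases, labels))  # -> ['chest x-ray']
```

Lowering the threshold trades precision for recall, which is the main knob when the task is scored with F1.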



2022, Publisher: CLEF2021 Working Notes, CEUR Workshop Proceedings

AUTHORS

José Miguel Quintana, Hans Löbel, Pablo Messina, Ria Deane, Pablo Pino, Denis Parra, Daniel Florea

ABSTRACT

This article describes the participation and results of the PUC Chile team in the Tuberculosis task of the ImageCLEFmedical 2021 challenge. We were ranked 7th based on the kappa metric and 4th in terms of accuracy. We describe three approaches we tried in order to address the task. Our best approach used 2D images visually encoded with a DenseNet neural network, whose representations were concatenated to finally output the classification with a softmax layer. We describe this and the other two approaches in detail, and we conclude by discussing some ideas for future work.



Publisher: Working Notes of CLEF

AUTHORS

Pablo Messina, Hans Löbel, Ricardo Schilling, Denis Parra

ABSTRACT

This paper describes the submission of the IALab group of the Pontifical Catholic University of Chile to the Medical Domain Visual Question Answering (VQA-Med) task. Our participation was rather simple: we approached the problem as image classification. We took a DenseNet121 with its weights pre-trained on ImageNet and fine-tuned it with the VQA-Med 2020 dataset labels to predict the answer. Different answers were treated as different classes, and the questions were disregarded for simplicity, since they essentially all ask for abnormalities. With this very simple approach we ranked 7th among 11 teams, with a test-set accuracy of 0.236.
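The "VQA as classification" idea above reduces to mapping each distinct answer string to a class and taking the argmax of the class scores; the question is never consulted. The answer vocabulary and logits here are hypothetical examples, not from the VQA-Med dataset:

```python
# Hypothetical answer vocabulary: each distinct answer string is one class.
answers = ["no abnormality", "pneumothorax", "cardiomegaly"]

def predict_answer(logits, answers):
    """Treat VQA as plain classification: return the answer whose class
    score is highest. The question text is ignored entirely."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return answers[best]

# Scores such as a fine-tuned CNN classification head might output.
print(predict_answer([0.2, 1.7, -0.3], answers))  # -> pneumothorax
```

The trade-off is that the model can only ever emit answers seen at training time, which is why this framing works best when the answer set is small and closed.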




AUTHORS

Martin Anselmo, Isabel Hilliger, Gregory Schuit, Fernando Duarte, Constanza Miranda, Denis Parra

ABSTRACT

In recent years, instructional design has become even more challenging for teaching staff in higher education institutions. If instructional design causes student overload, it can lead to superficial learning and decreased student well-being. One strategy to avoid overload is to reflect on the effectiveness of teaching practices in terms of time-on-task. This article presents a Work-In-Progress conducted to provide teachers with a dashboard for visualizing student self-reports of time-on-task for subject activities. A questionnaire was administered to 15 instructors during a set trial period to evaluate the perceived usability and usefulness of the dashboard. Preliminary findings reveal that the dashboard helped instructors become aware of the number of hours students spent outside of class time. Furthermore, data visualizations of time-on-task evidence enabled them to redesign subject activities. The dashboard has currently been adopted by 106 engineering instructors. Future work involves the development of a framework to incorporate user-based improvements.




AUTHORS

Manuel Cartagena, Patricio Cerda-Mardini, Felipe del Río, Antonio Ossa-Guerra, Denis Parra

ABSTRACT

This tutorial serves as an introduction to deep learning approaches for building visual recommendation systems. Deep learning models can be used as feature extractors and perform extremely well in visual recommender systems at creating representations of visual items. This tutorial covers the foundations of convolutional neural networks and then shows how to use them to build state-of-the-art personalized recommendation systems. The tutorial is designed as a hands-on experience, focused on providing both theoretical knowledge and practical experience on the topics of the course.




AUTHORS

Fernando V Paulovich, Osnat Mokryn, Axel J Soto, Evangelos Milios, Dorota Glowacka, Denis Parra

ABSTRACT

This is the fourth edition of the Workshop on Exploratory Search and Interactive Data Analytics (ESIDA). This series of workshops emerged as a response to the growing interest in developing new methods and systems that allow users to interactively explore large volumes of data, such as documents, multimedia, or specialized collections, such as biomedical datasets. There are various approaches to supporting users in this interactive environment, ranging from developing new algorithms through visualization methods to analyzing users’ search patterns. The overarching goal of ESIDA is to bring together researchers working in areas that span across multiple facets of exploratory search and data analytics to discuss and outline research challenges for this novel area.



Publisher: arXiv

AUTHORS

Iván Cantador, Fernando Diez, Andrés Carvallo, Denis Parra

ABSTRACT

The success of neural network embeddings has spurred renewed interest in using knowledge graphs for a wide variety of machine learning and information retrieval tasks. In particular, current recommendation methods based on graph embeddings have shown state-of-the-art performance. These methods commonly encode latent rating patterns and content features. Different from previous work, in this paper we propose to exploit embeddings extracted from graphs that combine information from ratings and aspect-based opinions expressed in textual reviews. We then adapt and evaluate state-of-the-art graph embedding techniques over graphs generated from Amazon and Yelp reviews on six domains, outperforming baseline recommenders. Our approach has the advantage of providing explanations that leverage the aspect-based opinions users give about recommended items. Furthermore, we provide examples of the applicability of recommendations that use aspect opinions as explanations in a visualization dashboard, which surfaces the most and least liked aspects of similar users, obtained from the embeddings of an input graph.

