Denis Parra

PUBLICATIONS

Green AI aims to develop accurate AI models that are also environmentally sustainable, especially in terms of carbon emissions. Since few studies address this topic in recommender systems, we analyze the trade-offs between recommendation performance and carbon footprint in session-based recommender systems. Using five public e-commerce datasets, we predict the next item a user will interact with based solely on their past click events. We evaluate the GRU4Rec algorithm and five unofficial reimplementations in different deep learning frameworks (Theano, PyTorch, TensorFlow, Keras, and RecPack). The results indicate a strong effect of the loss function and dataset size on the carbon footprint, without a significant effect on accuracy metrics. We show evidence that the choice of implementation for the same algorithm strongly affects the CO₂ emitted, and that optimized implementations do not sacrifice recommendation effectiveness, which should be considered when choosing a framework or implementation for an algorithm.
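To give a sense of how a training run's energy use translates into a carbon estimate, here is a minimal back-of-the-envelope sketch. The function, its name, and the default grid-intensity constant are illustrative assumptions, not the paper's measurement methodology (which tracks actual hardware consumption per framework):

```python
def training_co2_grams(avg_power_watts: float, hours: float,
                       grid_intensity_g_per_kwh: float = 475.0) -> float:
    """Rough CO2-equivalent estimate for a training run.

    avg_power_watts: mean power draw of the hardware during training.
    hours: wall-clock training time.
    grid_intensity_g_per_kwh: carbon intensity of the local electricity
        grid in grams CO2e per kWh (475 is a placeholder value; real
        intensities vary widely by country and hour of day).
    """
    energy_kwh = avg_power_watts * hours / 1000.0  # W * h -> kWh
    return energy_kwh * grid_intensity_g_per_kwh

# e.g. a 300 W GPU training for 10 hours:
# 3.0 kWh * 475 gCO2e/kWh = 1425 gCO2e
print(training_co2_grams(300, 10))  # -> 1425.0
```

The same estimate explains why implementation choice matters: two reimplementations of one algorithm that reach similar accuracy but differ severalfold in training time differ severalfold in emissions.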

Publisher: Elsevier, Data in Brief, Link>

ABSTRACT

The COVID-19 pandemic has underlined the need for reliable information for clinical decision-making and public health policies. As such, evidence-based medicine (EBM) is essential in identifying and evaluating scientific documents pertinent to novel diseases, and the accurate classification of biomedical text is integral to this process. Given this context, we introduce a comprehensive, curated dataset composed of COVID-19-related documents.

This dataset includes 20,047 labeled documents that were meticulously classified into five distinct categories: systematic reviews (SR), primary study randomized controlled trials (PS-RCT), primary study non-randomized controlled trials (PS-NRCT), broad synthesis (BS), and excluded (EXC). The documents, labeled by collaborators from the Epistemonikos Foundation, incorporate information such as document type, title, abstract, and metadata, including PubMed id, authors, journal, and publication date.

Uniquely, this dataset has been curated by the Epistemonikos Foundation and is not readily accessible through conventional web-scraping methods, thereby attesting to its distinctive value in this field of research. In addition to this, the dataset also includes a vast evidence repository comprising 427,870 non-COVID-19 documents, also categorized into SR, PS-RCT, PS-NRCT, BS, and EXC. This additional collection can serve as a valuable benchmark for subsequent research. The comprehensive nature of this open-access dataset and its accompanying resources is poised to significantly advance evidence-based medicine and facilitate further research in the domain.


Publisher: ACM Computing Surveys, Link>

ABSTRACT

Every year, physicians face an increasing demand from patients for image-based diagnosis, a problem that recent artificial intelligence methods can help address. In this context, we survey work on automatic report generation from medical images, with emphasis on methods using deep neural networks, with respect to: (1) datasets, (2) architecture design, (3) explainability, and (4) evaluation metrics. Our survey identifies interesting developments as well as remaining challenges. Among them, the current evaluation of generated reports is especially weak, since it mostly relies on traditional Natural Language Processing (NLP) metrics, which do not accurately capture medical correctness.


Publisher: Computers and Electronics in Agriculture, Link>

ABSTRACT

Decision support systems have become increasingly popular in the domain of agriculture. With the development of automated machine learning, agricultural experts are now able to train, evaluate and make predictions using cutting edge machine learning (ML) models without the need for much ML knowledge. Although this automated approach has led to successful results in many scenarios, in certain cases (e.g., when few labeled datasets are available) choosing among different models with similar performance metrics is a difficult task. Furthermore, these systems do not commonly allow users to incorporate their domain knowledge that could facilitate the task of model selection, and to gain insight into the prediction system for eventual decision making. To address these issues, in this paper we present AHMoSe, a visual support system that allows domain experts to better understand, diagnose and compare different regression models, primarily by enriching model-agnostic explanations with domain knowledge. To validate AHMoSe, we describe a use case scenario in the viticulture domain, grape quality prediction, where the system enables users to diagnose and select prediction models that perform better. We also discuss feedback concerning the design of the tool from both ML and viticulture experts.


Publisher: Revista Bits de Ciencia, Link>

ABSTRACT

It was 2010, and I was pursuing my PhD, focused on personalization and recommender systems, at the University of Pittsburgh, located in the city of the same name in western Pennsylvania, United States. The most advanced techniques in my research topic came from the field known as Machine Learning, so I felt the need to take an advanced course to round out my training. In the fall semester I finally enrolled in the Machine Learning course and, thanks to an academic agreement, I was able to take it at the neighboring university, Carnegie Mellon University. I was truly excited to take a course on a topic of such growing relevance at one of the best universities in the world in the field of computing.


Publisher: Link>

ABSTRACT

Techniques for presenting objects spatially via density maps have been thoroughly studied, but there is a lack of research on how to display this information in the presence of several classes, i.e., multiclass density maps. Moreover, there is even less research on how to design interactive visualizations for comparison tasks on multiclass density maps. Crime analytics is one application domain that requires this type of visualization for comparison tasks, and the lack of research in this area results in ineffective visual designs. To fill this gap, we study four interactive techniques for comparing multiclass density maps, using car theft data: swipe, translucent overlay, magic lens, and juxtaposition. The results of a user study (N=32) indicate that juxtaposition yields the worst performance for comparing distributions, whereas swipe and magic lens perform best in terms of time needed to complete the experiment. Our research provides empirical evidence on how to design interactive idioms for multiclass density spatial data, and it opens a line of research for other domains and visual tasks.


Publisher: Link>

ABSTRACT

We address the task of automatically generating a medical report from chest X-rays. Many authors have proposed deep learning models for this task, but they focus mainly on improving NLP metrics, such as BLEU and CIDEr, which are not suitable for measuring the clinical correctness of reports. In this work, we propose CNN-TRG, a template-based report generation model that detects a set of abnormalities and verbalizes them via fixed sentences, which is much simpler than other state-of-the-art NLG methods and achieves better results on medical correctness metrics. We benchmark our model on the IU X-ray and MIMIC-CXR datasets against naive baselines as well as deep learning based models, employing the CheXpert labeler and MIRQI as clinical correctness evaluations and NLP metrics as a secondary evaluation. We also provide further evidence that traditional NLP metrics are not suitable for this task by demonstrating their lack of robustness in multiple cases: slightly altering a template-based model can increase NLP metrics considerably while maintaining high clinical performance. Our work contributes a simple but effective approach for chest X-ray report generation, and it supports a model evaluation focused primarily on clinical correctness metrics and secondarily on NLP metrics.
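The core problem with n-gram metrics can be shown with a toy example. The scorer below is a crude unigram-precision stand-in for BLEU-1 (not the paper's evaluation code, and the example sentences are invented): a report stating the clinically opposite finding scores high because it reuses the reference's words, while a correct report in different vocabulary scores low.

```python
def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate tokens that appear in the reference,
    a crude stand-in for n-gram overlap metrics such as BLEU-1."""
    cand_tokens = candidate.lower().split()
    ref_tokens = set(reference.lower().split())
    return sum(t in ref_tokens for t in cand_tokens) / len(cand_tokens)

reference = "the heart is mildly enlarged"
wrong = "the heart is not enlarged"    # opposite clinical finding
right = "cardiomegaly is present"      # correct finding, different wording

print(unigram_precision(wrong, reference))  # 4/5 = 0.8, despite being wrong
print(unigram_precision(right, reference))  # 1/3 ~ 0.33, despite being right
```

This is why clinical correctness evaluations (label-based comparisons such as the CheXpert labeler) are used as the primary measure rather than word-overlap scores.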


Publisher: Frontiers in Robotics and AI, Link>

ABSTRACT

Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most relevant research and development areas of recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This spectacular development has not been foreign to the world of the arts, as recent advances in generative networks have made the artificial creation of high-quality content such as images, movies, or music possible. We believe that these novel generative models pose a great challenge to our current understanding of computational creativity. If a robot can now create music that an expert cannot distinguish from music composed by a human, create novel musical entities that were not known at training time, or exhibit conceptual leaps, does that mean the machine is creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational autoencoder trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts despite being trained with images rather than musical data. We discuss and assess these generative models in terms of their creativity, show that they are in practice capable of learning musical concepts that are not obvious from the training data, and hypothesize that, based on our current understanding of creativity in robots and machines, these deep models can in fact be considered creative.


Publisher: , Link>

ABSTRACT

In recent years, instructional design has become even more challenging for teaching staff in higher education institutions. If instructional design causes student overload, it can lead to superficial learning and decreased student well-being. One strategy to avoid overload is to reflect on the effectiveness of teaching practices in terms of time-on-task. This article presents work in progress on providing teachers with a dashboard to visualize student self-reports of time-on-task for subject activities. A questionnaire was administered to 15 instructors during a set trial period to evaluate the perceived usability and usefulness of the dashboard. Preliminary findings reveal that the dashboard helped instructors become aware of the number of hours students spent outside of class time. Furthermore, visualizations of time-on-task evidence enabled them to redesign subject activities. The dashboard has since been adopted by 106 engineering instructors. Future work involves developing a framework to incorporate user-based improvements.


Advancing representation learning in specialized fields like medicine remains challenging due to the scarcity of expert annotations for text and images. To tackle this issue, we present a novel two-stage framework designed to extract high-quality factual statements from free-text radiology reports in order to improve the representations of text encoders and, consequently, their performance on various downstream tasks. In the first stage, we propose a Fact Extractor that leverages large language models (LLMs) to identify factual statements from well-curated domain-specific datasets. In the second stage, we introduce a Fact Encoder (CXRFE) based on a BERT model fine-tuned with objective functions designed to improve its representations using the extracted factual data. Our framework also includes a new embedding-based metric (CXRFEScore) for evaluating chest X-ray text generation systems, leveraging both stages of our approach. Extensive evaluations show that our fact extractor and encoder outperform current state-of-the-art methods on tasks such as sentence ranking, natural language inference, and label extraction from radiology reports. Additionally, our metric proves more robust and effective than metrics commonly used in the radiology report generation literature. The code of this project is available at https://github.com/PabloMessina/CXR-Fact-Encoder.
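To illustrate the general shape of an embedding-based fact metric, here is a small sketch. It is not the official CXRFEScore implementation: the function name, the soft precision/recall aggregation, and the toy vectors are assumptions; in practice the embeddings would come from the fact encoder applied to extracted factual statements.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def fact_match_score(gen_embs, ref_embs):
    """Score a generated report against a reference via fact embeddings.

    Each generated fact is matched to its most similar reference fact
    (soft precision) and vice versa (soft recall); the score is their mean.
    """
    precision = sum(max(cosine(g, r) for r in ref_embs)
                    for g in gen_embs) / len(gen_embs)
    recall = sum(max(cosine(r, g) for g in gen_embs)
                 for r in ref_embs) / len(ref_embs)
    return (precision + recall) / 2

# Toy example: one generated fact matches the reference exactly,
# the other is orthogonal (unrelated), so precision suffers but not recall.
gen = [(1.0, 0.0), (0.0, 1.0)]
ref = [(1.0, 0.0)]
print(fact_match_score(gen, ref))  # (0.5 + 1.0) / 2 = 0.75
```

Unlike word-overlap metrics, a score of this form is insensitive to paraphrase as long as the encoder maps equivalent facts to nearby vectors.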
