Ongoing Projects
Currently, we are engaged in projects focused on advancing research in Artificial Intelligence (AI), particularly in Trustworthy AI, AI & Ethics, and Applied AI in Data for Good, Social Science and Healthcare. Specifically, we are working on the following projects:
- Exploring Large Language Models (LLMs) Alignment Through Personality Psychology
- AINarratives: Enhancing Chronic Pain Assessment and Treatment through AI-aided Narratives
- Advancing Social Science with Synthetic Populations via Large Language Models (LLMs)
- Industrial PhD: Exploring the Impact of AI on Political Candidates' News
- Industrial PhD: AI for Disaster Risk Management
Exploring Large Language Models (LLMs) Alignment Through Personality Psychology
Thanks to their remarkable proficiency in natural-language conversation and reasoning, LLMs are revolutionizing human-computer interaction. They increasingly permeate our personal, professional, and social lives, advising people on individual and collective decisions with real-world implications. This evolution raises the critical question of how well LLMs align with human judgment and values (see, e.g., Wang et al., 2023; Kirk et al., 2024). Research in psychology characterises judgment and values in the context of personality: positive correlations have been identified between personality traits and moral judgments (Sun and Smillie, 2024; Luke and Gawronski, 2022; Andrejević et al., 2022; Schwartz et al., 2021), value systems (Czerniawska and Szydło, 2021), civic engagement (Stahlmann et al., 2024), preferences for and voting for green parties (Bleidorn et al., 2024a), vaccination attitudes, intentions, and behaviours (Bleidorn et al., 2024b), and ethical vegetarianism and attitudes about animal welfare legislation (Smillie et al., 2024; Trenkenschuh et al., 2024). Building on this work, in this project we propose to tackle the question of LLM alignment through the lens of personality psychology. In other words, we aim to explore the extent to which insights from personality psychology can be applied to fine-tune LLMs' personalities and thereby address the alignment problem.
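To give a concrete flavour of this line of work, the sketch below shows how Likert-style personality items could be administered to an LLM and aggregated into trait scores. It is a minimal illustration in Python: the toy items, the reverse-keying, and the ask_llm helper are hypothetical placeholders, not the instrument or API used in the project.

```python
# Minimal sketch: administering Likert-style personality items to an LLM
# and aggregating trait scores. The items and the ask_llm() helper are
# illustrative placeholders, not the project's actual instrument or API.
import re
from collections import defaultdict

ITEMS = [
    # (trait, statement) -- toy items in the spirit of Big Five inventories
    ("extraversion", "I see myself as someone who is outgoing and sociable."),
    ("extraversion", "I see myself as someone who tends to be quiet."),  # reverse-keyed
    ("agreeableness", "I see myself as someone who is generally trusting."),
    ("conscientiousness", "I see myself as someone who does a thorough job."),
]
REVERSE_KEYED = {1}  # indices of reverse-keyed items

PROMPT = (
    "Rate how much you agree with the statement on a scale from 1 "
    "(disagree strongly) to 5 (agree strongly). Answer with a single number.\n"
    "Statement: {statement}"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM is under study."""
    return "3"  # stubbed neutral answer so the sketch runs end to end

def administer(items=ITEMS) -> dict:
    scores = defaultdict(list)
    for idx, (trait, statement) in enumerate(items):
        reply = ask_llm(PROMPT.format(statement=statement))
        match = re.search(r"[1-5]", reply)
        if not match:
            continue  # in practice: re-prompt or log a parsing failure
        value = int(match.group())
        if idx in REVERSE_KEYED:
            value = 6 - value  # reverse-score on a 1-5 scale
        scores[trait].append(value)
    return {trait: sum(v) / len(v) for trait, v in scores.items()}

if __name__ == "__main__":
    print(administer())
```

Repeating such administrations across prompts and sampling settings yields a personality profile that can then be compared with human norms or adjusted through fine-tuning.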
References:
- Andrejević, M., Smillie, L. D., Feuerriegel, D., Turner, W. F., Laham, S. M., & Bode, S. (2022). How do basic personality traits map onto moral judgments of fairness-related actions? Social Psychological and Personality Science, 13(3), 710-721.
- Bleidorn, W., Schilling, T., & Hopwood, C. J. (2024a). High Openness and Low Conscientiousness Predict Green Party Preferences and Voting. Social Psychological and Personality Science, 19485506241245157.
- Bleidorn, W., Stahlmann, A. G., & Hopwood, C. J. (2024b). Big Five personality traits and vaccination: A systematic review and meta-analysis.
- Czerniawska, M., & Szydło, J. (2021). Do values relate to personality traits and if so, in what way?–analysis of relationships. Psychology Research and Behavior Management, 511-527.
- Kirk, H. R., Vidgen, B., Röttger, P., & Hale, S. A. (2024). The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nature Machine Intelligence, 1-10.
- Luke, D. M., & Gawronski, B. (2022). Big Five Personality Traits and Moral-Dilemma Judgments: Two preregistered studies using the CNI model. Journal of Research in Personality, 101, 104297.
- Schwartz, F., Djeriouat, H., & Trémolière, B. (2021). The association between personality traits and third-party moral judgment: A preregistered study. Acta Psychologica, 219, 103392.
- Smillie, L. D., Ruby, M. B., Tan, N. P., Stollard, L., & Bastian, B. (2024). Differential responses to ethical vegetarian appeals: Exploring the role of traits, beliefs, and motives. Journal of Personality, 92(3), 800-819.
- Stahlmann, A. G., Hopwood, C. J., & Bleidorn, W. (2024). Big Five personality traits predict small but robust differences in civic engagement. Journal of Personality, 92(2), 480-494.
- Sun, J., & Smillie, L. D. (2024). Why moral psychology needs personality psychology. Journal of Personality, 92(3), 653-665.
- Trenkenschuh, M., Hopwood, C. J., & Dillard, C. (2024). Personality aspects and attitudes about animal welfare legislation. Anthrozoös, 1-20.
- Wang, Y., Zhong, W., Li, L., Mi, F., Zeng, X., Huang, W., … & Liu, Q. (2023). Aligning large language models with human: A survey. arXiv preprint arXiv:2307.12966.
Papers:
- TBA
Posters & Slide presentation:
- TBA
For more information about the project or to express interest in collaborating, please contact:
- akaltenbrunner@uoc.edu
- jamidei@uoc.edu
- rnietol@uoc.edu
Keywords: Trustworthy AI, AI, LLMs, LLM alignment problem, Personality Psychology.
AINarratives: Enhancing Chronic Pain Assessment and Treatment through AI-aided Narratives
Chronic pain poses a widespread challenge, affecting approximately 25% of the global population (Johannes et al., 2010; Leadley et al., 2012). Affecting people’s daily activities and quality of life (Breivik et al., 2006), chronic pain significantly contributes to the demand for medical services, imposing a noteworthy economic burden on both individuals experiencing the pain and society at large. Accordingly, improving assessment tools is crucial to understanding pain experiences and designing effective interventions.
This project aims to improve pain assessment processes by combining qualitative approaches to pain assessment with artificial intelligence (AI). Although standardized questionnaires are effective in gathering information about individuals, they may not capture the full spectrum of chronic pain experiences. Qualitative methods, such as written narratives (WN), can help overcome these limitations by offering a deeper understanding of a subjective experience such as pain (Hall and Powell, 2011; Vindrola-Padros and Johnson, 2014).
Although WN are valuable assessment tools, they can be time-consuming for clinicians and challenging for patients to produce. This project aims to streamline the process by helping patients articulate their pain experiences and aiding clinicians in their evaluations. More precisely, we will use AI, specifically large language models (LLMs), to make it easier for people with pain to narrate their experience, while enabling a quick assessment of those narratives by professionals. To achieve this, we will develop AINarratives, a platform where people can express their pain through writing or speech. AINarratives will prompt people for additional details to help them elaborate, and it will also automatically evaluate the communicated content and provide summary information on the parameters most relevant to practitioners. This information will enable professionals to explore in more depth the essential elements highlighted in the narratives and to formulate interventions tailored to the expressed needs.
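As a rough, hypothetical illustration of the narrative-to-summary step, the sketch below asks an LLM to turn a free-text pain narrative into a structured summary. The parameter list, the prompt wording, and the ask_llm helper are assumptions made for the example, not the platform's actual design.

```python
# Minimal sketch of a narrative-to-summary step such as the one envisioned
# for AINarratives. Parameters, prompt wording, and ask_llm() are hypothetical.
import json

PARAMETERS = ["intensity_0_to_10", "body_location", "duration", "interference_with_daily_life"]

PROMPT_TEMPLATE = (
    "You are assisting a pain clinician. Read the patient's narrative and "
    "return a JSON object with the keys {keys}. Use null when the narrative "
    "gives no information for a key.\n\nNarrative:\n{narrative}"
)

def ask_llm(prompt: str) -> str:
    """Placeholder for the LLM backend; returns a canned answer so the sketch runs."""
    return json.dumps({
        "intensity_0_to_10": 7,
        "body_location": "lower back",
        "duration": "about two years",
        "interference_with_daily_life": "cannot sit through a workday",
    })

def summarize_narrative(narrative: str) -> dict:
    prompt = PROMPT_TEMPLATE.format(keys=PARAMETERS, narrative=narrative)
    raw = ask_llm(prompt)
    try:
        summary = json.loads(raw)
    except json.JSONDecodeError:
        summary = {}  # in practice: re-prompt or flag the narrative for manual review
    return {key: summary.get(key) for key in PARAMETERS}

if __name__ == "__main__":
    text = ("My lower back has hurt for about two years; on bad days it is a 7 "
            "and I cannot sit through a workday.")
    print(summarize_narrative(text))
```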
References:
- Johannes CB, Le TK, Zhou X, et al. (2010). The prevalence of chronic pain in United States adults: results of an Internet-based survey. J Pain; 11:1230-9.
- Leadley RM, Armstrong N, Lee YC, et al. (2012). Chronic diseases in the European Union: the prevalence and health cost implications of chronic pain. J Pain Palliat Care Pharmacother; 26:310-325.
- Breivik H, Collett B, Ventafridda V, et al. (2006). Survey of chronic pain in Europe: prevalence, impact on daily life, and treatment. Eur J Pain; 10: 287-333.
- Hall JM, Powell J. (2011). Understanding the person through narrative. Nurs Res Pract; 2011:293837.
- Vindrola-Padros C, Johnson GA. (2014). The narrated, nonnarrated, and the disnarrated: conceptual tools for analyzing narratives in health services research. Qual Health Res; 24:1603-11.
Papers:
- TBA
Posters & Slide presentation:
- Nieto, R., Ferreira de Sa, J. G., Amidei, J. & Kaltenbrunner, A. (Poster, 2024). Written narratives to understand the experience of individuals with pain: Could Artificial Intelligence (AI) help in integrating them in clinical practice? XX Congress of the Spanish Society of Pain, Leon, Spain (29-31 May).
For more information about the project or to express interest in collaborating, please contact:
- akaltenbrunner@uoc.edu
- jamidei@uoc.edu
- rnietol@uoc.edu
Keywords: Applied AI in Healthcare, AI, LLMs, Chronic pain, Written Narrative.
Advancing Social Science with Synthetic Populations via Large Language Models (LLMs)
Questionnaires and surveys are efficient research methods for acquiring information about individuals, and they are especially valuable for uncovering details that are not directly observable or measurable (Sarantakos, 2005; Schofield and Forrester-Knauss, 2013). They are particularly useful for gathering data on people's subjective experiences. These tools can collect views on a wide range of topics, such as marketing research, customer satisfaction and service performance, political vote orientation, healthcare service utilization or usage intention, opinions about a service or a new procedure/intervention, and the occurrence of health issues for epidemiological purposes. Given their applications, efficiency in data collection, and low cost, surveys and questionnaires are widely used in Social Science (Gideon, 2012) and Healthcare (Schofield and Forrester-Knauss, 2013).
Designing effective surveys and questionnaires involves more than just assembling a series of questions. Attention must be paid to the overall structure, flow, coherence, and relevance of the questions (Sarantakos, 2005). The two most critical and time-consuming steps in this process are the pre-test (or pilot test) and the assessment of psychometric properties, such as reliability and validity. These steps are complex, requiring the computation of multiple indices and the application of the questionnaire to diverse populations.
In light of these challenges, this project aims to explore the potential of Large Language Models (LLMs) to assist researchers and healthcare professionals in simulating populations for testing surveys and questionnaires. Specifically, we seek to develop strategies, based on prompt engineering (Marvin et al., 2023) and few-shot training (Patil and Gudivada, 2024), to leverage LLMs for this purpose. Our goal is to facilitate the testing and development of surveys and questionnaires, thereby supporting researchers and healthcare professionals in their work.
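For example, once a synthetic population has answered a draft questionnaire, standard psychometric indices can be computed over the simulated response matrix. The sketch below computes Cronbach's alpha with NumPy on randomly generated Likert responses that stand in for LLM-generated answers; it illustrates the evaluation step only and makes no claim about the project's actual pipeline.

```python
# Minimal sketch: internal-consistency check (Cronbach's alpha) over a matrix of
# simulated questionnaire responses. Random data stands in for LLM-generated answers.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: (n_respondents, n_items) matrix of item scores."""
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 200 synthetic respondents answering 10 Likert items (1-5);
    # a shared latent score makes the items correlate, as items of a real scale should.
    latent = rng.normal(size=(200, 1))
    items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 10))), 1, 5)
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```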
References:
- Gideon, L. (Ed.). (2012). Handbook of survey methodology for the social sciences (Vol. 513). New York: Springer.
- Marvin, G., Hellen, N., Jjingo, D., & Nakatumba-Nabende, J. (2023, June). Prompt engineering in large language models. In International conference on data intelligence and cognitive informatics (pp. 387-402). Singapore: Springer Nature Singapore.
- Patil, R., & Gudivada, V. (2024). A review of current trends, techniques, and challenges in large language models (LLMs). Applied Sciences, 14(5), 2074.
- Sarantakos, S. (2005). Social Research (3rd edn). Palgrave Macmillan, New York.
- Schofield, M., & Forrester-Knauss, C. (2013). Surveys and questionnaires in health research. In Research methods in health: Foundations for evidence-based practice, 198-218.
Papers:
- De Sá, J. G. F., Kaltenbrunner, A., Amidei, J., & Nieto, R. (2024). How well do simulated populations with GPT-4 align with real ones in clinical trials? The case of the EPQR-A personality test. In Artificial Intelligence and Data Science for Healthcare: Bridging Data-Centric AI and People-Centric Healthcare.
- Ferreira, J. G., Amidei, J., Nieto, R., & Kaltenbrunner, A. (2024). Matching GPT-simulated populations with real ones in psychological studies – the case of the EPQR-A personality test. To appear in ACM Transactions on Intelligent Systems and Technology.
Posters & Slide presentation:
- Nieto, R., Ferreira de Sa, J. G., Amidei, J. & Kaltenbrunner, A. (Oral Communication, 2024). ¿Puede ChatGPT simular poblaciones para experimentos clínicos? El caso del cuestionario de personalidad EPQR-A [Can ChatGPT simulate populations for clinical experiments? The case of the EPQR-A personality questionnaire]. XXXVII Jornada de Terapia del Comportamiento y Medicina Conductual en la Práctica Clínica (Sociedad Catalana de Psiquiatria y Salud Mental), Barcelona (21 March 2024).
For more information about the project or to express interest in collaborating, please contact:
- akaltenbrunner@uoc.edu
- jamidei@uoc.edu
- rnietol@uoc.edu
Keywords: Applied AI in Social Science and Healthcare, AI, LLMs, Questionnaires, Surveys.
Industrial PhD: Exploring the Impact of AI on Political Candidates' News
Artificial Intelligence models, such as Recommender Systems (RSs) and Large Language Models (LLMs), are increasingly mediating access to political information, raising concerns about their impact on democracy, particularly during elections. However, despite the growing body of research on AI systems, a cross-platform and cross-country comparison of the impact of different AI systems within the same election is still missing. This project investigates the effects of RSs and LLMs on the news landscape using the 2024 EU elections as a case study.
Focusing on YouTube's and TikTok's RSs and various LLMs, the research will examine how prominent politicians are presented across nations and platforms. Additionally, the project will assess potential biases, risks, and legal implications with respect to European regulations.
By combining algorithmic auditing techniques and digital methods for data collection with computational methods for analysis, the research aims to provide insights into how AI systems shape the diffusion of political candidates' information and to offer recommendations for assessing compliance with the mandatory Risk Assessments and independent audits outlined in the Digital Services Act. The findings will be presented through a series of scholarly papers and a final thesis, with the goal of fostering discussion and collaboration across academia, industry, and regulatory bodies.
In particular, the project aims to answer the following research questions:
RQ1: How do AI systems shape the diffusion of political candidates' information during elections?
RQ2: How can independent algorithmic audits be realized through new cross-platform methodologies for comparing RSs and LLMs?
RQ3: What recommendations can be provided to independently assess how online platforms comply with the mandatory Risk Assessment for foreseeable adverse effects on electoral processes (DSA, Art. 34), and which techniques are best suited for performing the mandatory independent audits (DSA, Art. 37)?
To that end, the research will employ a combination of algorithmic auditing techniques (Bandy, 2021) and digital methods to conduct a cross-platform examination (Rogers, 2017; Venturini, 2018), replicating the same approach across the two most prevalent video-sharing platforms, YouTube and TikTok, and the two most widely used large language model applications.
The data collection will cover at least two countries during the lead-up to (three months) and aftermath of (two weeks) their respective European Elections. This will provide a comprehensive view of how AI systems re-mediate information about election candidates. For the part related to recommender systems, a list of politically categorized news actors will be used to recreate users' browsing activity with automated accounts, or "sock puppets" (Sandvig, 2014), simulating their echo chambers in order to record the resulting filter bubbles (Zimmer, 2019). The recommendations will be collected with passive scraping tools (Tracking Exposed and DMI). This technology is also adaptable to crowdsourced data donations for a parallel collection involving actual voters. Alternatively, a Data Subject Access Request (GDPR, Art. 15) will be used for the same purpose.
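One way to keep the collected recommendations comparable across platforms and countries is to normalize every observation into a single record format; the schema below is a hypothetical sketch and does not reflect the output format of the scraping tools mentioned above.

```python
# Hypothetical, normalized record for one observed recommendation, so that
# YouTube/TikTok sock-puppet logs and data donations can be analyzed together.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RecommendationRecord:
    platform: str            # "youtube" or "tiktok"
    country: str             # ISO country code of the sock puppet / donor
    puppet_profile: str      # political categorization used to train the account
    collected_at: datetime   # timestamp of the scraping session
    item_id: str             # platform-specific video identifier
    channel: str             # publishing account or channel
    title: str               # video title as shown to the account
    rank: int                # position in the recommended feed

if __name__ == "__main__":
    record = RecommendationRecord(
        platform="youtube", country="ES", puppet_profile="centre-left news diet",
        collected_at=datetime.now(timezone.utc), item_id="abc123",
        channel="ExampleNewsChannel", title="EU election debate highlights", rank=1,
    )
    print(asdict(record))
```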
Furthermore, the research will use a mix of topic modelling, network analysis, and hypothesis testing, along with other methodologies such as sentiment analysis, clustering, and deep learning, to compare the data collected from different recommender systems and large language models (Romano, 2021; Romano, 2022). By incorporating these diverse analytical techniques, the research will provide a comprehensive and holistic understanding of the impact of AI systems on political discourse during the European Elections, and it will identify potential biases and discrepancies in the recommendations and outputs generated by these systems. This in-depth analysis will ultimately contribute to the development of more transparent and accountable AI technologies in the realm of politics and elections.
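As one concrete instance of this analysis mix, the sketch below fits a small LDA topic model (scikit-learn) over recommended video titles and compares topic prevalence between two platforms. The titles are toy data and the preprocessing is deliberately minimal; the project's actual choice of models and features may differ.

```python
# Minimal sketch: compare topic prevalence in recommended titles across platforms
# with a small LDA model (scikit-learn). Titles here are toy data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

titles = [
    ("youtube", "Candidate A debates migration policy ahead of EU elections"),
    ("youtube", "EU elections 2024: economy and inflation dominate debate"),
    ("youtube", "Candidate B rally draws thousands before the vote"),
    ("tiktok", "Funny moments from the EU election debate"),
    ("tiktok", "Candidate A reacts to viral migration clip"),
    ("tiktok", "Why young voters care about climate this election"),
]
platforms = [p for p, _ in titles]
texts = [t for _, t in titles]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(doc_term)  # (n_docs, n_topics) mixture weights

# Average topic mixture per platform: a first, crude cross-platform comparison.
for platform in ("youtube", "tiktok"):
    mask = np.array([p == platform for p in platforms])
    print(platform, doc_topics[mask].mean(axis=0).round(2))

# Top words per topic, to label what each topic is about.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = vocab[np.argsort(weights)[::-1][:4]]
    print(f"topic {k}:", ", ".join(top))
```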
References:
- Rogers, R. (2017). Digital methods for cross-platform analysis. The SAGE handbook of social media, 91-110.
Papers:
- TBA
Posters & Slide presentation:
- TBA
For more information about the project or to express interest in collaborating, please contact:
- akaltenbrunner@uoc.edu
- jamidei@uoc.edu
- sromano1@uoc.edu
Keywords: Trustworthy AI, AI & Ethics, AI, Algorithm Auditing.
Industrial PhD: AI for Disaster Risk Management
Anticipation of weather impacts with Artificial Intelligence techniques
In the context of Climate Change, extreme weather-induced events are expected to increase in frequency and intensity. Against this background, Early Warning Systems (EWS) have been identified as a crucial instrument for triggering actions that support situational awareness and rapid response to weather-related emergencies. International organizations (such as the UN, WMO, and UNDRR) are promoting the development and implementation of Impact-based EWS (IEWS) adapted to the local needs of authorities, first responders, and the population. IEWS aim to anticipate the actual impact of extreme weather on people, infrastructures, and critical services and elements, rather than the meteorological extremes themselves. Artificial Intelligence and Machine Learning techniques can identify patterns in meteorological data that relate to impacts with greater lead time than traditional methods. In this project, highly detailed real-life impact data, together with a wide variety of meteorological inputs, will be used to recognize these types of patterns and to significantly improve impact-based early warning.
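As a toy illustration of this pattern-recognition step, the sketch below trains a random forest to flag high-impact events from a few meteorological features. The features, labels, and data are synthetic placeholders standing in for the highly detailed impact data mentioned above.

```python
# Toy sketch: learning a weather-impact warning from meteorological features.
# Features, labels, and data are synthetic placeholders for the project's real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000
# Synthetic features: 24h rainfall (mm), max wind gust (km/h), soil saturation (0-1).
X = np.column_stack([
    rng.gamma(shape=2.0, scale=15.0, size=n),
    rng.gamma(shape=3.0, scale=20.0, size=n),
    rng.uniform(0, 1, size=n),
])
# Synthetic "impact" label: more likely when rain, wind, and saturation are all high.
risk = 0.02 * X[:, 0] + 0.015 * X[:, 1] + 2.0 * X[:, 2]
y = (risk + rng.normal(scale=0.5, size=n) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```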
Papers:
- Kooshki Forooshani, M., van den Homberg, M., Kalimeri, K., Kaltenbrunner, A., Mejova, Y., Milano, L., Ndirangu, P., Paolotti, D., Teklesadik, A., & Turner, M. L. (2024). Towards a global impact-based forecasting model for tropical cyclones. Natural Hazards and Earth System Sciences, 24(1), 309-329.
For more information about the project or to express interest in collaborating, please contact:
- akaltenbrunner@uoc.edu
- alapedriza@uoc.edu
Keywords: Applied AI, Disaster Prediction.