Research Engineer (RE1) at the Barcelona Supercomputing Center

The Computer Architecture and Operating System group at the Barcelona Supercomputing Center carries out research on programming models for critical embedded systems in charge of controlling fundamental parts of cars, airplanes, and satellites. Our work is mainly done in the context of bilateral projects with several processor companies, as well as several European-funded projects. For a complete list of the group's publications in recent years, please visit: www.bsc.es/caos

The deployment of Artificial Intelligence (AI) based solutions to deliver advanced software functionalities is consolidating as a key competitive factor in several industrial domains. In the automotive industry, autonomous driving (AD) software is meant to support autonomous operation and decision making for all aspects of a vehicle by processing massive amounts of data coming from multiple sensors such as cameras and LiDARs. The resulting computational requirements can only be met by complex MPSoCs (Multi-Processor Systems-on-Chip) with generic and ad-hoc hardware accelerators. Moreover, the increasing complexity of AI-based software functionalities encourages the use of highly modular middleware frameworks such as ROS2, CyberRT, or Autoware, running on top of general-purpose and/or automotive operating systems. Performance and (timing) analyzability are two fundamental (and sometimes conflicting) requirements for this type of system, where extensive guarantees must be provided on the capability to deliver correct results in a timely manner, as dictated by domain-specific Functional Safety (FuSa) standards.
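To make the middleware point concrete, here is a minimal sketch of a modular ROS2 node in Python (rclpy). It is illustrative only: the topic name, the queue depth, and the stubbed-out inference step are assumptions for the example, not part of any BSC codebase.

    # Minimal ROS2 node sketch. The topic "/camera/image_raw" and the
    # inference placeholder are assumptions, not taken from the posting.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image

    class PerceptionNode(Node):
        """Subscribes to camera frames and hands them to a (stubbed) AI model."""

        def __init__(self):
            super().__init__('perception_node')
            # Queue depth 10 is a common default; real AD stacks tune QoS
            # policies carefully to meet their timing guarantees.
            self.create_subscription(Image, '/camera/image_raw',
                                     self.on_frame, 10)

        def on_frame(self, msg: Image):
            # Placeholder for inference offloaded to an MPSoC accelerator.
            self.get_logger().info(f'got frame {msg.width}x{msg.height}')

    def main():
        rclpy.init()
        rclpy.spin(PerceptionNode())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()

The publish/subscribe decomposition is what makes such stacks modular: each AI functionality lives in its own node and can be analyzed, scheduled, and accelerated independently.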

We invite applications for a PhD student to join us in a groundbreaking journey as we work on the AI4Debunk project. AI4Debunk is dedicated to combating disinformation on two critical fronts: the war in Ukraine and climate change. Drawing on interdisciplinary expertise and sociological insights, we will analyze in depth how to extend the Trustworthy AI framework developed for safety-critical systems to these case studies, with a specific focus on Large Language Models (LLMs) from a mathematical standpoint, so as to provide explainable decisions based on causal inference.
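As one hedged illustration of what LLM explainability can look like in practice, the sketch below uses gradient-based token saliency, a standard technique rather than the project's specific method, to attribute a classifier's decision back to individual input tokens via the Hugging Face transformers library. The checkpoint and example sentence are placeholders.

    # Gradient-based token saliency for a text classifier. The checkpoint
    # below is an off-the-shelf sentiment model used only as a stand-in;
    # a real disinformation detector would be fine-tuned for that task.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    name = 'distilbert-base-uncased-finetuned-sst-2-english'
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    model.eval()

    enc = tokenizer('The moon landing was staged.', return_tensors='pt')
    # Embed the tokens ourselves so gradients can be taken w.r.t. embeddings.
    embeds = model.get_input_embeddings()(enc['input_ids'])
    embeds.retain_grad()
    logits = model(inputs_embeds=embeds,
                   attention_mask=enc['attention_mask']).logits
    logits[0].max().backward()

    # The gradient norm per token is a crude saliency score: larger values
    # mean the token influenced the predicted class more.
    saliency = embeds.grad[0].norm(dim=-1)
    tokens = tokenizer.convert_ids_to_tokens(enc['input_ids'][0])
    for token, score in zip(tokens, saliency):
        print(f'{token:>12}  {score.item():.3f}')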

Key Duties

  • Explainability in Large Language Models: Investigate and enhance the explainability of LLMs to provide insights into their decision-making processes when analyzing disinformation content, thus making AI-driven debunking more transparent and trustworthy.
  • Causality Techniques for Disinformation Analysis: Explore and develop novel causality models and methodologies to identify and understand the causal relationships between disinformation campaigns, their sources, and their impact on society (a toy illustration follows this list).
  • Safety-Critical Systems Integration: Study the application of causality and explainability techniques in the context of safety-critical systems, such as autonomous driving and embedded systems, to ensure that AI4Debunk's methods are robust and reliable.
  • Improving Trustworthiness: Extend research to improve the trustworthiness of AI systems in various applications, including healthcare and finance.
  • Predictability: Develop techniques that enhance the predictability of AI systems, making them more reliable and interpretable for critical decision-making.
  • Machine Learning for Embedded Systems: Apply machine learning techniques to embedded systems in domains such as automotive, space, and industrial automation to enhance their performance, safety, and adaptability.
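
The toy illustration referenced in the causality duty above is sketched below. It is a minimal, self-contained example of why causal adjustment matters, not a project deliverable: a simulated confounder biases the naive estimate of one variable's effect on another, and adjusting for it recovers the true effect.

    # Toy structural causal model: Z confounds X and Y (Z -> X, Z -> Y),
    # and X causes Y with true effect 1.5 (X -> Y). All numbers are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    Z = rng.normal(size=n)
    X = 0.8 * Z + rng.normal(size=n)
    Y = 1.5 * X + 2.0 * Z + rng.normal(size=n)

    # A naive regression of Y on X absorbs the confounding path through Z
    # and overestimates the effect (~2.5 here)...
    naive = np.cov(X, Y)[0, 1] / np.var(X)

    # ...whereas adjusting for Z (multiple regression) recovers ~1.5.
    design = np.column_stack([X, Z, np.ones(n)])
    coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
    print(f'naive: {naive:.2f}  adjusted: {coef[0]:.2f}')

The same logic carries over to disinformation analysis, where an apparent link between a campaign and its societal impact may be confounded by shared drivers.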

Closing date: Friday, 15 November 2024
