Join Us for the ELLIS PreNeurIPS 2025 Poster Session in Warsaw

We are pleased to invite you to this year’s ELLIS PreNeurIPS 2025 Poster Session, which will take place on Tuesday, 25 November 2025, from 14:00 to 17:00 at the Faculty of Mathematics, Informatics and Mechanics, University of Warsaw (2 Stefana Banacha Street, Warsaw).

PreNeurIPS poster sessions are organized by ELLIS units across Europe and offer a unique opportunity to showcase research accepted to leading conferences in machine learning and artificial intelligence. The event brings together emerging scholars in the field, providing a space for scientific exchange, insightful discussion, and new collaborations.

This year’s Warsaw edition is organized by ELLIS Unit Warsaw, operating at IDEAS Research Institute, in cooperation with the University of Warsaw and the Warsaw University of Technology.

During the PreNeurIPS 2025 session in Warsaw, the following posters will be presented:

NeurIPS
• “1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities” (Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzciński, Benjamin Eysenbach)

• “CLIPGaussian: Universal and Multimodal Style Transfer Based on Gaussian Splatting” (Kornel Howil, Joanna Waczyńska, Piotr Borycki, Tadeusz Dziarmaga, Marcin Mazur, Przemysław Spurek)

• “Accurate and Efficient Low-Rank Model Merging in Core Space” (Aniello Panariello, Daniel Marczak, Simone Magistri, Angelo Porrello, Bartłomiej Twardowski, Andrew D. Bagdanov, Simone Calderara, Joost van de Weijer)

• “Covariances for Free: Exploiting Mean Distributions for Training-free Federated Learning” (Dipam Goswami, Simone Magistri, Kai Wang, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost van de Weijer)

• “Contrastive Representations for Temporal Reasoning” (Alicja Ziarko, Michał Bortkiewicz, Michał Zawalski, Benjamin Eysenbach, Piotr Miłoś)

• “Inference-Time Hyper-Scaling with KV Cache Compression” (Adrian Łańcucki, Konrad Staniszewski, Piotr Nawrot, Edoardo M. Ponti)

• “ProSpero: Active Learning for Robust Protein Design Beyond Wild-Type Neighborhoods” (Michał Kmicikiewicz, Vincent Fortuin, Ewa Szczurek)

• “Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation” (Szymon Płotka, Maciej Chrabaszcz, Gizem Mert, Ewa Szczurek, Arkadiusz Sitek)

• “FlySearch: Exploring How Vision-Language Models Explore” (Adam Pardyl, Dominik Matuszek, Mateusz Przebieracz, Marek Cygan, Bartosz Zieliński, Maciej Wołczyk)

• “Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners” (Michał Nauman, Marek Cygan, Carmelo Sferrazza, Aviral Kumar, Pieter Abbeel)

• “Compute-Optimal Scaling for Value-Based Deep RL” (Preston Fu, Oleh Rybkin, Zhiyuan Zhou, Michał Nauman, Pieter Abbeel, Sergey Levine, Aviral Kumar)

• “Explaining Similarity in Vision-Language Encoders with Weighted Banzhaf Interactions” (Hubert Baniecki, Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier, Przemysław Biecek)

• “System-Embedded Diffusion Bridge Models” (Bartłomiej Sobieski, Matthew Tivnan, Yuang Wang, Siyeop Yoon, Pengfei Jin, Dufan Wu, Quanzheng Li, Przemysław Biecek)


ECAI
• “Knowledge-Driven Bayesian Uncertainty Quantification for Reliable Fake News Detection” (Julia Puczyńska, Michał Bizon, Youcef Djenouri, Tomasz Michalak, Piotr Sankowski)

• “GUIDE: Guidance-Based Incremental Learning with Diffusion Models” (Bartosz Cywiński, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski, Łukasz Kuciński)

• “ExpertSim: Fast Particle Detector Simulation Using Mixture-of-Generative-Experts” (Patryk Będkowski, Jan Dubiński, Filip Szatkowski, Kamil Deja, Przemysław Rokita, Tomasz Trzciński)


CoRL
• “Beyond Constant Parameters: Hyper Prediction Models and Hyper MPC” (Jan Węgrzynowski, Piotr Kicki, Grzegorz Czechmanowski, Maciej Krupka, Krzysztof Walas)


ICCV
• “Joint Diffusion Models in Continual Learning” (Paweł Skierś, Kamil Deja)


AAAI
• “The GATTACA Framework: Graph Neural Network-Based Reinforcement Learning for Controlling Biological Networks” (Andrzej Mizera, Jakub Zarzycki)

• “EPIC: Explanation of Pretrained Image Classification Networks via Prototypes” (Piotr Borycki, Magdalena Trędowicz, Szymon Janusz, Jacek Tabor, Przemysław Spurek, Arkadiusz Lewicki, Łukasz Struski)

• “On Stealing Graph Neural Network Models” (Marcin Podhajski, Jan Dubiński, Franziska Boenisch, Adam Dziedzic, Agnieszka Pręgowska, Tomasz P. Michalak)

We look forward to welcoming you and celebrating cutting-edge AI research with the community!