PreNeurIPS 2025 in Warsaw: Summary of the Event

The ELLIS Unit Warsaw had the pleasure of hosting the PreNeurIPS 2025 poster session, organized in parallel with leading ELLIS units across Europe. While sister events took place in Barcelona, Berlin, Darmstadt, Edinburgh, Freiburg, Genoa, Lausanne, London, Modena, Paris, Tel Aviv, Turin, Tübingen and Zurich, Warsaw was the only location in Central and Eastern Europe to join the initiative. We are proud to have contributed to this international network of pre-conference gatherings showcasing cutting-edge machine learning research.

Presented Research: 21 Posters from Top Conferences

This year’s session featured 21 posters selected from premier international conferences: NeurIPS, ECAI, CoRL, ICCV and AAAI. Their breadth reflects the diversity and maturity of the research community associated with ELLIS and with Polish ML groups. Below is the full list of posters presented at the Warsaw event.

NeurIPS
• 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities (Kevin Wang, Ishaan Javali, Michał Bortkiewicz, Tomasz Trzciński, Benjamin Eysenbach)
• CLIPGaussian: Universal and Multimodal Style Transfer Based on Gaussian Splatting (Kornel Howil, Joanna Waczyńska, Piotr Borycki, Tadeusz Dziarmaga, Marcin Mazur, Przemysław Spurek)
• Accurate and Efficient Low-Rank Model Merging in Core Space (Aniello Panariello, Daniel Marczak, Simone Magistri, Angelo Porrello, Bartłomiej Twardowski, Andrew D. Bagdanov, Simone Calderara, Joost van de Weijer)
• Covariances for Free: Exploiting Mean Distributions for Training-free Federated Learning (Dipam Goswami, Simone Magistri, Kai Wang, Bartłomiej Twardowski, Andrew D. Bagdanov, Joost van de Weijer)
• Contrastive Representations for Temporal Reasoning (Alicja Ziarko, Michał Bortkiewicz, Michał Zawalski, Benjamin Eysenbach, Piotr Miłoś)
• Inference-Time Hyper-Scaling with KV Cache Compression (Adrian Łańcucki, Konrad Staniszewski, Piotr Nawrot, Edoardo M. Ponti)
• ProSpero: Active Learning for Robust Protein Design Beyond Wild-Type Neighborhoods (Michał Kmicikiewicz, Vincent Fortuin, Ewa Szczurek)
• Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation (Szymon Płotka, Maciej Chrabaszcz, Gizem Mert, Ewa Szczurek, Arkadiusz Sitek)
• FlySearch: Exploring How Vision-Language Models Explore (Adam Pardyl, Dominik Matuszek, Mateusz Przebieracz, Marek Cygan, Bartosz Zieliński, Maciej Wołczyk)
• Bigger, Regularized, Categorical: High-Capacity Value Functions are Efficient Multi-Task Learners (Michał Nauman, Marek Cygan, Carmelo Sferrazza, Aviral Kumar, Pieter Abbeel)
• Compute-Optimal Scaling for Value-Based Deep RL (Preston Fu, Oleh Rybkin, Zhiyuan Zhou, Michał Nauman, Pieter Abbeel, Sergey Levine, Aviral Kumar)
• Explaining Similarity in Vision-Language Encoders with Weighted Banzhaf Interactions (Hubert Baniecki, Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer, Eyke Hüllermeier, Przemysław Biecek)
• System-Embedded Diffusion Bridge Models (Bartłomiej Sobieski, Matthew Tivnan, Yuang Wang, Siyeop Yoon, Pengfei Jin, Dufan Wu, Quanzheng Li, Przemysław Biecek)

ECAI
• Knowledge-Driven Bayesian Uncertainty Quantification for Reliable Fake News Detection (Julia Puczyńska, Michał Bizon, Youcef Djenouri, Tomasz Michalak, Piotr Sankowski)
• GUIDE: Guidance-Based Incremental Learning with Diffusion Models (Bartosz Cywiński, Kamil Deja, Tomasz Trzciński, Bartłomiej Twardowski, Łukasz Kuciński)
• ExpertSim: Fast Particle Detector Simulation Using Mixture-of-Generative-Experts (Patryk Będkowski, Jan Dubiński, Filip Szatkowski, Kamil Deja, Przemysław Rokita, Tomasz Trzciński)

CoRL
• Beyond Constant Parameters: Hyper Prediction Models and Hyper MPC (Jan Węgrzynowski, Piotr Kicki, Grzegorz Czechmanowski, Maciej Krupka, Krzysztof Walas)

ICCV
• Joint Diffusion Models in Continual Learning (Paweł Skierś, Kamil Deja)

AAAI
• The GATTACA Framework: Graph Neural Network-Based Reinforcement Learning for Controlling Biological Networks (Andrzej Mizera, Jakub Zarzycki)
• EPIC: Explanation of Pretrained Image Classification Networks via Prototypes (Piotr Borycki, Magdalena Trędowicz, Szymon Janusz, Jacek Tabor, Przemysław Spurek, Arkadiusz Lewicki, Łukasz Struski)
• On Stealing Graph Neural Network Models (Marcin Podhajski, Jan Dubiński, Franziska Boenisch, Adam Dziedzic, Agnieszka Pręgowska, Tomasz P. Michalak)

Best Poster Awards

A highlight of the session was the announcement of this year’s Best Poster Awards.
The Jury Award was presented to “Beyond Constant Parameters: Hyper Prediction Models and Hyper MPC” by Jan Węgrzynowski, Grzegorz Czechmanowski and Piotr Kicki. We extend our gratitude to the jury members for their expertise and careful evaluation: Przemysław Biecek, Piotr Skowron and Tomasz Trzciński.

The Audience Award was given to “Inference-Time Hyper-Scaling with KV Cache Compression” by Adrian Łańcucki, Konrad Staniszewski, Piotr Nawrot and Edoardo M. Ponti.

Acknowledgements

The event would not have been possible without the collaboration of many partners. We thank all co-organizers for their dedication and coordination. We express our appreciation to the Faculty of Mathematics, Informatics and Mechanics at the University of Warsaw for hosting the session. We also thank the Warsaw University of Technology and the MLinPL community, whose support made the logistics smooth and efficient.