
Tomek Korbak

Senior Research Scientist,
UK AISI


Papers

Conference and workshop papers

  1. Korbak, T., Shi, K., Chen, A., Bhalerao, R., Buckley, C., Phang, J., Bowman, S. & Perez, E. (2023). Pretraining Language Models with Human Preferences.
  2. Scheurer, J., Ander Campos, J., Korbak, T., Jun Shern, C., Chen, A., Cho, K. & Perez, E. (2023). Training Language Models with Language Feedback at Scale.
  3. Chen, A., Scheurer, J., Korbak, T., Ander Campos, J., Jun Shern, C., Bowman, S., Cho, K. & Perez, E. (2023). Improving Code Generation by Training with Natural Language Feedback.
  4. Go, D., Korbak, T., Rozen, J., Ryu, N., Kruszewski, G. & Dymetman, M. (2023). Aligning Language Models with Preferences through f-divergence Minimization.
  5. Korbak, T., Elsahar, H., Kruszewski, G. & Dymetman, M. (2022). On reinforcement learning and distribution matching for fine-tuning language models with no catastrophic forgetting. NeurIPS 2022.
  6. Korbak, T., Elsahar, H., Kruszewski, G. & Dymetman, M. (2022). Controlling conditional language models without catastrophic forgetting. ICML 2022.
  7. Korbak, T., Perez, E. & Buckley, C. (2022). RL with KL penalties is better viewed as Bayesian inference. Findings of EMNLP 2022.
  8. Kuciński, Ł., Korbak, T., Kołodziej, P. & Miłoś, P. (2021). Catalytic role of noise and necessity of inductive biases in emergence of compositional communication. NeurIPS 2021.
  9. Korbak, T., Elsahar, H., Dymetman, M. & Kruszewski, G. (2021). Energy-based models for code generation under compilability constraints. ACL 2021 workshop on NLP for Programming.
  10. Korbak, T., Zubek, J. & Rączaszek-Leonardi, J. (2020). Measuring non-trivial compositionality in emergent communication. NeurIPS 2020 workshop on Emergent Communication.
  11. Korbak, T., Zubek, J., Kuciński, Ł., Miłoś, P. & Rączaszek-Leonardi, J. (2019). Developmentally motivated emergence of compositional communication via template transfer. NeurIPS 2019 workshop on Emergent Communication.
  12. Korzeniowski, R., Rolczyński, R., Sadownik, P., Korbak, T. & Możejko, M. (2019). Exploiting Unsupervised Pre-training and Automated Feature Engineering for Low-resource Hate Speech Detection in Polish. Proceedings of the PolEval 2019 Workshop.
  13. Korbak, T. & Żak, P. (2017). Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. In Z. Vetulani and P. Paroubek (eds.) Proceedings of the 8th Language & Technology Conference.

Journal papers

  1. Rorot, W., Korbak, T., Litwin, P. & Miłkowski, M. (2022). Enough blanket metaphysics, time for data-driven heuristics. Behavioral and Brain Sciences.
  2. Seth, A., Korbak, T. & Tschantz, A. (2022). A continuity of Markov blanket interpretations under the Free Energy Principle. Behavioral and Brain Sciences.
  3. Korbak, T., Zubek, J., Kuciński, Ł., Miłoś, P. & Rączaszek-Leonardi, J. (2022). Interaction history as a source of compositionality in emergent communication. Interaction Studies.
  4. Korbak, T. (2022). Self-organisation, (M, R)–systems and enactive cognitive science. Adaptive Behavior.
  5. Korbak, T. (2019). Computational enactivism under the free energy principle. Synthese.
  6. Korbak, T. (2019). Unsupervised learning and the natural origins of content. Avant.
  7. Korbak, T. (2015). Scaffolded minds and the evolution of content in signaling pathways. Studies in Logic, Grammar and Rhetoric, 41 (54).
  8. Korbak, T. (2015). Apercepcja transcendentalna w kantowskim modelu epigenezy czystego rozumu [Transcendental apperception in Kant's model of the epigenesis of pure reason]. Przegląd Filozoficzny – Nowa seria, 3 (95).