Hi, I’m Shlomi.

During the 2023-2024 academic year, I’ll be visiting Columbia University to work with Prof. Rachel Cummings.

I’m a fourth-year CS Ph.D. student at Boston University, advised by Prof. Ran Canetti.

I’m interested in Responsible AI, particularly:

  1. Differentially private synthetic data for government data
  2. The interaction between Computer Science and the Law
  3. Interpretable machine learning

For the past several years, I have taught courses on Responsible AI, Law, Ethics & Society at various institutions, including Boston University, Cornell Tech, Bocconi University, Tel Aviv University, and the Technion. Our materials are available for faculty here. In August 2023, I taught a two-day congressional workshop for US Congress staffers based on our course.

In summer 2022, I interned at Twitter Cortex, where I leveraged human-in-the-loop research to improve toxicity models. In 2020-2021, I was an Associated Researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin. In summer 2019, I did a research internship at the Center for Human-Compatible AI at UC Berkeley, working on neural network interpretability.

In my previous life, I was a social entrepreneur: co-founder of the Israeli Cyber Education Center, where I led the development of nationwide educational programs in computing for kids and teens. The center aims to increase the social mobility of underrepresented groups in tech, such as women, minorities, and individuals from peripheral areas of Israel. I co-authored a computer networks textbook written in a tutorial style (in Hebrew). Before that, I was an algorithmic research team leader in cybersecurity.

Publications

(* denotes equal contribution)

Shlomi Hod, Karni Chagal-Feferkorn, Niva Elkin-Koren and Avigdor Gal. “Data Science Meets Law: Learning Responsible AI Together”. Communications of the ACM (2022). Featured on the journal cover.

*Gavin Brown, *Shlomi Hod and *Iden Kalemaj. “Performative Prediction in a Stateful World”. International Conference on Artificial Intelligence and Statistics (AISTATS) (2022). Preliminary version at NeurIPS Workshop on Consequential Decision Making in Dynamic Environments, with contributed talk (2020).

*Shlomi Hod, *Stephen Casper, *Daniel Filan, Cody Wild, Andrew Critch and Stuart Russell. “Detecting Modularity in Deep Neural Networks”. arXiv preprint arXiv:2110.08058 (2021).

*Daniel Filan, *Stephen Casper, *Shlomi Hod, Cody Wild, Andrew Critch, and Stuart Russell. “Clusterability in Neural Networks”. arXiv preprint arXiv:2103.03386 (2021).

*Daniel Filan, *Shlomi Hod, Cody Wild, Andrew Critch, and Stuart Russell. “Neural Networks are Surprisingly Modular”. arXiv preprint arXiv:2003.04881 (2020).