Hi, I'm Shlomi.

he/him
shlomi <AT> bu <DOT> edu

(Anonymous) feedback welcome.

I’m a second-year CS Ph.D. student at Boston University, advised by Prof. Ran Canetti.

I’m interested in responsible AI, particularly:

  1. The societal impact of algorithms and machine learning systems, mainly through the lenses of long-term effects and scale
  2. Interpretable machine learning

I’m also an Associated Researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG) in Berlin. In summer 2019, I did a research internship at the Center for Human-Compatible AI at UC Berkeley, working on neural network interpretability.

I have taught courses in Responsible AI, Law, Ethics & Society. Occasionally, I consult for startups and companies on data science projects.

In my previous life, I was a social entrepreneur: a co-founder of the Israeli Cyber Education Center, where I led the development of nationwide educational programs in computing for kids and teens. The center aims to increase the social mobility of groups underrepresented in tech, such as women, minorities, and individuals from Israel's periphery. I co-authored a computer networks textbook written in a tutorial style (in Hebrew), and I also taught a new academic course on Problem Solving using Python. Before that, I led an algorithmic research team in cybersecurity.

Publications

*Daniel Filan, *Stephen Casper, *Shlomi Hod, Cody Wild, Andrew Critch, and Stuart Russell. “Clusterability in Neural Networks.” arXiv preprint arXiv:2103.03386 (2021).

*Gavin Brown, *Shlomi Hod, and *Iden Kalemaj. “Performative Prediction in a Stateful World.” Appeared with a contributed talk at the Workshop on Consequential Decision Making in Dynamic Environments (NeurIPS 2020).

*Daniel Filan, *Shlomi Hod, Cody Wild, Andrew Critch, and Stuart Russell. “Neural Networks are Surprisingly Modular.” arXiv preprint arXiv:2003.04881 (2020).