CV

Education

Work Experience

Skills

Awards

• 2021 – MLIS scholarship for excellent graduate student in data science
• 2019 – ICML top 5% reviewers award
• 2018 – Jury Award for excellent graduate student
• 2017 – AI Grant 1.0
• 2017 – MSc in Electrical Engineering, summa cum laude (highest honors)

Publications

• Kinderman, E., Hubara, I., Maron, H., & Soudry, D. Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks. arXiv preprint 2024.

• Blumenfeld, Y., Hubara, I., & Soudry, D. Towards Cheaper Inference in Deep Networks with Lower Bit-Width Accumulators. ICLR 2024.

• Chmiel, B., Hubara, I., Banner, R., & Soudry, D. Optimal Fine-Grained N:M Sparsity for Activations and Neural Gradients. ICLR 2023.

• Hubara, I., Chmiel, B., Island, M., Banner, R., Naor, S., & Soudry, D. Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks. NeurIPS 2021.

• Hubara, I., Nahshan, Y., Hanani, Y., Banner, R., & Soudry, D. Accurate Post Training Quantization With Small Calibration Sets. ICML 2021.

• Hoffer, E., Ben-Nun, T., Hubara, I., Giladi, N., Hoefler, T., & Soudry, D. Augment Your Batch: Improving Generalization Through Instance Repetition. CVPR 2020.

• Haroush, M., Hubara, I., Hoffer, E., & Soudry, D. The Knowledge Within: Methods for Data-Free Model Compression. CVPR 2020.

• Banner, R., Hubara, I., Hoffer, E., & Soudry, D. Scalable Methods for 8-bit Training of Neural Networks. NeurIPS 2018.

• Hoffer, E., Hubara, I., & Soudry, D. Fix Your Classifier: The Marginal Value of Training the Last Weight Layer. ICLR 2018.

• Hoffer, E., Hubara, I., & Soudry, D. Train Longer, Generalize Better: Closing the Generalization Gap in Large Batch Training of Neural Networks. NIPS 2017.

• Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., & Bengio, Y. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations. JMLR 2017.

• Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., & Bengio, Y. Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1. NIPS 2016.

• Bhonker, N., Rozenberg, S., & Hubara, I. Playing SNES in the Retro Learning Environment. ICLR 2016 Workshop.

• Soudry, D., Hubara, I., & Meir, R. Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights. NIPS 2014.

• Additional publications can be found at: https://scholar.google.com/citations?user=dyYryZYAAAAJ&hl=en

Patents

• Hubara, Itay. Large-Scale Computations Using an Adaptive Numerical Format. U.S. Patent No. 10,491,239, 26 Nov. 2019.

• El-Yaniv, Ran, Hubara, Itay, and Soudry, Daniel. Quantized Neural Network Training and Inference. U.S. Patent Application No. 15/478,531.