BIU learning club – Students’ talks
January 15, 2023 @ 12:00 pm - 2:00 pm IST
On Sunday, 15.01.23, at 12:00 PM, we will hold our second session of students' presentations. In this session, four BIU students will present their work. Note that, unlike regular learning club meetings, this meeting will last two hours and will include lunch. It will take place in the Engineering building (1103), room 329. Please see the schedule below.
12:00 – 12:15
Presenter: Ohad Amosy
Lab Head: Gal Chechik
Title: Text2Model: Model Induction for Zero-shot Generalization Using Task Descriptions
Abstract: In this work, we study the problem of generating a training-free, task-dependent visual classifier from text descriptions without visual samples. We analyze the symmetries of Text-to-Model (T2M) and characterize the equivariance and invariance properties of corresponding models. In light of these properties, we design an architecture based on hypernetworks that, given a set of new class descriptions, predicts the weights for an object recognition model that classifies images from those zero-shot classes.
We demonstrate the benefits of our approach over zero-shot learning from text descriptions in image and point-cloud classification, using various types of text descriptions: from single words to rich text descriptions.
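To make the hypernetwork idea concrete, here is a minimal sketch (not the authors' implementation): a small network maps each encoded class description to the weight vector of a linear classifier head, so a new set of descriptions induces a classifier with no training at test time. All names, dimensions, and the stand-in encodings are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TextToModelSketch(nn.Module):
    """Hypothetical sketch: predict per-class classifier weights from text embeddings."""

    def __init__(self, text_dim=512, image_dim=512, hidden=256):
        super().__init__()
        # Maps one class-description embedding to one row of classifier weights.
        self.hyper = nn.Sequential(
            nn.Linear(text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, image_dim),
        )

    def forward(self, class_text_emb, image_features):
        # class_text_emb: (num_classes, text_dim); image_features: (batch, image_dim)
        weights = self.hyper(class_text_emb)   # (num_classes, image_dim)
        return image_features @ weights.t()    # logits: (batch, num_classes)

# Stand-ins for encoded class descriptions and encoded images.
texts = torch.randn(5, 512)
images = torch.randn(8, 512)
print(TextToModelSketch()(texts, images).shape)  # torch.Size([8, 5])
```

Because the hypernetwork is applied to each description independently, permuting the class descriptions simply permutes the logit columns, which is one way to realize the kind of equivariance property the abstract analyzes.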
12:15 – 12:30
Presenter: Idan Cohen
Lab Heads: Ofir Lindenbaum & Sharon Gannot
Title: Unsupervised Acoustic Scene Mapping Based on Acoustic Features and Dimensionality Reduction
Abstract: Classical methods for acoustic scene mapping require the estimation of time difference of arrival (TDOA) between microphones. Unfortunately, TDOA estimation is very sensitive to reverberation and additive noise. We introduce an unsupervised data-driven approach that exploits the natural structure of the data. Our method builds upon local conformal autoencoders (LOCA) – an offline deep learning scheme for learning standardized data coordinates from measurements. Our experimental setup includes a microphone array that measures the transmitted sound source at multiple locations across the acoustic enclosure. We demonstrate that LOCA learns a representation that is isometric to the spatial locations of the microphones. The performance of our method is evaluated using a series of realistic simulations and compared with other dimensionality-reduction schemes. We further assess the influence of reverberation on the results of LOCA and show that it demonstrates considerable robustness.
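As a rough illustration of the LOCA scheme the abstract builds on (not the authors' code), the sketch below trains an autoencoder whose encoder is pushed to whiten small local "bursts" of measurements; this whitening pressure is what drives the learned coordinates toward a locally isometric embedding. The burst construction, network sizes, and loss weighting are assumptions.

```python
import torch
import torch.nn as nn

class LOCASketch(nn.Module):
    """Illustrative local conformal autoencoder: reconstruction + local whitening."""

    def __init__(self, in_dim, latent_dim=2, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, in_dim))

    def loss(self, bursts):
        # bursts: (num_bursts, burst_size, in_dim), each burst measured around
        # one spatial location of the acoustic enclosure.
        z = self.enc(bursts)
        recon = ((self.dec(z) - bursts) ** 2).mean()
        # Whitening term: the covariance of each encoded burst should be ~identity,
        # so local neighborhoods keep their scale in the latent space.
        zc = z - z.mean(dim=1, keepdim=True)
        cov = zc.transpose(1, 2) @ zc / (z.shape[1] - 1)
        eye = torch.eye(z.shape[-1])
        return recon + ((cov - eye) ** 2).mean()

model = LOCASketch(in_dim=16)
print(model.loss(torch.randn(10, 32, 16)).item())  # scalar training loss
```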
12:30 – 12:45
Presenter: Shauli Ravfogel
Lab Head: Yoav Goldberg
Title: Identifying and Neutralizing Concepts in Neural Representations
Abstract: I will introduce two works on neutralizing specific concepts in neural models trained on textual data. The first work proposes a concept-neutralization pipeline that involves training linear classifiers and projecting the representations onto their null-space. The second work formulates the problem as a constrained, linear minimax game and derives a closed-form solution for certain objectives, while proposing efficient solutions for others. Both methods are demonstrated to be effective in various use cases, including bias and fairness in word embeddings and multi-class classification. These techniques provide a means of controlling the content of pre-trained representations, which is increasingly important as such representations are deployed in real-world applications.
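For intuition about the core step of the first method, here is a minimal NumPy sketch of linear null-space projection (in the spirit of that pipeline; the data here is synthetic and the full method trains and projects out a sequence of probes, not just one): given the weights W of a linear probe trained to predict the concept, projecting the representations onto the null space of W makes that probe's output uninformative.

```python
import numpy as np

def nullspace_projection(W):
    # Orthogonal projection onto the null space of W: P = I - W^+ W,
    # where W^+ is the Moore-Penrose pseudo-inverse.
    return np.eye(W.shape[1]) - np.linalg.pinv(W) @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))   # stand-in for pre-trained representations
W = rng.normal(size=(1, 8))     # stand-in for a trained linear probe's weights

P = nullspace_projection(W)
X_clean = X @ P                  # the probe's concept direction is removed
print(np.allclose(X_clean @ W.T, 0))  # True: the probe now outputs ~0
```

Iterating this step, retraining a fresh probe on the projected data and projecting again, removes the concept from successive linear directions until no linear probe recovers it.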
12:45 – 13:00
Presenter: Ireman Brauner
Lab Head: Ethan Fetaya & Shai Gordin
Title: Optical Character Recognition for Cuneiform – from hand-copy to transliteration
Abstract: In this presentation, I will describe the process of constructing an OCR system for cuneiform signs as part of my thesis research. We will examine the challenges encountered and the methods employed to overcome them, including an iterative algorithm that significantly improved our results. I will also present a web application developed to make these research findings easy to use.
13:00 – 14:00
Lunch & Mingling