Michael Fink

Michael Fink’s work bridges media research, machine learning and cognitive science. Michael initiated the YouTube interactive video annotations project, which became a major driving force in making YouTube videos truly interactive. Previously, Michael worked at Google Research, focusing on image and audio fingerprinting for applications such as the “mass personalization” of broadcast television. His PhD research at the Hebrew University of Jerusalem focused on large-scale object recognition in humans and machines, generating publications spanning machine learning, computer vision and artificial intelligence, as well as cognitive science, justice and economics. In recent years, Michael has initiated the Computer Science and Design program, a joint collaboration between The Hebrew University of Jerusalem and the Bezalel Design Academy.
Authored Publications
    We study conversational domain exploration (CODEX), where the user’s goal is to enrich her knowledge of a given domain by conversing with an informative bot. Such conversations should be well grounded in high-quality domain knowledge as well as engaging and open-ended. A CODEX bot should be proactive and introduce relevant information even if not directly asked for by the user. The bot should also appropriately pivot the conversation to undiscovered regions of the domain. To address these dialogue characteristics, we introduce a novel approach termed dynamic composition that decouples candidate content generation from the flexible composition of bot responses. This allows the bot to control the source, correctness and quality of the offered content, while achieving flexibility via a dialogue manager that selects the most appropriate contents in a compositional manner. We implemented a CODEX bot based on dynamic composition and integrated it into the Google Assistant. As an example domain, the bot conversed about the NBA basketball league in a seamless experience, such that users were not aware whether they were conversing with the vanilla system or the one augmented with our CODEX bot. Results are positive and offer insights into what makes for a good conversation. To the best of our knowledge, this is the first real user experiment of open-ended dialogues as part of a commercial assistant system.
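The decoupling described in the abstract can be pictured as a set of content providers feeding a dialogue manager that scores and composes their output into a single reply. The sketch below is a minimal illustration of that idea in Python; the provider functions, the Candidate fields and the scoring are hypothetical stand-ins for exposition, not the system described in the paper.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical illustration of "dynamic composition": candidate content
# generation is decoupled from the dialogue manager that composes the reply.

@dataclass
class Candidate:
    text: str    # a grounded content snippet (e.g. a fact about the domain)
    source: str  # provider that generated it, for source/quality control
    score: float # relevance of the snippet to the current dialogue state

def fact_provider(state: dict) -> List[Candidate]:
    # Placeholder: a real bot would draw these from curated domain knowledge.
    return [Candidate("The NBA has 30 teams.", source="kb", score=0.9)]

def pivot_provider(state: dict) -> List[Candidate]:
    # Proactively suggests pivoting to an undiscovered region of the domain.
    return [Candidate("Want to hear about last night's games?", source="pivot", score=0.6)]

def compose_response(state: dict,
                     providers: List[Callable[[dict], List[Candidate]]],
                     max_parts: int = 2) -> str:
    """Dialogue manager: gather candidates from all providers, then pick the
    highest-scoring ones and compose them into one bot response."""
    candidates = [c for p in providers for c in p(state)]
    chosen = sorted(candidates, key=lambda c: c.score, reverse=True)[:max_parts]
    return " ".join(c.text for c in chosen)

if __name__ == "__main__":
    state = {"topic": "NBA", "discussed": set()}
    print(compose_response(state, [fact_provider, pivot_provider]))
```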
    YouTube's Collaborative Annotations
    Sigalit Bar
    Aviad Bazilai
    Nir Kerem
    Isaac Elias
    Julian Frumar
    Herb Ho
    Ryan Junee
    Simon Ratner
    Jasson Schrock
    Ran Tavory
    Webcentives (2009), pp. 18-19
    More and more YouTube videos no longer provide a passive viewing experience, but rather entice the viewer to interact with the video by clicking on objects with embedded links. These links are part of YouTube’s Annotations system, which enables content owners to add active overlays on top of their videos. YouTube Annotation overlays also enable adding dynamic speech bubbles and pop-ups, which can function as an ever-changing layer of supplementary information and entertainment, augmenting the video experience. This paper addresses the question of whether the ability to add annotation overlays on a given video should be opened to the YouTube public. The basic dilemma in opening a video to collaborative annotations is derived from the tension between the benefits of collaboration and the risks of visual clutter and spam. We refer to the degree to which a video is open to external contributions as the collaboration spectrum, and describe several models that let content owners explore this spectrum in order to find the optimal way to harness the power of the masses.
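One way to picture the collaboration spectrum is as a per-video annotation policy ranging from owner-only to fully open contribution. The sketch below is a hypothetical data model for such a policy; the level names, fields and moderation rule are invented for illustration and are not YouTube's actual implementation.

```python
from enum import Enum
from dataclasses import dataclass, field
from typing import Set

# Hypothetical model of the "collaboration spectrum": how open a video's
# annotation layer is to external contributions. Level names are invented.

class CollaborationLevel(Enum):
    OWNER_ONLY = 0        # only the content owner may add annotations
    INVITED = 1           # owner plus an explicit list of invited collaborators
    MODERATED_PUBLIC = 2  # anyone may submit, owner approves before display
    OPEN_PUBLIC = 3       # anyone may annotate directly (highest clutter/spam risk)

@dataclass
class AnnotationPolicy:
    level: CollaborationLevel
    invited: Set[str] = field(default_factory=set)

    def can_annotate(self, user: str, owner: str) -> bool:
        if user == owner:
            return True
        if self.level == CollaborationLevel.INVITED:
            return user in self.invited
        return self.level in (CollaborationLevel.MODERATED_PUBLIC,
                              CollaborationLevel.OPEN_PUBLIC)

    def needs_review(self, user: str, owner: str) -> bool:
        # Moderated submissions from non-owners are queued for owner approval.
        return user != owner and self.level == CollaborationLevel.MODERATED_PUBLIC

# Example: a moderated video accepts public submissions but holds them for review.
policy = AnnotationPolicy(CollaborationLevel.MODERATED_PUBLIC)
assert policy.can_annotate("viewer42", owner="creator")
assert policy.needs_review("viewer42", owner="creator")
```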
    Coordinated Multi-Device Presentations: Ambient-Audio Identification
    Michele Covell
    Encyclopedia of Wireless and Mobile Communications, Taylor & Francis (2008), pp. 274-285
    Online Multiclass Learning by Interclass Hypothesis Sharing
    Shai Shalev-Shwartz
    Yoram Singer
    Shimon Ullman
    Proceedings of the 23rd International Conference on Machine Learning (2006)
    Advertisement Detection and Replacement using Acoustic and Visual Repetition
    Michele Covell
    Proceedings of the 2006 International Workshop on Multimedia Signal Processing, IEEE