ConfLab - Meet the Chairs!

ConfLab was an initiative at ACM Multimedia 2019 where conference attendees could meet peers and chairs of ACM MM 2019 while co-creating a community data set that they could later use for their own research.

The idea was to turn the Multimedia conference into an in-the-wild living lab. It aimed to bring different themes of the Multimedia community together in an introspective multimodal data collection and measurement event that provides an example of best practices in data privacy and data sharing. The motivation was to open possibilities for new multimedia grand challenges related to social behaviour analysis and community behaviour understanding. The overarching objective of the data collection carried out during ConfLab was to study key concerns about scientific diversity. ConfLab analysed social networking behaviour, gave feedback to the conference participants, and reported aggregated information back to the community in general. We provided information on community cohesiveness/heterogeneity, the emergence of trending topics, and the embedding of newcomers.

Read more at https://conflab.ewi.tudelft.nl/

MediaEval Grand Challenge - No-Audio Multimodal Speech Detection Task

The goal of the task is to automatically estimate when the person seen in a video starts and stops speaking, using alternative modalities. In contrast to conventional speech detection, no audio is used for this task. Instead, the automatic estimation system must exploit the natural human movements that accompany speech (i.e., speaker gestures, as well as shifts in pose and proximity), as captured by video and wearable sensors.

This task consists of two subtasks:

  • Unimodal classification: Design and implement separate speech detection algorithms, each exploiting one modality. Teams must submit separate decisions for the wearable modality and for the video modality.
  • Multimodal classification: Design and implement a speech detection approach that integrates both modalities. Teams must submit a multimodal estimation decision using some form of early, late, or hybrid fusion.
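To make the fusion terminology concrete, here is a minimal sketch of late fusion: each modality independently produces per-frame speaking probabilities, and the multimodal decision is a weighted average of those probabilities. Everything in this snippet (the logistic scoring, the toy features and weights, the fusion weight alpha) is an illustrative assumption, not the task's baseline or any participant's actual system.

```python
import numpy as np

def unimodal_scores(features, weights):
    """Toy per-frame speaking probability for one modality:
    a logistic function over a linear score (illustrative only)."""
    z = features @ weights
    return 1.0 / (1.0 + np.exp(-z))

def late_fusion(video_probs, wearable_probs, alpha=0.5):
    """Late fusion: combine per-modality probabilities by weighted average."""
    return alpha * video_probs + (1.0 - alpha) * wearable_probs

def to_decisions(probs, threshold=0.5):
    """Binarize frame-level probabilities into speaking (1) / not speaking (0)."""
    return (probs >= threshold).astype(int)

# Toy data: 4 frames, 3 features per modality (all values made up).
video_feats = np.array([[ 0.2,  1.1, -0.3],
                        [ 1.5,  0.9,  0.4],
                        [-0.8, -1.2,  0.1],
                        [ 0.9,  0.3,  1.0]])
wear_feats  = np.array([[ 0.5, -0.1,  0.2],
                        [ 1.2,  0.8,  0.6],
                        [-1.0, -0.5, -0.2],
                        [ 0.4,  0.7,  0.9]])
w_video = np.array([0.8, 0.6, 0.5])
w_wear  = np.array([0.7, 0.9, 0.4])

p_video   = unimodal_scores(video_feats, w_video)   # unimodal: video
p_wear    = unimodal_scores(wear_feats, w_wear)     # unimodal: wearable
p_fused   = late_fusion(p_video, p_wear, alpha=0.6)  # multimodal
decisions = to_decisions(p_fused)
```

Early fusion would instead concatenate the two feature vectors before a single classifier, and hybrid approaches mix both strategies.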

Read more at http://www.multimediaeval.org/mediaeval2019/speakerturns/

ConfFlow

As a community, Multimedia is so diverse that it is easy for community members to miss out on very useful expertise and potentially fruitful collaborations. There is a great deal of latent knowledge, and many potential synergies could emerge if conference attendees were offered an alternative perspective on their similarities to other attendees. This is exactly what we set out to do with ConfFlow, and you are all invited to join its launch!

ConfFlow is an application to encourage people with similar or complementary research interests to find each other at conferences. It allows its users to browse a similarity space that is created by analyzing participants' recent publications, in a similar manner to the Toronto Paper Matching System (TPMS). ConfFlow also has features similar to social media applications. You can manipulate the similarity space by selecting the embedding method, creating favorites and hide-lists, and hiding or visualizing your co-authors. Since we know how hard it can be to initiate a new contact, ConfFlow can also identify mutual trusted connections to help break the ice. If you are more interested in observing recently emerging topics in the Multimedia community, and the people doing research on those topics, ConfFlow also offers a research-topic-based visualization. You can reach all these functionalities, and more, from https://confflow.web.app/!
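To give a flavour of how a publication-based similarity space can be built, here is a deliberately simplified sketch: each attendee gets a bag-of-words profile from their publication titles, and pairwise similarity is the cosine between profiles. This is a hypothetical illustration, not ConfFlow's actual pipeline; the attendee names and publication strings are invented, and systems like TPMS use far richer models over full paper text.

```python
from collections import Counter
import math

def profile(publications):
    """Bag-of-words profile from an attendee's publication titles (toy model)."""
    words = " ".join(publications).lower().split()
    return Counter(words)

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters, in [0, 1]."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented attendees and publication titles, purely for illustration.
attendees = {
    "alice": ["multimodal social signal processing in the wild"],
    "bob":   ["wearable sensors for social signal processing"],
    "carol": ["neural machine translation of low resource languages"],
}
profiles = {name: profile(pubs) for name, pubs in attendees.items()}

sim_ab = cosine_similarity(profiles["alice"], profiles["bob"])
sim_ac = cosine_similarity(profiles["alice"], profiles["carol"])
```

In this toy example, the two social-signal-processing researchers come out more similar to each other than to the machine-translation researcher; a real system would then embed such pairwise similarities into the 2D space that users browse.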