Description

MR2AMC is a series of workshops on Multimodal Representation, Retrieval, and Analysis of Multimedia Content organized by the MIDAS lab at IIIT-Delhi. MIDAS stands for Multimodal Digital Media Analysis Lab; it consists of a group of researchers who study, analyze, and build multimedia systems for society by leveraging multimodal information. The first iteration of the workshop was held in conjunction with the IEEE MIPR conference. The second iteration will be held in conjunction with the 20th IEEE International Symposium on Multimedia (ISM 2018) in Taichung, Taiwan. This year's workshop theme is social media.

MR2AMC thus aims to provide an international forum for researchers in the field of multimedia data processing, analysis, search, mining, and management that leverages multimodal information in social media. The workshop brings together researchers and practitioners from both academia and industry to present original research contributions as well as practical system designs, implementations, and applications of multimodal multimedia information processing, mining, representation, management, and retrieval. MR2AMC 2018 invites research papers in the areas of multimodal multimedia content analysis, search and retrieval, semantic computing, and affective computing. Accepted papers of MR2AMC 2018 will be published as part of the workshop proceedings in the IEEE Digital Library. Extended versions of the accepted workshop papers will be invited for publication in Springer Cognitive Computation and IEEE Computational Intelligence Magazine.


Important Dates

Draft paper submission deadline: 2018-09-30

Abstract submission deadline: 2018-09-30

Author guidelines

All papers must be original and not simultaneously submitted to another journal or conference. The following paper categories are welcome:

The MR2AMC 2018 workshop follows the submission guidelines of the 20th IEEE International Symposium on Multimedia (ISM 2018). Papers reporting original and unpublished research results pertaining to the topics in the CFP are solicited. There are two submission categories, REGULAR and SHORT, with expected lengths of 8 and 4 pages (using the IEEE two-column template), respectively. Submissions should include the title, author(s), affiliation(s), e-mail address(es), tel/fax numbers, abstract, and postal address(es) on the first page. Papers should be submitted through the online submission site (Submission Link). If web submission is not possible, please contact the program co-chairs for alternate arrangements. Papers will be selected based on their originality, timeliness, significance, relevance, and clarity of presentation. Paper submission implies the intent of at least one of the authors to register for and present the paper, if accepted. For detailed information on submission templates and instructions, see MIPR's submission instruction page.

Topics of submission

The primary goal of the workshop is to investigate whether multimedia content, when fused with other modalities (e.g., contextual, crowdsourced, and relationship information), can enhance the performance of unimodal multimedia systems (i.e., systems that use only multimedia content). The broader context of the workshop encompasses Multimedia Information Processing (e.g., Natural Language Processing, Image Processing, Speech Processing, and Video Processing), Multimedia Embedding (e.g., Word Embedding and Image Embedding), Web Mining, Machine Learning, Deep Neural Networks, and AI. Topics of interest include but are not limited to:

  • Multimodal Multimedia Search, Retrieval and Recommendation
  • Multimodal Personalized Multimedia Retrieval and Recommendation
  • Multimodal Event Detection, Recommendation, and Understanding
  • Multimodal Multimedia based FAQ and QA Systems
  • Multimodal based Diverse Multimedia Search, Retrieval and Recommendation
  • Multimodal Multimedia Content Analysis
  • Multimodal Semantic and Sentiment based Multimedia Analysis
  • Multimodal Semantic and Sentiment based Multimedia Annotation
  • Multimodal Semantic-based Multimedia Retrieval and Recommendation
  • Multimodal Sentiment-based Multimedia Retrieval and Recommendation
  • Multimodal Filtering, Time-Sensitive and Real-time Search of Multimedia
  • Multimodal Multimedia Annotation Methodologies
  • Multimodal Sentiment-based Multimedia Retrieval and Annotation
  • Multimodal Context-based Multimedia Retrieval and Annotation
  • Multimodal Location-based Multimedia Retrieval and Annotation
  • Multimodal Relationship-based Multimedia Retrieval and Annotation
  • Multimodal Mobile-based Retrieval and Annotation of Big Multimedia
  • Multimodal Multimedia Data Modeling and Visualization
  • Multimodal Feature Extraction and Learning for Multimedia Data Representation
  • Multimodal Multimedia Data Embedding
  • Multimodal Medical Multimedia Information Retrieval
  • Multimodal Subjectivity Detection and Extraction from Multimedia
  • Multimodal High-Level Semantic Features from Multimedia
  • Multimodal Information Fusion
  • Multimodal Affect Recognition
  • Multimodal Deep Learning in Multimedia and Multimodal Fusion
  • Multimodal Spatio-Temporal Multimedia Data Mining
  • Multimodal Multimedia based Massive Open Online Courses (MOOC)
  • Multimodal/Multisensor Integration and Analysis
  • Multimodal Affective and Perceptual Multimedia
  • Multimedia based Education

Committee

Program Committee

  • Amir Zadeh, Carnegie Mellon University, USA 
  • Luming Zhang, Hefei University of Technology, China 
  • Zhenguang Liu, Zhejiang University, China 
  • Vivek Singh, Rutgers University, USA 
  • Pradeep K. Atrey, University at Albany, USA 
  • Mukesh Saini, Indian Institute of Technology Ropar, India 
  • A V Subramanyam, Indraprastha Institute of Information Technology Delhi, India 
  • Debashis Sen, Indian Institute of Technology Kharagpur, India 
  • Animesh Prasad, National University of Singapore, Singapore 
  • Muthu Kumar Chandrasekaran, National University of Singapore, Singapore 
  • Omprakash Kaiwartya, Northumbria University, UK 
  • Mukesh Prasad, University of Technology Sydney, Australia 
  • Kishaloy Halder, National University of Singapore, Singapore 
  • Lahari Poddar, National University of Singapore, Singapore 
  • Vivek Kumar Singh, Banaras Hindu University, India 
  • Erik Cambria, Nanyang Technological University, Singapore 
  • Yogesh Singh Rawat, University of Central Florida, USA 
  • Hisham Al-Mubaid, University of Houston-Clear Lake, USA 
  • Alexandra Balahur, University of Alicante, Spain 
  • Catherine Baudin, eBay Research Labs, USA 
  • Sergio Decherchi, Italian Institute of Technology, Italy 
  • Rafael Del Hoyo, Aragon Institute of Technology, Spain 
  • Paolo Gastaldo, University of Genoa, Italy 
  • Tariq Durrani, University of Strathclyde, UK 
  • Lila Ghemri, Texas Southern University, USA 
  • Marco Grassi, Marche Polytechnic University, Italy 
  • Amir Hussain, University of Stirling, UK 
  • Raymond Lau, City University of Hong Kong, Hong Kong 
  • Saif Mohammad, National Research Council, Canada 
  • Samaneh Moghaddam, Simon Fraser University, Canada 
  • Tao Chen, Johns Hopkins University, USA

Organizing committee

  • Rajiv Ratn Shah (IIIT-Delhi, India)
  • Debanjan Mahata (Bloomberg, USA)
  • Yifang Yin (NUS, Singapore)
  • Soujanya Poria (NTU, Singapore)
  • A V Subramanyam (IIIT-Delhi, India)
  • Roger Zimmermann (NUS, Singapore)


Contact information

  • mr2amc.group@gmail.com