Affective social multimedia computing is an emergent research topic for both the affective computing and multimedia research communities. Social multimedia is fundamentally changing how we communicate, interact, and collaborate with other people in our daily lives. Compared with well-organized broadcast news and professionally produced videos such as commercials, TV shows, and movies, social multimedia poses great challenges to research communities. Social multimedia contains rich affective information, and its effective extraction can greatly help social multimedia computing (e.g., processing, indexing, retrieval, and understanding). Although much progress has been made in traditional multimedia research on content analysis, indexing, and retrieval based on subjective concepts such as emotion, aesthetics, and preference, affective social multimedia computing remains a new research area, one that aims to process affective information from social multimedia. For massive and heterogeneous social media data, the research requires a multidisciplinary understanding of content and perceptual cues from social multimedia. From the multimedia perspective, the research relies on theoretical and technological findings in affective computing, machine learning, pattern recognition, signal/multimedia processing, computer vision, speech processing, and behavioral and social psychology. Affective analysis of social multimedia and interaction is attracting growing attention from industry and businesses that provide social networking sites and content-sharing services, distribute and host media, and support social interaction with artificial agents. This workshop focuses on the analysis of affective signals in interaction (multimodal analyses enabling artificial agents in human-machine interaction, social interaction with artificial agents) and in social multimedia (e.g., Twitter, WeChat, Weibo, YouTube, Facebook, etc.).
The 1st through 5th ASMMC workshops were successfully held in Xi'an, China (September 21, 2015), Seattle, USA (July 15, 2016), Stockholm, Sweden (August 25, 2017), Seoul, Korea (October 26, 2018), and on July 2, 2019, respectively. With the 6th ASMMC at ACM ICMI 2021, we return to affective computing and intelligent interaction, investigating how affective computing technology can become available and accessible to education, health, transport, cities, homes, and entertainment.
The workshop seeks contributions on various aspects of affective computing in interaction and social multimedia, covering related theory, methodology, algorithms, techniques, and applications.
Qin Jin is a professor in the School of Information at Renmin University of China (RUC), where she leads the AI·M³ lab.
She received her Ph.D. degree in 2007 from Carnegie Mellon University. Before joining RUC in 2013, she was a research faculty member (2007-2012) at Carnegie Mellon University and a research scientist (2012) at IBM China Research Lab.
Her research interests are in intelligent multimedia computing and human-computer interaction. Her team's recent works on video understanding and multimodal affective analysis have won various awards in international challenge evaluations, including the CVPR ActivityNet Dense Video Captioning challenge, the NIST TRECVID VTT evaluation, and the ACM Multimedia Audio-Visual Emotion Challenge.
Abstract: Understanding human emotions is a fundamental step toward natural human-computer interaction systems that possess emotion perception ability. The behavioral signals of human emotion expression are multimodal, including voice, facial expressions, body language, bio-signals, etc., and interactive scenarios such as conversational dialogues are natural settings for emotion stimulation and expression. Our research focuses on integrating multimodal information for robust emotion perception in natural dialogues, which involves several major challenges, including modeling the contextual information in a dialogue, handling missing modalities at inference time, and learning robust emotion feature representations across cultures. This talk will present our recent work addressing these challenges in multimodal emotion recognition.
Erik Cambria is the Founder of SenticNet, a Singapore-based company offering B2B sentiment analysis services,
and an Associate Professor at NTU, where he also holds the appointment of Provost Chair in Computer Science and Engineering. Prior to joining NTU,
he worked at Microsoft Research Asia (Beijing) and HP Labs India (Bangalore) and earned his PhD through a joint programme between the University of
Stirling and MIT Media Lab. His research focuses on the ensemble application of symbolic and subsymbolic AI to natural language processing tasks such
as sentiment analysis, dialogue systems, and financial forecasting. Erik is the recipient of several awards, e.g., the 2019 IEEE Outstanding Early Career Award; he was listed among the 2018 AI's 10 to Watch and was featured in Forbes as one of the 5 People Building Our AI Future. He is Associate Editor
of many top AI journals, e.g., INFFUS, IEEE CIM, and KBS, Special Content Editor of FGCS, Department Editor of IEEE Intelligent Systems, and is involved
in many international conferences as program chair and invited speaker.
Abstract: With the recent developments of deep learning, AI research has gained new vigor and prominence. However, machine learning still faces three big
challenges: (1) it requires a lot of training data and is domain-dependent; (2) different types of training or parameter tweaking leads to inconsistent
results; (3) the use of black-box algorithms makes the reasoning process uninterpretable. At SenticNet, we address such issues in the context of NLP via
sentic computing, a multidisciplinary approach that aims to bridge the gap between statistical NLP and the many other disciplines necessary for
understanding human language such as linguistics, commonsense reasoning, and affective computing. Sentic computing is both top-down and bottom-up:
top-down because it leverages symbolic models such as semantic networks and conceptual dependency representations to encode meaning; bottom-up
because it uses subsymbolic methods such as deep neural networks and multiple kernel learning to infer syntactic patterns from data.
All accepted workshop papers will be published in the adjunct proceedings of the 23rd ACM International Conference on Multimodal Interaction (ICMI 2021).
We invite submissions in the following categories:
Please note that ICMI 2021 will use a new ACM Publication System (TAPS) process.
This means submitted papers should follow the instructions and use the templates provided at the following link:
All authors should submit manuscripts for review in a single-column format instead of the previous two-column format.
We invite submissions of long papers, short papers, and extended abstracts formatted according to the new ICMI guidelines. This means:
Please refer to the instructions at http://icmi.acm.org/2021/index.php?id=authors.
Proceedings will be published as adjunct proceedings to the ACM International Conference on Multimodal Interaction 2021.
Paper submission and acceptance deadlines
Workshop papers or abstracts due: August 16th, 2021 (extended to August 31st, 2021)
Notification of acceptance: September 9th, 2021
Camera-ready paper: September 15th, 2021