Empathy Detection From Text, Audiovisual, Audio or Physiological Signals: A Systematic Review of Task Formulations and Machine Learning Methods.
Spoken in Jest, Detected in Earnest: A Systematic Review of Sarcasm Recognition—Multimodal Fusion, Challenges, and Future Prospects.
Are We There Yet? A Brief Survey of Music Emotion Prediction Datasets, Models and Outstanding Challenges.
SarcasmBench: Towards Evaluating Large Language Models on Sarcasm Understanding.
A Review of Human Emotion Synthesis Based on Generative Technology.
Datasets of Smartphone Modalities for Depression Assessment: A Scoping Review.
A Comprehensive Survey on Datasets for Affective Computing and Mental Disorders.
How to Enhance Causal Discrimination of Emotional Utterances: A Case on LLMs.
Multimodal Framework for Therapeutic Consultations.
Rethinking Emotion Annotations in the Era of Large Language Models.
A New Approach to Characterize Dynamics of ECG-Derived Skin Nerve Activity via Time-Varying Spectral Analysis.
Learning to Rank Onset-Occurring-Offset Representations for Micro-Expression Recognition.
Seismocardiography for Emotion Recognition: A Study on EmoWear With Insights From DEAP.
Lightweight Spatio-Temporal Convolutional Neural Network for Audio-Visual Emotion Recognition.
Examining the Fourier Spectrum of Speech Signal From a Time-Frequency Perspective for Automatic Depression Level Prediction.
TAHAG: Two-Stage Domain Adaptation With Hybrid Adaptive Graph Learning for EEG Emotion Recognition.
Detecting Sympathetic Discharges: Comparison of Electrodermal Activity and Skin Sympathetic Nerve Activity in Stimulation-to-Response Time and Recovery Time to Baseline.
DNMCN: Dual-Stage Normalization Based Modality-Collaborative Fusion Network for Multimodal Sentiment Analysis.
AM-ConvBLS: Adaptive Manifold Convolutional Broad Learning System for Cross-Session and Cross-Subject Emotion Recognition.
Dynamical Causal Graph Neural Network for EEG Emotion Recognition.
Electronic Library for Commercially Usable Emotional Stimuli (EL-CUES): An Annotated Image Database for Emotion Induction Validated in a German Population.
A Multi-Modal Multi-Expert Framework for Pain Assessment in Postoperative Children.
MLM-EOE: Automatic Depression Detection via Sentimental Annotation and Multi-Expert Ensemble.
Subtyping Autism Spectrum Disorder Using Multimodal Multilayer Hypergraphs.
Aware Yet Biased: Investigating Emotional Reasoning and Appraisal Bias in Large Language Models.
VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition.
Heart Rate and Facial Expression Data Influence the Ease of Communication in a Remote Work Set-Up.
A Novel Conditional Adversarial Domain Adaptation Network for EEG Cross-Subject Emotion Recognition.
A Discourse Structure- and Interlocutor-Guided Network for Dialogue Act Recognition and Sentiment Classification.
Detecting Signs of Depression Using Social Media Texts Through an Ensemble of Ensemble Classifiers.
Breaking Players’ Expectations: The Role of Non-Player Characters’ Coherence and Consistency.
Could Micro-Expressions Be Quantified? Electromyography Gives Affirmative Evidence.
Study of Emotion Concept Formation by Integrating Vision, Physiology, and Word Information Using Multilayered Multimodal Latent Dirichlet Allocation.
Detection of Schizophrenia Spectrum Disorder and Major Depressive Disorder Using Automated Speech Analysis.
SetPeER: Set-Based Personalized Emotion Recognition With Weak Supervision.
Public Opinion Crisis Management via Social Media Mining.
MER-CLIP: AU-Guided Vision-Language Alignment for Micro-Expression Recognition.
Generalizing to Unseen Speakers: Multimodal Emotion Recognition in Conversations With Speaker Generalization.
Multi-View Self-Supervised Domain Adaptation for EEG-Based Emotion Recognition.
Contextual Graph Reconstruction and Emotional Variation Learning for Conversational Emotion Recognition.
Facial Expression Recognition With an Efficient Mix Transformer for Affective Human-Robot Interaction.
Catching the Black Dog Easily: A Convenient Depression Diagnosis Method Based on Audio-Visual Deep Learning.
Affective Embodied Agent for Patient Assistance in Virtual Rehabilitation.
CAETFN: Context Adaptively Enhanced Text-Guided Fusion Network for Multimodal Sentiment Analysis.
Effect of Mindfulness Meditation on Sensory Perception and Emotional Evaluation of Mid-Air Touch on the Forearm.
CoupleFER: Dynamic Cross-Modal Fusion via Prompt Learning for Improved 2D+3D FER.
Scale-Selectable Global Information and Discrepancy Learning Network for Multimodal Sentiment Analysis.
SEED-MYA: A Novel Myanmar Multimodal Dataset for Enhancing Emotion Recognition.
Investigating the Effects of Sleep Conditions on Emotion Responses with EEG Signals and Eye Movements.
Channel Self-Attention Residual Network: Learning Micro-Expression Recognition Features From Augmented Motion Flow Images.