
P.S. Cesar Garcia


PointPCA+

A Full-reference Point Cloud Quality Assessment Metric with PCA-based Features

This paper introduces an enhanced Point Cloud Quality Assessment (PCQA) metric, termed PointPCA+, as an extension of PointPCA, with a focus on computational simplicity and feature richness. PointPCA+ refines the original PCA-based descriptors by employing Principal Component Anal ...
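For readers unfamiliar with PCA-based point cloud descriptors, the sketch below illustrates the general idea behind such features: eigen-decomposing the covariance of each point's local neighborhood and deriving shape statistics from the eigenvalues. It is a minimal illustration only, not the PointPCA+ implementation; the neighborhood size k and the specific features (linearity, planarity, sphericity) are assumptions made for the example.

    # Hypothetical sketch of PCA-based per-point descriptors, in the spirit of
    # PointPCA-style features; not the authors' implementation. The neighborhood
    # size k and the chosen eigenvalue features are assumptions for illustration.
    import numpy as np
    from scipy.spatial import cKDTree

    def pca_descriptors(points, k=25):
        # points: (N, 3) array of XYZ coordinates
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)            # k nearest neighbors per point
        feats = np.zeros((points.shape[0], 3))
        for i, nbrs in enumerate(idx):
            local = points[nbrs] - points[nbrs].mean(axis=0)
            cov = local.T @ local / k               # local 3x3 covariance
            evals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
            l1, l2, l3 = np.maximum(evals, 1e-12)
            feats[i] = ((l1 - l2) / l1,             # linearity
                        (l2 - l3) / l1,             # planarity
                        l3 / l1)                    # sphericity
        return feats

In a full-reference setting, features of this kind would be computed for both the reference and the distorted cloud and then compared to produce a quality score; the exact descriptors and pooling used by PointPCA+ are described in the paper itself.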

ComPEQ-MR

Compressed Point Cloud Dataset with Eye Tracking and Quality Assessment in Mixed Reality

Point clouds (PCs) have attracted researchers and developers due to their ability to provide immersive experiences with six degrees of freedom (6DoF). However, there are still several open issues in understanding the Quality of Experience (QoE) and visual attention of end users w ...

Extended Reality (XR) has emerged as a transformative and immersive technology with versatile applications in content creation and consumption. As XR gains popularity, companies eager to adopt it often possess a surface-level understanding, investing significant resources without ...

The Internet of Multisensory, Multimedia and Musical Things (Io3MT) is a new concept that arises from the confluence of several areas of computer science, arts, and humanities, with the objective of grouping in a single place devices and data that explore the five human senses, b ...

Immersive technologies like eXtended Reality (XR) are the next step in videoconferencing. In this context, understanding the effect of delay on communication is crucial. This article presents the first study on the impact of delay on collaborative tasks using a realistic Social X ...

The latest social VR technologies have enabled users to attend traditional media and arts performances together while being geographically removed, making such experiences accessible despite budget, distance, and other restrictions. In this work, we aim at improving the way remot ...

Deciphering Perceptual Quality in Colored Point Cloud

Prioritizing Geometry or Texture Distortion?

Point clouds represent one of the prevalent formats for 3D content. Distortions introduced at various stages in the point cloud processing pipeline affect the visual quality, altering their geometric composition, texture information, or both. Understanding and quantifying the imp ...

Transparent AI Disclosure Obligations

Who, What, When, Where, Why, How

Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the current ...

Affective computing has experienced substantial advancements in recognizing emotions through image and facial expression analysis. However, the incorporation of physiological data remains constrained. Emotion recognition with physiological data shows promising results in controll ...

As 3D immersive media continues to gain prominence, Point Cloud Quality Assessment (PCQA) is essential for ensuring high-quality user experiences. This paper introduces ViSam-PCQA, a no-reference PCQA metric guided by visual saliency information across three modalities, which fac ...

In recent years, a large variety of online communication tools have emerged, including social Virtual Reality (VR) platforms for interacting in a virtual world with participants being represented as virtual avatars. Given their popularity, an active area of research focuses on im ...

Emotion recognition systems are typically trained to classify a given psychophysiological state into emotion categories. Current platforms for emotion ground-truth collection show limitations for real-world scenarios of long-duration content (e.g. >10 minutes), namely: 1) Real ...

Within our Distributed and Interactive Systems research group, we focus on affective haptics, where we design and develop systems that can enhance human emotional states through the sense of touch. Such artificial haptic sensations can potentially augment and enhance our mind, bo ...

Instead of predicting just one emotion for one activity (e.g., video watching), fine-grained emotion recognition enables more temporally precise recognition. Previous works on fine-grained emotion recognition require segment-by-segment, fine-grained emotion labels to train the re ...

Affective Driver-Pedestrian Interaction

Exploring Driver Affective Responses toward Pedestrian Crossing Actions using Camera and Physiological Sensors

Eliciting and capturing drivers' affective responses in a realistic outdoor setting with pedestrians poses a challenge when designing in-vehicle, empathic interfaces. To address this, we designed a controlled, outdoor car driving circuit where drivers (N=27) drove and encountered ...

FeelTheNews

Augmenting Affective Perceptions of News Videos with Thermal and Vibrotactile Stimulation

Emotion plays a key role in the emerging wave of immersive, multi-sensory audience news engagement experiences. Since emotions can be triggered by somatosensory feedback, in this work we explore how augmenting news video watching with haptics can influence affective perceptions o ...

From Video to Hybrid Simulator

Exploring Affective Responses toward Non-Verbal Pedestrian Crossing Actions Using Camera and Physiological Sensors

Capturing drivers’ affective responses given driving context and driver-pedestrian interactions remains a challenge for designing in-vehicle, empathic interfaces. To address this, we conducted two lab-based studies using camera and physiological sensors. Our first study collected ...

The development and widespread adoption of immersive XR applications have led to a renewed interest in representations that are capable of reproducing real-world objects and scenes with high fidelity. Among such representations, point clouds have attracted the interest of industry ...

Virtual reality (VR) is the experience in a simulated interactive virtual space. It provides synthetic sensory feedback to users' actions that can physically and mentally immerse the users. Social VR is one type of VR system that allows multiple users to join a collaborative virt ...

BreatheWithMe

Exploring Visual and Vibrotactile Displays for Social Breath Awareness during Colocated, Collaborative Tasks

Sharing breathing signals has the capacity to provide insights into hidden experiences and enhance interpersonal communication. However, it remains unclear how the modality of breath signals (visual, haptic) is socially interpreted during collaborative tasks. In this mixed-method ...