Research

Research areas and current directions

The research agenda centers on trustworthy visual computing: understanding how generative models alter media, how authenticity can be verified, and how visual systems can be protected against misuse. The projects below reflect a blend of computer vision, AI security, and human-centered questions about how images are created, edited, and trusted.

Generative Models and Image Editing

This line of work studies how modern generative models edit and transform images, especially diffusion-based systems that make high-quality editing widely accessible. The goal is to better understand controllability, misuse, and protection when models are used for style transfer, content modification, or identity-preserving editing.

Current interests include protection against unauthorized style imitation, evaluation of editing pipelines, and practical mechanisms that help creators retain control over their visual assets.
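Diffusion-based editors commonly work by partially noising the input image and then denoising it under new conditioning (the SDEdit idea): an editing "strength" selects the diffusion timestep, trading fidelity to the original against freedom to change it. The forward-noising half of that recipe can be sketched in a few lines of NumPy; the linear beta schedule and function names below are illustrative assumptions, not taken from any particular system.

```python
import numpy as np

def noising_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    # Standard linear beta schedule; alpha_bar[t] is the fraction of
    # the original signal surviving at timestep t.
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def partial_noise(x0, strength, alpha_bar, rng):
    # Map editing "strength" in [0, 1] to a timestep: more strength
    # means more noise, so the denoiser has more freedom to alter
    # the image content.
    t = int(strength * (len(alpha_bar) - 1))
    a = alpha_bar[t]
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
    return xt, t
```

A low-strength edit leaves the noised image strongly correlated with the input, which is why structure-preserving edits use small strength values.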

Media Forensics and Authenticity

Media forensics research focuses on detecting manipulations, reasoning about visual evidence, and making authenticity judgments more reliable. This includes understanding how dataset biases, semantic context, and viewer attention affect image manipulation detection systems.

The broader aim is to support trustworthy multimedia pipelines with methods that are not only accurate, but also better aligned with real-world misinformation scenarios.
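One classical entry point to manipulation detection is noise-residual analysis: a high-pass filter suppresses image content, and the residual statistics of spliced or edited regions often differ from those of untouched regions. A toy NumPy sketch of the idea follows; the specific filter kernel and patch-scoring rule are illustrative choices, not a description of any deployed detector.

```python
import numpy as np

def highpass_residual(img):
    # 3x3 Laplacian-style high-pass filter: removes smooth content
    # and keeps the noise residual that forensics methods analyze.
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def patch_residual_energy(img, patch=8):
    # Mean absolute residual per non-overlapping patch; patches whose
    # energy is inconsistent with the rest of the image are candidate
    # manipulated regions.
    r = np.abs(highpass_residual(img))
    h, w = r.shape
    scores = {}
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            scores[(i, j)] = float(r[i:i + patch, j:j + patch].mean())
    return scores
```

Real systems replace the fixed kernel with learned filters and the energy score with a classifier, but the residual-first structure is the same.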

Adversarial Robustness and Protection

This direction explores adversarial perturbations, provenance protection, and robust defenses for images that will be exposed to downstream generative models. The central question is how to protect creators and sensitive media when such models can otherwise edit, imitate, or reuse content without authorization.

The work combines robustness analysis with practical protection strategies that remain lightweight, effective, and usable in realistic workflows.
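At a high level, many image-protection schemes follow a PGD-style recipe: within an imperceptibility budget, perturb the image so that a surrogate encoder's features move away from their original values, degrading downstream imitation or editing. A minimal NumPy sketch under those assumptions; `feature_fn` and `grad_fn` stand in for a differentiable surrogate model and are hypothetical, not components of any particular system.

```python
import numpy as np

def protect_image(img, feature_fn, grad_fn, eps=0.05, steps=10, seed=0):
    """PGD-style protection sketch: keep the perturbation inside an
    L-infinity ball of radius eps around img (pixel values in [0, 1])
    while pushing the surrogate features away from their originals."""
    rng = np.random.default_rng(seed)
    target = feature_fn(img)                  # features to escape from
    alpha = eps / steps                       # per-step size
    # Random start inside the budget, as in standard PGD, so the
    # first gradient is nonzero.
    x = np.clip(img + rng.uniform(-alpha, alpha, img.shape), 0.0, 1.0)
    for _ in range(steps):
        g = grad_fn(x, target)                # grad of feature distance
        x = x + alpha * np.sign(g)            # ascend the distance
        x = np.clip(x, img - eps, img + eps)  # enforce the budget
        x = np.clip(x, 0.0, 1.0)              # stay a valid image
    return x
```

The same loop structure appears whether the surrogate is a style encoder, an identity embedder, or a diffusion model's image encoder; only `feature_fn` and the distance being ascended change.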