Watermarks vs. Perturbations for Preventing AI-based Style Editing
GenAI Watermarking Workshop at ICLR · 2025
Compares traditional watermarking and adversarial protection methods for preventing unauthorized style editing with Stable Diffusion.
Research Output
Browse publications, preprints, and projects.
IEEE International Conference on Image Processing · 2025
Shows that perturbation-based image protection methods often fail against diffusion-based editing and can even inadvertently strengthen prompt-guided edits instead of preventing them.
IEEE International Conference on Image Processing · 2024
Studies how semantic and visual saliency influence image manipulation detection and motivates more semantically aware forensic analysis.
arXiv · 2026
Introduces a dataset of small but semantically relevant image manipulations.
arXiv · 2026
Introduces a large-scale dataset for remote sensing video question answering, designed to assess the spatiotemporal, scene-centric, and reasoning-oriented capabilities of MLLMs.
arXiv · 2026
Provides a systematic and structured overview of protection mechanisms against diffusion-based image manipulation, organized along three dimensions: threat model, victim image type, and protection paradigm.
arXiv · 2025
Introduces a lightweight protection strategy for defending artistic style against imitation by fine-tuned diffusion models.