Matting — Aiarty

[2] Ke, Z., et al. (2020). MODNet: Real-time trimap-free portrait matting via objective decomposition. AAAI .

Author: [Your Name/Institution] Date: [Current Date] Abstract Image matting—the task of accurately extracting foreground elements with fine boundary details—remains a challenge for conventional computer vision methods, particularly for hair, fur, and translucent objects. This paper evaluates AIarty Matting , an AI-driven solution that leverages generative neural networks to produce alpha mattes. Using a dataset of 500 diverse images (portraits, e-commerce products, nature scenes), we compare AIarty Matting against three established methods: U²-Net, MODNet, and Adobe Photoshop’s “Select Subject” (AI-based). Metrics include SAD (Sum of Absolute Differences), MSE (Mean Squared Error), inference time per image, and user-rated boundary quality. Results indicate that AIarty Matting outperforms MODNet in fine detail retention (SAD improvement of 12.4%) but requires 1.8× higher inference latency. We conclude with recommendations for optimizing generative matting for real-time applications. aiarty matting

Table 1: Average metrics over AIM-500 dataset. Bold = best. [2] Ke, Z

Single GPU, version v1.2 of AIarty Matting, no trimap support (AIarty does not accept trimaps). Future work should test on video sequences and VR applications. References [1] Qin, X., et al. (2020). U²-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognition , 106, 107404. Using a dataset of 500 diverse images (portraits,

[3] Sengupta, S., et al. (2020). Background matting v2. CVPR .