Figure: Warped clothes generated by the pretrained Geometric Matching Module

Abstract

Virtual try-on of clothes has attracted much interest as global retail moves to e-commerce and online shopping. Different methodologies have been proposed to let consumers virtually try on clothes and understand how garments fit their bodies. Virtual try-on modules range from (1) simulating 2D clothing images on 3D body models, as proposed in DRAPE, to (2) conditional analogy GANs that swap fashion articles.

The best practice in image translation for virtual try-on is proposed in VITON by Han et al., which presents a virtual try-on network that does not rely on 3D information. This 2D solution proves more economical; however, it suffers from poor shape-context matching alignment [2]. To address this, researchers proposed CP-VTON and, most recently, CP-VTON+, published in August 2020. These newer solutions preserve cloth characteristics such as texture, logos, and text [3].

Our research improves on the CP-VTON+ results by updating the VGG19 network used in the perceptual loss that regularizes the GAN. We do this by activating additional layers that were excluded in the original implementation, together with hyperparameter optimization. We found that VGG loss still provides the best results compared to alternatives such as Inception. Our model yields a 75.3% SSIM score (74.3% baseline) and 25.4% LPIPS (26.3% baseline), and also produces visually improved images.
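As a rough illustration of the idea, the following PyTorch sketch shows one way a VGG19 perceptual loss with configurable layer activations could be set up. The layer indices, layer weights, and L1 feature distance here are assumptions chosen for clarity; they are not the exact configuration used in CP-VTON+ or in our experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights


class VGGPerceptualLoss(nn.Module):
    """Perceptual loss over selected VGG19 feature maps.

    Passing more (or deeper) indices in `layer_ids` "activates"
    additional layers in the loss, which is the knob discussed above.
    Inputs are assumed to be ImageNet-normalized RGB tensors.
    """

    # Indices into vgg19().features of the ReLU outputs commonly used
    # for perceptual losses: relu1_2, relu2_2, relu3_4, relu4_4, relu5_4.
    DEFAULT_LAYERS = (3, 8, 17, 26, 35)

    def __init__(self, layer_ids=DEFAULT_LAYERS, layer_weights=None):
        super().__init__()
        features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features
        # Split the feature extractor into blocks ending at each chosen layer.
        self.blocks = nn.ModuleList()
        prev = 0
        for idx in layer_ids:
            self.blocks.append(nn.Sequential(*features[prev:idx + 1]))
            prev = idx + 1
        # The extractor is frozen; only the generator is trained against it.
        for p in self.parameters():
            p.requires_grad = False
        self.layer_weights = layer_weights or [1.0] * len(self.blocks)
        self.criterion = nn.L1Loss()

    def forward(self, pred, target):
        loss = 0.0
        x, y = pred, target
        for w, block in zip(self.layer_weights, self.blocks):
            x, y = block(x), block(y)
            loss = loss + w * self.criterion(x, y)
        return loss
```

In training, this term would be added to the generator objective alongside the pixel-level and adversarial losses, e.g. `loss = l1_loss + perceptual(fake, real) + adv_loss`; the per-layer weights are natural targets for the hyperparameter optimization mentioned above.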