
NEWS

2018-12-22

SIGGRAPH Asia 2018 Conference Journal

By Yu Liu

Conference Introduction

SIGGRAPH Asia is an interactive technology exhibition and conference that combines science, art, and commerce, and is one of the largest and most influential conferences of its kind in the world. This year's SIGGRAPH Asia was held in Tokyo, Japan from December 4th to 7th, 2018. The four-day conference attracted participants from all over the world to discuss cutting-edge work and development trends in computer graphics, including emerging technologies in research, science, art, animation, games, interaction, and education.


Photo: SIGGRAPH Asia 2018


Day 1 Fast Forward

Fast Forward is the opening event of SIGGRAPH Asia and the traditional program that draws all participants. It was held at 6 pm on the evening of December 4th in the auditorium of the Tokyo International Forum.


During the Fast Forward session, faculty and students of Zhejiang University's State Key Laboratory of CAD&CG made their debut at the conference. Professor Tang Min introduced his paper I-Cloth: Incremental Collision Handling for GPU-Based Interactive Cloth Simulation. Doctoral student Geng Jiahao introduced his paper Warp-guided GANs for Single-Photo Facial Animation. Doctoral student Zhang Meng introduced his paper Modeling Hair from an RGB-D Camera.


Photo: Fast Forward venue

 

Day 2

The Faces, Faces, Faces keynote session focused on the latest research in facial animation generation, 3D face reconstruction, and face tracking. Gotardo from Disney introduced an inventive 3D face reconstruction method that not only generates BRDF parameters matching the face image and a high-resolution normal map for each frame, but also encodes blood flow in the face, yielding a more vivid reconstruction of the 3D face model. Wu from Facebook introduced an incremental learning method for face research that detects faces in input images and generates high-resolution 3D face models containing wrinkles, improving the effectiveness of face tracking.


Cao from Snapchat proposed a real-time face tracking algorithm based on a monocular RGB camera and a learned dynamic rigidity prior, which improves the stability of face tracking. Prof. Zhou Kun from Zhejiang University's State Key Lab of CAD&CG presented a new method for generating facial animation from a single image: wg-GAN. Given a single portrait and a driving video of an expression unrelated to the input portrait, the method generates an expression animation of the input portrait while synthesizing natural facial wrinkles and inner-mouth texture, making the animated expression more natural and smoother.
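To make the warp-guided idea concrete, here is a minimal Python sketch of only the geometric step: sparse landmark displacements taken from the driving video are densified into a flow field and used to warp the portrait. Landmark detection is assumed to happen upstream, the function names are illustrative, and the paper's GAN refinement of wrinkles and the mouth interior is omitted entirely.

```python
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def warp_portrait(portrait, src_landmarks, dst_landmarks):
    """Warp a portrait so its landmarks move from src to dst (a sketch).

    portrait:      HxWx3 uint8 image
    src_landmarks: Nx2 (x, y) points detected on the portrait
    dst_landmarks: Nx2 target points taken from the driving video frame
    """
    h, w = portrait.shape[:2]
    src = np.asarray(src_landmarks, dtype=np.float64)
    dst = np.asarray(dst_landmarks, dtype=np.float64)
    # Densify the sparse landmark displacements into a flow field.
    rbf = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    flow = rbf(grid).reshape(h, w, 2).astype(np.float32)
    # Backward warp: sampling at positions displaced by -flow approximates
    # the inverse mapping (good enough for small expression changes).
    map_x = (xs - flow[..., 0]).astype(np.float32)
    map_y = (ys - flow[..., 1]).astype(np.float32)
    return cv2.remap(portrait, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```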


Day 3

Prof. Zhou Kun from our school gave a keynote speech entitled Modeling Things on (and in) Your Head, focusing on the three-dimensional reconstruction of hair and teeth. He proposed a data-driven method based on an RGB-D camera that uses a local similarity search-and-synthesis algorithm to generate realistic 3D hair models and textures. Liang from the University of Washington presented a fully video-based 3D hair modeling technique that recovers high-quality 3D hair models from a single video input.


Saito from the University of Southern California proposed a fully automatic hair model generation method based on deep learning. It uses an embedding network to learn a low-dimensional mapping between 3D hairstyle models and 2D images, allowing it to reconstruct the 3D hairstyle in an image even from low-quality input. Velinov from Disney introduced a high-precision 3D tooth reconstruction technique that accurately captures the internal features of teeth and uses them to achieve high-quality, realistic tooth reconstruction.
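As a rough illustration of the embedding idea, the PyTorch sketch below maps a 2D image to a low-dimensional hairstyle code that can be matched against codes of database hairstyles. The architecture is an assumption for clarity; in the actual paper the code is the latent space of a volumetric variational autoencoder over 3D hair.

```python
import torch
import torch.nn as nn

class HairEmbedder(nn.Module):
    """Maps a hair image to a low-dimensional hairstyle code (illustrative)."""
    def __init__(self, code_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, code_dim)

    def forward(self, image):            # image: (B, 3, H, W)
        return self.fc(self.features(image).flatten(1))

# Training pairs each image with the latent code of its ground-truth 3D
# hairstyle, so even a low-quality input lands near a plausible model:
#   loss = ((embedder(image) - target_code) ** 2).mean()
```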

 

Day 4

The keynote session on Low-Level Imaging focused on research in image smoothing, image super-resolution, colorization, and image segmentation. Researchers from Shandong University and Microsoft Research Asia proposed an image smoothing method based on unsupervised deep learning, which uses a spatially adaptive scheme to apply different regularization terms to different regions of the image, enabling the neural network to produce high-quality smoothing results.
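A minimal sketch of what such a spatially adaptive objective can look like, assuming a simple edge-based weighting (the paper's actual regularizers differ): flat regions receive strong total-variation smoothing while strong image edges are left intact.

```python
import torch

def smoothing_loss(pred, image, edge_weight=10.0):
    """Unsupervised smoothing loss: fidelity plus spatially weighted TV.

    pred, image: (B, 3, H, W) tensors in [0, 1]. The adaptive weighting
    here is an illustrative choice, not the paper's exact formulation.
    """
    fidelity = ((pred - image) ** 2).mean()
    # Input-image gradients decide how much smoothing each location gets.
    dx_img = (image[..., :, 1:] - image[..., :, :-1]).abs().mean(1, keepdim=True)
    dy_img = (image[..., 1:, :] - image[..., :-1, :]).abs().mean(1, keepdim=True)
    wx = torch.exp(-edge_weight * dx_img)   # ~0 at edges, ~1 in flat regions
    wy = torch.exp(-edge_weight * dy_img)
    dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean(1, keepdim=True)
    dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean(1, keepdim=True)
    tv = (wx * dx).mean() + (wy * dy).mean()
    return fidelity + tv
```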


Ge from the University of Hong Kong used deep learning to split the super-resolution problem into two parts, a deterministic part and a stochastic part, achieving more efficient compression and recovery of images. Zhang from Soochow University proposed a two-stage sketch colorization method that simplifies the coloring step of the animation pipeline: using deep learning in two stages, drafting and refinement, it generates high-quality colorization results from a line drawing and simple color hints as inputs.
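The deterministic/stochastic split can be sketched as below, with an illustrative architecture: one branch regresses the predictable low-frequency content, the other hallucinates residual texture from noise (in the paper that part is effectively trained adversarially; all details here are assumptions).

```python
import torch
import torch.nn as nn

class TwoPartSR(nn.Module):
    """Super-resolution as deterministic + stochastic parts (a sketch)."""
    def __init__(self, scale=4, noise_dim=16):
        super().__init__()
        self.noise_dim = noise_dim
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)
        self.deterministic = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))
        self.stochastic = nn.Sequential(
            nn.Conv2d(3 + noise_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, low_res):
        base = self.up(low_res)                  # cheap upsampling
        det = base + self.deterministic(base)    # predictable content
        z = torch.randn(base.shape[0], self.noise_dim,
                        base.shape[2], base.shape[3], device=base.device)
        residual = self.stochastic(torch.cat([det, z], dim=1))
        return det, det + residual               # both parts of the output
```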


Tan from George Mason University and Adobe proposed a simple and effective palette-based image decomposition algorithm. It is several orders of magnitude faster than previous work, requires no numerical optimization, and matches the quality of today's most advanced methods in just 48 lines of Python code.
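The geometric core of that approach can be sketched briefly: treat each pixel as a 5-D RGBXY point, take the convex hull, and express every pixel as barycentric weights over the hull vertices. The sketch below uses scipy and omits the paper's hull simplification and final palette step, so it illustrates the idea rather than reproducing the paper's implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def rgbxy_weights(image):
    """Per-pixel barycentric weights over RGBXY convex-hull vertices.

    Returns (vertex_indices, weights): each pixel is a convex combination
    of at most 6 hull vertices (the corners of its containing 5-simplex).
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # 5-D features: RGB color plus normalized XY position.
    rgbxy = np.concatenate(
        [image.reshape(-1, 3).astype(np.float64) / 255.0,
         xs.reshape(-1, 1) / max(h, w),
         ys.reshape(-1, 1) / max(h, w)], axis=1)
    hull = ConvexHull(rgbxy)
    verts = rgbxy[hull.vertices]
    # Tessellate the hull vertices so every pixel falls inside one
    # 5-simplex, then read off its barycentric coordinates there.
    tri = Delaunay(verts)
    simplex = np.maximum(tri.find_simplex(rgbxy), 0)  # clamp round-off -1s
    T = tri.transform[simplex]                        # (N, 6, 5)
    b = np.einsum("nij,nj->ni", T[:, :5, :], rgbxy - T[:, 5, :])
    bary = np.hstack([b, 1.0 - b.sum(axis=1, keepdims=True)])
    # The paper assembles these into one sparse matrix; per-pixel
    # index/weight pairs are returned here to keep the sketch simple.
    return tri.simplices[simplex], bary
```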



