45th SIGGRAPH 2018: Vancouver, BC, Canada - Posters Proceedings
- Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH 2018, Vancouver, BC, Canada, August 12-16, 2018, Posters Proceedings. ACM 2018, ISBN 978-1-4503-5817-0
Art & design
- Kazuki Miyazaki, Issei Fujishiro:
Automatic generation of artworks using virtual photoelastic material. 1:1-1:2
- Néill O'Dwyer, Nicholas Johnson, Rafael Pagés, Jan Ondrej, Konstantinos Amplianitis, Enda Bates, David S. Monaghan, Aljosa Smolic:
Beckett in VR: exploring narrative using free viewpoint video. 2:1-2:2
- Seungbae Bang, Sung-Hee Lee:
Computation of skinning weight using spline interface. 3:1-3:2
- Zihao Song, Serguei A. Mokhov, Miao Song, Sudhir P. Mudur:
Creative use of signal processing and MARF in ISSv2 and beyond. 4:1-4:2
- Martina R. Fröschl, Alfred Vendl:
CRISPR/Cas9-NHEJ: action in the nucleus. 5:1-5:2
- Kohei Ogawa, Kengo Tanaka, Tatsuya Minagawa, Yoichi Ochiai:
Design method of digitally fabricated spring glass pen. 6:1-6:2
- Christy Spangler, Eric Stolzenberg:
El faro: developing a digital illustration of hull wreckage 15,400 feet below the surface of the Atlantic ocean. 7:1-7:2
- Richard Cottrell:
Ephemeral sandscapes: using robotics to generate temporal landscapes. 8:1-8:2
- Tim McGraw:
Fractal anatomy: imaging internal and ambient structures. 9:1-9:2
- Maria Lantin, Simon Lysander Overstall, Hongzhu Zhao:
I am afraid: voice as sonic sculpture. 10:1-10:2
- Shinji Mizuno, Yuka Oba, Nao Kotani, Yoichi Shinchi, Kenji Funahashi, Shinya Oguri, Koji Oguri, Takami Yasuda:
Interactive projection mappings in a Japanese traditional house. 11:1-11:2
- Jaedong Lee, Jehee Lee:
Learning to move in crowd. 12:1-12:2
- Steve Caruso:
Painting with DEGAS: (digitally extrapolated graphics via algorithmic strokes). 13:1-13:2
- Ya-Bo Huang, Mei-Yun Chen, Ming Ouhyoung:
Perceptual-based CNN model for watercolor mixing prediction. 14:1-14:2
- Yi-Lung Kao, Yu-Sheng Chen, Ming Ouhyoung:
Progressive-CRF-net: single image radiometric calibration using stacked CNNs. 15:1-15:2
- Jane Prophet, Yong Ming Kow, Mark Hurry:
Small trees, big data: augmented reality model of air quality data via the Chinese art of "artificial" tray planting. 16:1-16:2
- Yuka Takahashi, Tsukasa Fukusato:
Stitch: an interactive design system for hand-sewn embroidery. 17:1-17:2
- Predrag K. Nikolic, Hua Yang, Jyunjye Chen, George Peter Stankevich:
Syntropic counterpoints: art of AI sense or machine made context art. 18:1-18:2
- Águeda Simó:
The stereoscopic art installation eccentric spaces. 19:1-19:2
- Kyoung Lee Swearingen, Scott Swearingen:
Wall mounted level: a cooperative mixed reality game about reconciliation. 20:1-20:2
Augmented & virtual realities
- Gyorgy Denes, Kuba Maruszczyk, Rafal K. Mantiuk:
Exploiting the limitations of spatio-temporal vision for more efficient VR rendering. 21:1-21:2
- Alberto Badías, Icíar Alfaro, David González, Francisco Chinesta, Elías Cueto:
Improving the realism of mixed reality through physical simulation. 22:1-22:2
- Hui-Ju Chen, Zi-Xin You, Yun-Ho Yu, Jen-Ming Chen, Chia-Chun Chang, Chien-Hsing Chou:
Interactive teaching aids design for essentials of anatomy and physiology: using bones and muscles as example. 23:1-23:2
- Takuro Nakao, Yun Suen Pai, Megumi Isogai, Hideaki Kimata, Kai Kunze:
Make-a-face: a hands-free, non-intrusive device for tongue/mouth/cheek input using EMG. 24:1-24:2
- Yiming Lin, Pieter Peers, Abhijeet Ghosh:
On-site example-based material appearance digitization. 25:1-25:2
- Ping-Hsuan Han, Jia-Wei Lin, Chen-Hsin Hsieh, Jhih-Hong Hsu, Yi-Ping Hung:
tARget: limbs movement guidance for learning physical activities with a video see-through head-mounted display. 26:1-26:2
- Katharina Krösl, Anna Felnhofer, Johanna Xenia Kafka, Laura Schuster, Alexandra Rinnerthaler, Michael Wimmer, Oswald D. Kothgassner:
The virtual schoolyard: attention training in virtual reality for children with attentional disorders. 27:1-27:2
- Wan-Lun Tsai, Min-Chun Hu:
Training assistant: strengthen your tactical nous with proficient virtual basketball players. 28:1-28:2
- Jotaro Shigeyama, Takeru Hashimoto, Shigeo Yoshida, Taiju Aoki, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose:
Transcalibur: dynamic 2D haptic shape illusion of virtual object by weight moving VR controller. 29:1-29:2
Display & rendering
- Matthew Justice, Ergun Akleman:
A process to create dynamic landscape paintings using barycentric shading with control paintings. 30:1-30:2
- Naoki Hashimoto, Kyosuke Hamamoto:
Aerial 3D display using a symmetrical mirror structure. 31:1-31:2
- Yoshiki Terashima, Kengo Fujii, Hirotsugu Yamamoto, Masaki Yasugi, Shiro Suyama, Yukihiro Takeda:
Aerial 3D/2D composite display: depth-fused 3D for the central user and 2D for surrounding audiences. 32:1-32:2
- Martin Ritz, Pedro Santos, Dieter W. Fellner:
Automated acquisition and real-time rendering of spatially varying optical material behavior. 33:1-33:2
- Yusuke Tokuyoshi, Tomohiro Mizokuchi:
Conservative Z-prepass for frustum-traced irregular Z-buffers. 34:1-34:2
- Vineet Batra, Ankit Phogat, Mridul Kavidayal:
General primitives for smooth coloring of vector graphics. 35:1-35:2
- Fei Wang, Shujin Lin, Ruomei Wang, Yi Li, Baoquan Zhao, Xiaonan Luo:
Improving incompressible SPH simulation efficiency by integrating density-invariant and divergence-free conditions. 36:1-36:2
- Jiangyan Han, Ishtiaq Rasool Khan, Susanto Rahardja:
Lighting condition adaptive tone mapping method. 37:1-37:2
- Tobias Bertel, Christian Richardt:
MegaParallax: 360° panoramas with motion parallax. 38:1-38:2
- Antoine Toisoul, Daljit Singh J. Dhillon, Abhijeet Ghosh:
Practical acquisition and rendering of common spatially varying holographic surfaces. 39:1-39:2
- Yuliya Gitlina, Daljit Singh J. Dhillon, Jan Hansen, Dinesh K. Pai, Abhijeet Ghosh:
Practical measurement-based spectral rendering of human skin. 40:1-40:2
- Markus Schütz, Michael Wimmer:
Progressive real-time rendering of unprocessed point clouds. 41:1-41:2
- Anastasia Feygina, Dmitry I. Ignatov, Ilya Makarov:
Realistic post-processing of rendered 3D scenes. 42:1-42:2
- Yen-Chih Chiang, Shih-Song Cheng, Huei-Siou Chen, Le-Jean Wei, Li-Min Huang, David K. T. Chu:
Retinal resolution display technology brings impact to VR industry. 43:1-43:2
- Keiko Nakamoto, Takafumi Koike:
Which BSSRDF model is better for heterogeneous materials? 44:1-44:2
Hardware interfaces
- Nao Asano, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto:
3D facial geometry analysis and estimation using embedded optical sensors on smart eyewear. 45:1-45:2
- Paul Canada, George Ventura, Christopher Iossa, Orquidia Moreno, William J. Joel:
Development of an open source motion capture system. 46:1-46:2
- Kaizhang Kang, Zimin Chen, Jiaping Wang, Kun Zhou, Hongzhi Wu:
Learning optimal lighting patterns for efficient SVBRDF acquisition. 47:1-47:2
- Yoichi Ochiai, Kazuki Otao, Yuta Itoh, Shouki Imai, Kazuki Takazawa, Hiroyuki Osone, Atsushi Mori, Ippei Suzuki:
Make your own retinal projector: retinal near-eye displays via metamaterials. 48:1-48:2
- Kenta Yamamoto, Kotaro Omomo, Kazuki Takazawa, Yoichi Ochiai:
Solar projector. 49:1-49:2
- Simone Barbieri, Tao Jiang, Ben Cawthorne, Zhidong Xiao, Xiaosong Yang:
3D content creation exploiting 2D character animation. 50:1-50:2
- Huiyi Fang, Kenji Funahashi:
Automatic display zoom for people suffering from presbyopia. 51:1-51:2
- Buck Barbieri, Naomi Hutchens, Kayleigh Harrison:
Collaborative animation production from students' perspective: creating short 3D CG films through international team-work. 52:1-52:2
- Martin Kilian, Hui Wang, Eike Schling, Jonas Schikore, Helmut Pottmann:
Curved support structures and meshes with spherical vertex stars. 53:1-53:2
- Or Fleisher, Shirin Anlen:
Volume: 3D reconstruction of history for immersive platforms. 54:1-54:2
Research
- Vincent Gaubert, Enki Londe, Thibaut Poittevin, Alain Lioret:
3D-mesh cutting based on fracture photographs. 55:1-55:2
- Hye-Sun Kim, Yun-Ji Ban, Chang-Joon Park:
A seamless texture color adjustment method for large-scale terrain reconstruction. 56:1-56:2
- Kenta Yamamoto, Riku Iwasaki, Tatsuya Minagawa, Ryota Kawamura, Bektur Ryskeldiev, Yoichi Ochiai:
BOLCOF: base optimization for middle layer completion of 3D-printed objects without failure. 57:1-57:2
- Byungjun Kwon, Moonwon Yu, Hanyoung Jang, KyuHyun Cho, Hyundong Lee, Taesung Hahn:
Deep motion transfer without big data. 58:1-58:2
- Xiaodong Cun, Feng Xu, Chi-Man Pun, Hao Gao:
Depth assisted full resolution network for single image-based view synthesis. 59:1-59:2
- Sherzod Salokhiddinov, Seungkyu Lee:
Depth from focus for 3D reconstruction by iteratively building uniformly focused image set. 60:1-60:2
- Chloe LeGendre, Kalle Bladin, Bipin Kishore, Xinglei Ren, Xueming Yu, Paul E. Debevec:
Efficient multispectral facial capture with monochrome cameras. 61:1-61:2
- Nobuhiko Mukai, Taishi Nishikawa, Youngha Chang:
Evaluation of stretched thread lengths in spinnability simulations. 62:1-62:2
- Yuji Suzuki, Jotaro Shigeyama, Shigeo Yoshida, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose:
Food texture manipulation by face deformation. 63:1-63:2
- Quan Qi, Qingde Li:
From visible to printable: thin surface with implicit interior structures. 64:1-64:2
- Ivo Aluízio Stinghen Filho, Estevam Nicolas Chen, Jucimar Maia da Silva Junior, Ricardo da Silva Barboza:
Gesture recognition using leap motion: a comparison between machine learning algorithms. 65:1-65:2
- Yifan Men, Zeyu Shen, Dawar Khan, Dong-Ming Yan:
Improving regularity of the centroidal Voronoi tessellation. 66:1-66:2
- Yeonho Kim, Daijin Kim:
Interactive dance performance evaluation using timing and accuracy similarity. 67:1-67:2
- Ming-Shiuan Chen, I-Chao Shen, Chun-Kai Huang, Bing-Yu Chen:
Large-scale fabrication with interior zometool structure. 68:1-68:2
- Ryota Natsume, Tatsuya Yatagawa, Shigeo Morishima:
RSGAN: face swapping and editing using face and hair representation in latent spaces. 69:1-69:2
- Danny Huang, Ian Stavness:
Simulation of emergent rippling on growing thin-shells. 70:1-70:2
- Feier Cao, M. H. D. Yamen Saraiji, Kouta Minamizawa:
Skin+: programmable skin as a visuo-tactile interface. 71:1-71:2
- Abdelhak Saouli, Mohamed Chaouki Babahenini:
Towards a stochastic depth maps estimation for textureless and quite specular surfaces. 72:1-72:2
- Dominic Branchaud, Walter Muskovic, Maria Kavallaris, Daniel Filonik, Tomasz Bednarz:
Visual microscope for massive genomics datasets, expanded perception and interaction. 73:1-73:2
- Bianca Cirdei, Eike Falk Anderson:
Withering fruits: vegetable matter decay and fungus growth. 74:1-74:2