https://uist.acm.org/uist2022/accepted-papers.html

  • Checking the list of accepted papers

We-toon: A Communication Support System between Writers and Artists in Collaborative Webtoon Sketch Revision

  • Hyung-Kwon Ko: Webtoon AI, NAVER WEBTOON Corp., Seoul National University; Subin An: Seoul National University; Gwanmo Park: Seoul National University; Seung Kwon Kim: NAVER WEBTOON; Daesik Kim: Naver Webtoon Ltd; Bohyoung Kim: Hankuk University of Foreign Studies; Jaemin Jo: Sungkyunkwan University; Jinwook Seo: Seoul National University

    • Is it related to Naver?

OmniScribe: Authoring Immersive Audio Descriptions for 360° Videos

  • Ruei-Che Chang: University of Michigan; Chao-Hsien Ting: National Taiwan University; Chia-Sheng Hung: National Taiwan University; Wan-Chen Lee: National Taiwan University; Liang-Jin Chen: National Taiwan University; Yu-Tzu Chao: Audio Description Development Association; Bing-Yu Chen: National Taiwan University; Anhong Guo: University of Michigan

Augmented Chironomia for Presenting Data to Remote Audiences

  • Brian D. Hall: University of Michigan; Lyn Bartram: Simon Fraser University; Matthew Brehmer: Tableau Research

    • Chironomia, a sign-language-like art? It’s quite niche; why this topic?

WaddleWalls: Room-scale Interactive Partitioning System using a Swarm of Robotic Partitions

  • Yuki Onishi: Tohoku University; Kazuki Takashima: Tohoku University; Shoi Higashiyama: Tohoku University; Kazuyuki Fujita: Tohoku University; Yoshifumi Kitamura: Tohoku University

  • People from Tohoku University
  • 🇯🇵

Social Simulacra: Creating Populated Prototypes for Social Computing Systems

  • Joon Sung Park: Stanford University; Lindsay Popowski: Stanford University; Carrie J Cai: Google; Meredith Ringel Morris: Google Research; Percy Liang: Stanford University; Michael S. Bernstein: Stanford University

  • Is “social computing” about computation over social interactions, like on SNS?

CodeToon: Story Ideation, Auto Comic Generation, and Structure Mapping for Code-Driven Storytelling

  • Sangho Suh: University of Waterloo; Jian Zhao: University of Waterloo; Edith Law: University of Waterloo

  • They’re doing everything.

HingeCore: Laser-Cut Foamcore for Fast Assembly

  • Muhammad Abdullah: Hasso Plattner Institute; Romeo Sommerfeld: Hasso Plattner Institute; Bjarne Sievers: Hasso Plattner Institute; Leonard Geier: Hasso Plattner Institute; Jonas Noack: Hasso Plattner Institute; Marcus Ding: Hasso Plattner Institute; Christoph Thieme: Hasso Plattner Institute; Laurenz Seidel: Hasso Plattner Institute; Lukas Fritzsche: Hasso Plattner Institute; Erik Langenhan: Hasso Plattner Institute; Oliver Adameck: Hasso Plattner Institute; Moritz Dzingel: Hasso Plattner Institute; Thomas Kern: Hasso Plattner Institute; Martin Taraz: Hasso Plattner Institute; Conrad Lempert: Hasso Plattner Institute; Shohei Katakura: Hasso Plattner Institute; Hany Mohsen Elhassany: Hasso Plattner Institute; Thijs Roumen: Hasso Plattner Institute; Patrick Baudisch: Hasso Plattner Institute

  • Hasso Plattner Institute; I think I’ve seen a Japanese member on Twitter.
  • Thijs is here.
    • There are a lot of authors.

Notational Programming for Notebook Environments: A Case Study with Quantum Circuits

  • Ian Arawjo: Cornell University; Anthony J DeArmas: Cornell University; Michael Roberts: Cornell University; Shrutarshi Basu: Harvard University; Tapan Parikh: Cornell Tech

  • A fun programming language, like that kind of vibe?

Grid-Coding: An Accessible, Efficient, and Structured Coding Paradigm for Blind and Low-Vision Programmers

  • Md Ehtesham-Ul-Haque: Pennsylvania State University; Syed Mostofa Monsur: Bangladesh University of Engineering and Technology; Syed Masum Billah: Pennsylvania State University

  • Solving the problem of blindness with a coding paradigm, wow.
  • Replaces traditional whitespace-based indentation with meaningful indentation cells, I see.
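
As a toy illustration of that idea (my own sketch, not the paper’s actual system — `to_grid` and its row format are hypothetical), whitespace-indented code can be flattened into a grid whose rows carry an explicit indentation-level cell that a screen reader can announce directly:

```python
def to_grid(source: str, tab: int = 4):
    """Convert whitespace-indented code into (indent_level, statement) rows.

    Each row makes the nesting depth an explicit cell value, so a blind
    programmer hears "level 2: return x" instead of counting spaces.
    """
    rows = []
    for line in source.splitlines():
        stripped = line.lstrip(" ")
        if not stripped:
            continue  # skip blank lines
        level = (len(line) - len(stripped)) // tab
        rows.append((level, stripped))
    return rows

code = "def f(x):\n    if x > 0:\n        return x\n    return -x"
for level, stmt in to_grid(code):
    print(level, stmt)
# rows: (0, 'def f(x):'), (1, 'if x > 0:'), (2, 'return x'), (1, 'return -x')
```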

OPAL: Multimodal Image Generation for News Illustrations

  • Vivian Liu: Columbia University; Han Qiao: University of Toronto, Columbia University; Lydia B Chilton: Columbia University

  • Prof. Chilton from Columbia.

ForceSight: Non-Contact Force Sensing with Laser Speckle Imaging

  • Siyou Pei: University of California, Los Angeles; Pradyumna Chari: University of California, Los Angeles; Xue Wang: University of California, Los Angeles; Xiaoying Yang: University of California, Los Angeles; Achuta Kadambi: University of California, Los Angeles; Yang Zhang: University of California, Los Angeles

  • Sensing force with laser reflection, that kind of technology.

Prototyping Soft Devices with Interactive Bioplastics

  • Marion Koelle: Saarland University, Saarland Informatics Campus, OFFIS - Institute for Information Technology; Madalina Nicolae: Saarland University, Saarland Informatics Campus, Léonard de Vinci Pôle Universitaire, Research Center; Aditya Shekhar Nittala: Saarland University, Saarland Informatics Campus, University of Calgary; Marc Teyssier: Léonard de Vinci Pôle Universitaire, Research Center; Jürgen Steimle: Saarland University, Saarland Informatics Campus

Personalized Game Difficulty Prediction Using Factorization Machines

  • Jeppe Theiss Kristensen: IT University of Copenhagen; Christian Guckelsberger: Aalto University; Paolo Burelli: IT University of Copenhagen; Perttu Hämäläinen: Aalto University
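
A factorization machine scores an interaction (here, say, player × level features) through low-rank latent vectors. A minimal sketch of the standard second-order FM prediction (textbook formula, not the authors’ code):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine prediction.

    x: (n,) feature vector, w0: bias, w: (n,) linear weights,
    V: (n, k) latent factors. The pairwise term sum_{i<j} <V_i, V_j> x_i x_j
    is computed in O(nk) via the identity
    0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2].
    """
    linear = w0 + w @ x
    s = V.T @ x                                   # (k,)
    pairwise = 0.5 * np.sum(s**2 - (V**2).T @ (x**2))
    return linear + pairwise

# Sanity check against the naive O(n^2) double sum:
rng = np.random.default_rng(0)
n, k = 5, 3
x, w, V = rng.normal(size=n), rng.normal(size=n), rng.normal(size=(n, k))
naive = 0.1 + w @ x + sum(
    (V[i] @ V[j]) * x[i] * x[j] for i in range(n) for j in range(i + 1, n)
)
assert np.isclose(fm_predict(x, 0.1, w, V), naive)
```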

Project Primrose: Reflective Light-Diffuser Modules for Non-Emissive Flexible Display Systems

  • Christine Dierk: Adobe Research; TJ Rhodes: Adobe Research; Gavin Miller: Adobe Research

Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI

  • Hiroki Kaimoto: The University of Tokyo, University of Calgary; Kyzyl Monteiro: IIIT-Delhi; Mehrad Faridan: University of Calgary; Jiatong Li: University of Chicago; Samin Farajian: University of Calgary; Yasuaki Kakehi: The University of Tokyo; Ken Nakagaki: University of Chicago; Ryo Suzuki: University of Calgary

  • Both ken0324 and ryosuzk are here.
  • Prof. Kakehi is also here. It’s interesting that many of the authors are Japanese, but the team is international.
  • 🇯🇵

TangibleGrid: Tangible Web Layout Design for Blind Users

  • Jiasheng Li: University of Maryland; Zeyu Yan: University of Maryland; Ebrima Haddy Jarjue: University of Maryland; Ashrith Shetty: University of Maryland; Huaishu Peng: University of Maryland

  • Naveen Sendhilnathan, Ting Zhang, Ben Lafreniere, Tovi Grossman, and Tanya R. Jonker from Meta Inc. worked on a project titled “Detecting Input Recognition Errors and User Errors using Gaze Dynamics in Virtual Reality.”
  • Jasmine Lu and Pedro Lopes from the University of Chicago worked on a project titled “Integrating Living Organisms in Devices to Implement Care-based Interactions.”
  • Jun Nishida, Yudai Tanaka, Romain Nith, and Pedro Lopes, all from the University of Chicago, developed DigituSync, a dual-user passive exoskeleton glove that adaptively shares hand gestures.
  • FeedLens is a project on polymorphic lenses for personalizing exploratory search over knowledge graphs, led by Harmanpreet Kaur, Doug Downey, Amanpreet Singh, Evie Yu-Yen Cheng, Daniel S Weld, and Jonathan Bragg.
  • Difeng Yu, Ruta Desai, Ting Zhang, Hrvoje Benko, Tanya R. Jonker, and Aakar Gupta collaborated on a project titled “Optimizing the Timing of Intelligent Suggestion in Virtual Reality.”
  • “Look over there! Investigating Saliency Modulation for Visual Guidance with Augmented Reality Glasses” is a project by Jonathan Sutton, Tobias Langlotz, Alexander Plopski, Stefanie Zollmann, Yuta Itoh, and Holger Regenbrecht.
  • DiscoBand is a multiview depth-sensing smartwatch strap for hand, arm, and environment tracking, developed by Nathan Devrio and Chris Harrison from Carnegie Mellon University.
  • “Prolonging VR Haptic Experiences by Harvesting Kinetic Energy from the User” is a project by Shan-Yuan Teng, K. D. Wu, Jacqueline Chen, and Pedro Lopes from the University of Chicago.
  • Kinergy is a project focused on creating 3D printable motion using embedded kinetic energy, led by Liang He, Xia Su, Huaishu Peng, Jeffrey Ian Lipton, and Jon E. Froehlich.
  • Diffscriber is a project on describing visual design changes to support mixed-ability collaborative presentation authoring, developed by Yi-Hao Peng, Jason Wu, Jeffrey P Bigham, and Amy Pavel.
  • GANzilla is a project on user-driven direction discovery in generative adversarial networks, led by Noyan Evirgen and Xiang ‘Anthony’ Chen from UCLA.
  • Mimic is a project on in-situ recording and re-use of demonstrations to support robot teleoperation, developed by Karthik Mahadevan, Yan Chen, Maya Cakmak, Anthony Tang, and Tovi Grossman.
  • iWood is a project on a makeable vibration sensor for interactive plywood, led by Te-Yen Wu and Xing-Dong Yang.
  • ReCapture is a project on AR-guided time-lapse photography, developed by Ruyu Yan, Jiatian Sun, Longxiulin Deng, and Abe Davis from Cornell University.
  • AirLogic is a project on embedding pneumatic computation and I/O in 3D models to fabricate electronics-free interactive objects, led by Valkyrie Savage, Carlos Tejada, Mengyu Zhong, Raf Ramakers, Daniel Ashbrook, and Hyunyoung Kim.
  • “Sketch-Based Design of Foundation Paper Pieceable Quilts” is a project by Mackenzie Leake, Gilbert Bernstein, and Maneesh Agrawala.
  • HapTag is a project on a compact actuator for rendering push-button tactility on soft surfaces, developed by Yanjun Chen, Xuewei Liang, Si Chen, Yuwen Chen, Hongnan Lin, Hechuan Zhang, Chutian Jiang, Feng Tian, Yu Zhang, Shanshan Yao, and Teng Han.
  • MagneShape is a project on a non-electrical pin-based shape-changing display, developed by Kentaro Yasu from NTT Communication Science Laboratories.
  • “Color-to-Depth Mappings as Depth Cues in Virtual Reality” is a project by Zhipeng Li, Yikai Cui, Tianze Zhou, Yu Jiang, Yuntao Wang, Yukang Yan, Michael Nebeling, and Yuanchun Shi.
  • Concept-Labeled Examples for Library Comparison is a project by Litao Yan, Miryung Kim, Bjoern Hartmann, Tianyi Zhang, and Elena L. Glassman, focusing on the use of libraries in development.

Gesture-aware Interactive Machine Teaching with In-situ Object Annotations

  • Zhongyi Zhou: The University of Tokyo; Koji Yatani: The University of Tokyo

  • Professor Yatani
  • 🇯🇵

The paper “Gesture-aware Interactive Machine Teaching with In-situ Object Annotations,” by Zhongyi Zhou and Koji Yatani of The University of Tokyo, presents a system that enables interactive machine teaching using gestures and in-situ object annotations.

Reconfigurable Elastic Metamaterials

  • Willa Yunqi Yang: Carnegie Mellon University; Yumeng Zhuang: Carnegie Mellon University; Luke Andre Darcy: Carnegie Mellon University; Grace M Liu: Carnegie Mellon University; Alexandra Ion: Carnegie Mellon University

The authors of this paper, Willa Yunqi Yang, Yumeng Zhuang, Luke Andre Darcy, Grace M Liu, and Alexandra Ion, are from Carnegie Mellon University. Their research is about reconfigurable elastic metamaterials.

MetamorphX: An Ungrounded 3-DoF Moment Display that Changes its Physical Properties through Rotational Impedance Control

  • Takeru Hashimoto: The University of Tokyo; Shigeo Yoshida: The University of Tokyo; Takuji Narumi: The University of Tokyo

  • narumin lab
  • 🇯🇵
  • Interesting.

The authors of this paper are Takeru Hashimoto, Shigeo Yoshida, and Takuji Narumi from The University of Tokyo. The research is conducted in the narumin lab. The paper introduces MetamorphX, a 3-DoF moment display that can change its physical properties through rotational impedance control. The authors describe the design and implementation of MetamorphX.

AUIT – the Adaptive User Interfaces Toolkit for Designing XR Applications

  • João Marcelo Evangelista Belo: Aarhus University; Mathias N. Lystbæk: Aarhus University; Anna Maria Feit: Saarland University, Saarland Informatics Campus; Ken Pfeuffer: Aarhus University; Peter Kán: TU Wien, Aarhus University; Antti Oulasvirta: Aalto University; Kaj Grønbæk: Aarhus University

  • A Danish university.

The authors of this paper are João Marcelo Evangelista Belo, Mathias N. Lystbæk, Ken Pfeuffer, and Kaj Grønbæk from Aarhus University, Anna Maria Feit from Saarland University, Saarland Informatics Campus, Peter Kán from TU Wien and Aarhus University, and Antti Oulasvirta from Aalto University. The research introduces AUIT, the Adaptive User Interfaces Toolkit, which is used for designing XR (Extended Reality) applications. The authors describe the features and capabilities of AUIT.

Fibercuit: Prototyping High-Resolution Flexible and Kirigami Circuits with a Fiber Laser Engraver

  • Zeyu Yan: University Of Maryland; Anup Sathya: University of Maryland; Sahra Yusuf: George Mason University; Jyh-Ming Lien: George Mason University; Huaishu Peng: University of Maryland

The authors of this paper are Zeyu Yan, Anup Sathya, and Huaishu Peng from the University of Maryland, and Sahra Yusuf and Jyh-Ming Lien from George Mason University. Their research focuses on Fibercuit, a method for prototyping high-resolution flexible and kirigami circuits using a fiber laser engraver. The authors discuss the design and fabrication process of Fibercuit.

INTENT: Interactive Tensor Transformation Synthesis

  • Zhanhui Zhou: University of Michigan; Man To Tang: Purdue University; Qiping Pan: University of Michigan; Shangyin Tan: Purdue University; Xinyu Wang: University of Michigan; Tianyi Zhang: Purdue University

The authors of this paper are Zhanhui Zhou, Qiping Pan, and Xinyu Wang from the University of Michigan, and Man To Tang, Shangyin Tan, and Tianyi Zhang from Purdue University. Their research introduces INTENT, a system for interactive tensor transformation synthesis. The authors describe the design and implementation of INTENT.

Automated Filament Inking for Multi-color FFF 3D Printing

  • Eammon Littler: Dartmouth College; Bo Zhu: Dartmouth College; Wojciech Jarosz: Dartmouth College

The authors of this paper are Eammon Littler, Bo Zhu, and Wojciech Jarosz from Dartmouth College. Their research focuses on automated filament inking for multi-color FFF (Fused Filament Fabrication) 3D printing. The authors present a method for automatically changing the color of the filament during the printing process.

DeltaPen: A Device with Integrated High-Precision Translation and Rotation Sensing on Passive Surfaces

  • Guy Lüthi: ETH Zürich; Andreas Rene Fender: ETH Zürich; Christian Holz: ETH Zürich

The authors of this paper are Guy Lüthi, Andreas Rene Fender, and Christian Holz from ETH Zürich. Their research introduces DeltaPen, a device with integrated high-precision translation and rotation sensing on passive surfaces. The authors describe the design and capabilities of DeltaPen.

VRhook: A Data Collection Tool for VR Motion Sickness Research

  • Elliott Wen: The University of Auckland; Tharindu Indrajith Kaluarachchi: The University of Auckland; Shamane Siriwardhana: Auckland Bioengineering Institute, University Of Auckland; Vanessa Tang: University of Auckland; Mark Billinghurst: University of South Australia; Robert W. Lindeman: University of Canterbury; Richard Yao: Facebook; James Lin: Facebook; Suranga Nanayakkara: Department of Information Systems and Analytics, National University of Singapore

The authors of this paper are Elliott Wen, Tharindu Indrajith Kaluarachchi, Shamane Siriwardhana, and Vanessa Tang from The University of Auckland, Mark Billinghurst from the University of South Australia, Robert W. Lindeman from the University of Canterbury, Richard Yao and James Lin from Facebook, and Suranga Nanayakkara from the Department of Information Systems and Analytics, National University of Singapore. Their research presents VRhook, a data collection tool for VR motion sickness research. The authors discuss the design and implementation of VRhook.

PassengXR: A Low Cost Platform for Any-Car, Multi-User, Motion-Based Passenger XR Experiences

  • Mark McGill: University of Glasgow; Graham Wilson: University of Glasgow; Daniel Medeiros: University of Glasgow; Stephen Anthony Brewster: University of Glasgow

The authors of this paper are Mark McGill, Graham Wilson, Daniel Medeiros, and Stephen Anthony Brewster from the University of Glasgow. Their research introduces PassengXR, a low-cost platform for any-car, multi-user, motion-based passenger XR (Extended Reality) experiences. The authors describe the design and development of PassengXR.

NFCStack: Identifiable Physical Building Blocks that Support Concurrent Construction and Frictionless Interaction

  • Chi-Jung Lee: National Taiwan University; Rong-Hao Liang: Eindhoven University of Technology; Ling-Chien Yang: National Taiwan University; Chi-Huan Chiang: National Taiwan University; Te-Yen Wu: Dartmouth College; Bing-Yu Chen: National Taiwan University

The authors of this paper are Chi-Jung Lee, Ling-Chien Yang, Chi-Huan Chiang, and Bing-Yu Chen from National Taiwan University, Rong-Hao Liang from Eindhoven University of Technology, and Te-Yen Wu from Dartmouth College. Their research introduces NFCStack, identifiable physical building blocks that support concurrent construction and frictionless interaction. The authors discuss the design and implementation of NFCStack.

Exploring the Learnability of Program Synthesizers by Novice Programmers

  • Dhanya Jayagopal: University of California, Berkeley; Justin Lubin: University of California, Berkeley; Sarah E. Chasins: University of California, Berkeley

Beyond Text Generation: Supporting Writers with Continuous Automatic Text Summaries

Authors: Hai Dang, Karim Benharrak, Florian Lehmann, Daniel Buschek (University of Bayreuth)

This paper explores the use of continuous automatic text summaries to support writers. Rather than generating text for the writer, the system continuously summarizes the evolving draft. The findings suggest that such summaries can also support writers in reading and revising their own drafts.
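
To make the interaction concrete, here is a toy stand-in (my own frequency-based extractive scorer, not the paper’s neural summarizer) that a continuous-summary UI could re-run on every edit and display beside the draft:

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Tiny extractive summarizer: rank sentences by average word frequency.

    Splits on sentence-ending punctuation, scores each sentence by how
    frequent its words are across the whole draft, and returns the top ones.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    return sorted(sentences, key=score, reverse=True)[:n_sentences]

draft = "Cats are great. Cats love cats. Dogs bark."
print(summarize(draft))  # the sentence densest in frequent words wins
```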

DEEP: 3D Gaze Pointing in Virtual Reality Leveraging Eyelid Movement

Authors: Xin Yi, Leping Qiu, Wenjing Tang, Yehan Fan, Hewu Li, Yuanchun Shi (Tsinghua University)

This paper presents DEEP, a system that enables 3D gaze pointing in virtual reality by leveraging eyelid movement. The authors demonstrate the effectiveness of their system through experiments.

Computational Design of Active Kinesthetic Garments

Authors: Velko Vechev, Ronan J Hinchet, Stelian Coros, Bernhard Thomaszewski, Otmar Hilliges (ETH Zurich)

This paper introduces a computational design approach for active kinesthetic garments. The authors propose a method that allows for the design and optimization of garments that provide kinesthetic feedback to the wearer.

TipTrap: A Co-located Direct Manipulation Technique for Acoustically Levitated Content

Authors: Eimontas Jankauskis (University College London), Sonia Elizondo (Universidad Pública de Navarra), Roberto A Montano Murillo (University College London, Ultraleap), Asier Marzo (Universidad Pública de Navarra), Diego Martinez Plasencia (University College London)

This paper presents TipTrap, a co-located direct manipulation technique for acoustically levitated content. The authors demonstrate the effectiveness of their technique through user studies.

Synthesis-Assisted Video Prototyping From a Document

Authors: Peggy Chi, Tao Dong, Christian Frueh, Brian Colonna, Vivek Kwatra, Irfan Essa (Google Research)

This paper introduces a synthesis-assisted video prototyping method that allows for the creation of videos from documents. The authors demonstrate the effectiveness of their approach through experiments.

ELAXO: Rendering Versatile Resistive Force Feedback for Fingers Grasping and Twisting

Authors: Zhong-Yi Zhang, Hong-Xian Chen, Shih-Hao Wang, Hsin-Ruey Tsai (National Chengchi University)

This paper presents ELAXO, a system that renders versatile resistive force feedback for fingers during grasping and twisting. The authors demonstrate the effectiveness of their system through experiments.

Scrapbook: Screenshot-Based Bookmarks for Effective Digital Resource Curation across Applications

Authors: Donghan Hu, Sang Won Lee (Virginia Tech)

This paper introduces Scrapbook, a system that uses screenshot-based bookmarks for effective digital resource curation across applications. The authors demonstrate the effectiveness of their system through user studies.

RemoteLab: Virtual Reality Remote Study Toolkit

Authors: Jaewook Lee (University of Washington), Raahul Natarrajan (Vanderbilt University), Sebastian S. Rodriguez (University of Illinois at Urbana-Champaign), Payod Panda (Microsoft Research), Eyal Ofek (Microsoft Research)

This paper presents RemoteLab, a virtual reality remote study toolkit. The authors demonstrate the effectiveness of their toolkit through experiments.

Record Once, Post Everywhere: Automatic Shortening of Audio Stories for Social Media

Authors: Bryan Wang (University of Toronto), Zeyu Jin, Gautham Mysore (Adobe Research)

This paper introduces a method for automatically shortening audio stories for social media. The authors demonstrate the effectiveness of their approach through experiments.

Scholastic: Graphical Human-AI Collaboration for Inductive and Interpretive Text Analysis

Authors: Matt-Heun Hong, Lauren A. Marsh, Jessica L. Feuston, Janet Ruppert, Jed R. Brubaker (University of Colorado Boulder), Danielle Albers Szafir (University of North Carolina at Chapel Hill)

This paper presents Scholastic, a graphical human-AI collaboration system for inductive and interpretive text analysis. The authors demonstrate the effectiveness of their system through user studies.

Integrating Real-World Distractions into Virtual Reality

Authors: Yujie Tao, Pedro Lopes (University of Chicago)

This paper explores the integration of real-world distractions into virtual reality. The authors discuss the potential benefits and challenges of this approach.

Phrase-Gesture Typing on Smartphones

Authors: Zheer Xu (Dartmouth College), Yankang Meng (Huazhong University of Science and Technology), Xiaojun Bi (Stony Brook University), Xing-Dong Yang (Simon Fraser University)

This paper introduces phrase-gesture typing on smartphones. The authors propose a method that combines typing and gesture input for improved text entry on smartphones.

ARDW: An Augmented Reality Workbench for Printed Circuit Board Debugging

Authors: Ishan Chatterjee, Tadeusz Pforte (University of Washington), Aspen Tng (Human-Computer Interaction + Design), Farshid Salemi Parizi (University of Washington), Chaoran Chen (Carnegie Mellon University), Shwetak Patel (University of Washington)

This paper presents ARDW, an augmented reality workbench for printed circuit board debugging. The authors demonstrate the effectiveness of their system through experiments.

DualVoice: Speech Interaction that Discriminates between Normal and Whispered Voice Input

Authors: Jun Rekimoto (The University of Tokyo, Sony CSL Kyoto)

This paper introduces DualVoice, a speech interaction system that discriminates between normal and whispered voice input. The authors demonstrate the effectiveness of their system through experiments.

RealityLens: Designing a User Interface for Blending Customized Physical World View with Virtual Reality

Authors: Chiu-Hsuan Wang (National Yang Ming Chiao Tung University), Bing-Yu Chen (National Taiwan University), Liwei Chan (National Yang Ming Chiao Tung University)

This paper discusses the design of RealityLens, a user interface that blends customized physical world view with virtual reality. The authors explore different design possibilities and considerations.

Seeing our Blind Spots: Smart Glasses-based Simulation to Increase Design Students Awareness of Visual Impairment

Authors: Qing Zhang, Giulia Barbareschi, Juling Li, Yun Suen Pai, Kai Kunze (Keio University), Yifei Huang (The University of Tokyo), Jamie A Ward (Goldsmiths University of London)

This paper presents a smart glasses-based simulation to increase design students’ awareness of visual impairment. The authors demonstrate the effectiveness of their simulation through user studies.

SenSequins: Smart Textile Using 3D Printed Conductive Sequins

Authors: Hua Ma, Junichi Yamaoka (Keio University)

This paper introduces SenSequins, a smart textile that uses 3D printed conductive sequins. The authors demonstrate the effectiveness of their textile through experiments.

Breathing Life Into Biomechanical User Models

Authors: Aleksi Ikkala (Aalto University), Florian Fischer, Markus Klar, Miroslav Bachinski, Arthur Fleig, Andrew Howes (University of Bayreuth), Perttu Hämäläinen (Aalto University), Jörg Müller (University of Bayreuth), Roderick Murray-Smith (University of Glasgow), Antti Oulasvirta (Aalto University)

This paper explores the concept of breathing life into biomechanical user models. The authors propose a method that allows for more realistic and dynamic user models in human-computer interaction.

Photographic Lighting Design with Photographer-in-the-Loop Bayesian Optimization

Authors: Kenta Yamamoto (University of Tsukuba), Yuki Koyama (National Institute of Advanced Industrial Science and Technology (AIST)), Yoichi Ochiai (University of Tsukuba)

This paper presents a photographic lighting design method that involves the photographer in the loop using Bayesian optimization. The authors demonstrate the effectiveness of their method through experiments.
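
A minimal sketch of the loop’s core step (my assumptions: a single 1-D lighting parameter, a simple Gaussian-process surrogate with an RBF kernel, and expected-improvement acquisition — not the authors’ implementation): given the photographer’s ratings so far, pick the next candidate to show them.

```python
import math
import numpy as np

def rbf(a, b, length=0.25):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def suggest_next(x_obs, y_obs, grid):
    """One Bayesian-optimization step: maximize expected improvement (EI).

    x_obs/y_obs: lighting parameters tried so far and the photographer's
    ratings of each rendered result. Returns the grid point with highest EI
    under a noise-free GP posterior (unit prior variance assumed).
    """
    K = rbf(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))
    Ks = rbf(grid, x_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    best = y_obs.max()
    z = (mu - best) / sigma
    pdf = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    ei = (mu - best) * cdf + sigma * pdf
    return grid[int(np.argmax(ei))]

x = np.array([0.1, 0.5, 0.9])   # lighting intensities already rated
y = np.array([0.2, 0.8, 0.3])   # photographer's scores for each
nxt = suggest_next(x, y, np.linspace(0, 1, 101))
```

In the real system the "rating" step is the human in the loop: the photographer compares renders, and the optimizer uses those judgments as the objective.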

SemanticOn: Specifying Content-Based Semantic Conditions for Web Automation Programs

Authors: Kevin Pu, Rainey Fu, Yan Chen, Tovi Grossman (University of Toronto), Rui Dong, Xinyu Wang (University of Michigan)

  • Andrew Kuznetsov, Joseph Chee Chang, Nathan Hahn, Napol Rachatasumrit, Bradley Breneisen, Julina Coupland, and Aniket Kittur are affiliated with Carnegie Mellon University.

  • The US Army is mentioned as being powerful.
  • The paper “spaceR: Knitting Ready-Made, Tactile, and Highly Responsive Spacer-Fabric Force Sensors for Continuous Input” is authored by Roland Aigner, Mira Alida Haberfellner, and Michael Haller.
  • The paper “Flaticulation: Laser Cutting Joints with Articulated Angles” is authored by Chiao Fang, Vivian Hsinyueh Chan, and Lung-Pan Cheng from National Taiwan University.
  • The paper “InterWeave: Presenting Search Suggestions in Context Scaffolds Information Search and Synthesis” is authored by Srishti Palani, Yingyi Zhou, Sheldon Zhu, and Steven P. Dow from the University of California, San Diego.
  • The paper “Mixels: Fabricating Interfaces using Programmable Magnetic Pixels” is authored by Martin Nisser, Yashaswini Makaram, Lucian Covarrubias, Amadou Yaye Bah, Faraz Faruqi, Ryo Suzuki, and Stefanie Mueller.
  • The paper “Flexel: A Modular Floor Interface for Room-Scale Tactile Sensing” is authored by Takatoshi Yoshida, Narin Okazaki, Ken Takaki, Masaharu Hirose, Shingo Kitagawa, and Masahiko Inami from the University of Tokyo.
  • The paper “PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data” is authored by Venkatesh Potluri, John R Thompson, James Devine, Bongshin Lee, Nora Morsi, Peli De Halleux, Steve Hodges, and Jennifer Mankoff.
  • The paper “RIDS: Implicit Detection of A Selection Gesture Using Hand Motion Dynamics During Freehand Pointing in Virtual Reality” is authored by Ting Zhang, Zhenhong Hu, Aakar Gupta, Chi-Hao Wu, Hrvoje Benko, and Tanya R. Jonker.
  • The paper “RealityTalk: Real-time Speech-driven Augmented Presentation for AR Live Storytelling” is authored by Jian Liao, Adnan Karim, Shivesh Singh Jadon, Rubaiat Habib Kazi, and Ryo Suzuki.
  • The paper “CrossA11y: Identifying Video Accessibility Issues via Cross-modal Grounding” is authored by Xingyu Bruce Liu, Ruolin Wang, Dingzeyu Li, Xiang Anthony Chen, and Amy Pavel.
  • The paper “Summarizing Sets of Related ML-Driven Recommendations for Improving File Management in Cloud Storage” is authored by Will Brackenbury, Kyle Chard, Aaron Elmore, and Blase Ur.
  • The paper “MuscleRehab: Improving Unsupervised Physical Rehabilitation by Monitoring and Visualizing Muscle Engagement” is authored by Junyi Zhu, Yuxuan Lei, Aashini Shah, Gila Schein, Hamid Ghaednia, Joseph H Schwab, Casper Harteveld, and Stefanie Mueller.
  • The paper “Wikxhibit: Using HTML and Wikidata to Author Applications that Link Data Across the Web” is authored by Tarfah Alrashed, Lea Verou, and David R Karger.
  • The paper “FLEX-SDK: An Open-Source Software Development Kit for Creating Social Robots” is authored by Patricia Alves-Oliveira, Kai Mihata, Raida Karim, Elin A. Bjorling, and Maya Cakmak.
  • The paper “Bayesian Hierarchical Pointing Models” is authored by Hang Zhao, Sophia Gu, Chun Yu, and Xiaojun Bi.
  • The paper “SleepGuru: Personalized Sleep Planning System for Real-life Actionability and Negotiability” is authored by Jungeun Lee, Sungnam Kim, Minki Cheon, Hyojin Ju, JaeEun Lee, and Inseok Hwang.
  • The paper “X-Bridges: Designing Tunable Bridges to Enrich 3D Printed Objects’ Deformation and Stiffness” is authored by Lingyun Sun, Jiaji Li, Junzhe Ji, Deying Pan, Mingming Li, Kuangqi Zhu, Yitao Fan, Yue Yang, Ye Tao, and Guanyun Wang.
  • The papers “TickleFoot: Design, Development and Evaluation of a Novel Foot-tickling Mechanism that Can Evoke Laughter” and “ANISMA: A Prototyping Toolkit to Explore Haptic Skin Deformation Applications Using Shape-Memory Alloys” were accepted as TOCHI papers.