iSphere: A free-hand 3D modeling interface

Mohamed Imran, Noor Mohamed

Sign Language Synthesis and Interaction, 66123 Saarbrücken, Germany
http://slsi.dfki.de
Abstract. Integrating a high-level 3D modeling interface into free-hand sketching reduces the human cognitive load of 3D creation. Modern CAD-based interfaces restrict interaction to low-level commands and thereby create a psychological gap. Here we develop iSphere, a device with 24 degrees of freedom, to bypass the mental load of low-level commands. iSphere is an intuitive device embedded with 12 capacitive sensors that enables object design using a top-down approach. The interface uses simple Push and Pull commands to interact with the user. We believe that iSphere can save considerable time by bypassing traditional mouse-and-keyboard modeling. Experimental results indicate that novices in 3D modeling learn faster with the help of iSphere. iSphere is designed to minimize the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task. As iSphere still lacks reliability in its input mechanisms, this paper also suggests improved algorithms for sensor control and novel methods to create 3D models.

Keywords: Degrees of Freedom, Human-Computer Interaction, User Interface, Input Device, Proximity Sensing
1 Introduction
For designers, free-hand sketching is a quick method to demonstrate an idea or principle. A sketch expresses an idea directly and interactively. Free-hand sketching is a common way to model 2D objects, but for 3D modeling it becomes complex and unintuitive to visualize. Recent developments in computer-aided design (CAD) address this problem, but they involve series of low-level commands and mode-switching operations to model a 3D object. This introduces a well-known problem: novices must learn the commands, and it takes months to become expert enough to create 3D models intuitively.

Past studies show that traditional CAD systems push designers toward a bottom-up approach [1]. This approach tends to create a complex design cycle, as users must remember low-level commands while modeling. These mode-switching operations disturb the human thought process [2], since additional mental load is imposed to build extra connections between representations [3]. CAD systems eliminate direct interaction, creating a gap between realistic interaction and low-level commands. We asked whether there is a better way to develop
an input interface to manipulate 3D objects effectively and reduce our cognitive load. This question led to our goal of developing an intuitive 3D user input interface: iSphere.

The aim of this research was to develop a high-level modeling system that can reduce human cognitive load in 3D creation. It creates a new dimension for interacting with 3D models intuitively, projecting the designer's idea into a 3D model instantly, as iSphere manipulates 3D objects in a spatial way. Simply put, it is an input device that lets us focus on what is in our mind and build 3D models. As shown in Fig. 1, iSphere models an object through this spatial method.
Fig. 1. iSphere:A free-hand 3D modeling interface
We tested our hypothesis by conducting a study comparing the performance of command-based interfaces and iSphere. We claim that our approach to developing a new 3D input interface is better than prior work [5-8], as it offers a novel '3D input interface'.
2 Interactive Techniques
iSphere plays a physical role in controlling and shaping 3D models. It gives a tangible modeling experience compared with routine modeling via mouse and keyboard, which involves tedious mode-switching activities. iSphere enables a rich play-and-build environment that makes 3D modeling feel like 'modeling a piece of clay'. In contrast, routine modeling is disruptive, as users stumble in their thought process trying to remember low-level commands. iSphere offers bi-manual interaction, namely Pull and Push actions. Bi-manual interaction is used to avoid heavy mental activity and to shape 3D models interactively. It saves designers time, letting them create live 3D models on the fly
instead of forming shapes with low-level machine commands. The well-known Click and Select actions require an intense amount of mode-switching. Hence, command-based manipulation is a non-interactive, feedback-poor way to model objects.

Objects modeled using 2D representations fail to capture all spatial characteristics. To avoid this drawback, designers must couple their mental view and the visual representation at the same time, which is non-intuitive and takes more mental activity. 2D modeling also disadvantages design outcomes at an early stage of modeling, whereas iSphere enhances 3D modeling in addition to the 2D representation.
Fig. 2. Interactive methods using iSphere
Fig. 2 shows how iSphere handles Z-axis manipulation through different hand actions over its surface. From the designer's point of view, iSphere is a dummy object for manipulating 3D geometry, but it cleverly maps the gestures. Playing with a physical object enables real interaction.
3 Play and Build
iSphere follows top-down modeling, which gives users the freedom to play around and build 3D scenes. Human hand actions are mapped to modeling commands. The input interface is a 6″ × 6″ × 6″ dodecahedron (with 12 faces) that acts as the only physical medium. Each face of the dodecahedron has a capacitive electrode, which detects the motion of human hands. The iSphere software architecture maps
the input onto a high-level object, which becomes the 3D model on the screen. iSphere is proactive: every facet is controlled, giving 24 degrees of freedom for better manipulation.

The device works as a hand-gesture detector by capturing two kinds of hand actions: Pull and Push. A Push action is triggered when the hands are less than 1 inch from iSphere; it is also called a denting action. Push serves as the press-button-like command of routine modeling, and this action is critical for modeling objects precisely. A Pull action occurs when the hands are more than one inch away from iSphere. Pull reflects the sensitivity of the device, as the capacitive sensors cover 1 to 6 inches. All gestures up to 6 inches are detected; this range is internally calibrated into discrete units, and iSphere measures up to 8 units. Interpreting hand actions as high-level modeling is a combinational action of aggregating trivial modeling commands.
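The thresholds above can be expressed as a simple per-face classifier. The following is a minimal sketch in Python; the function names, constant names, and rounding scheme are our illustrative assumptions, not the authors' implementation:

```python
# Sketch of per-face gesture classification from a calibrated hand
# distance, following the thresholds described in the text.

PUSH_MAX_IN = 1.0   # hands closer than 1 inch trigger Push ("denting")
PULL_MAX_IN = 6.0   # upper limit of the capacitive sensing range

def classify_gesture(distance_in: float) -> str:
    """Map one face's hand distance (inches) to a gesture label."""
    if distance_in < PUSH_MAX_IN:
        return "push"
    if distance_in <= PULL_MAX_IN:
        return "pull"
    return "idle"  # beyond the sensing range

def to_units(distance_in: float, max_units: int = 8) -> int:
    """Calibrate the 0-6 inch range into discrete units (up to 8)."""
    clamped = min(max(distance_in, 0.0), PULL_MAX_IN)
    return round(clamped / PULL_MAX_IN * max_units)
```

For example, a hand hovering 3 inches over a face would be classified as a Pull at 4 of 8 units.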
Fig. 3. iSphere as a gesture detector
Fig. 3 shows that the Push action has two further sub-actions: Nearby and Touch. The Nearby action is the first-level transition from Pull to Push and indicates the current state of the push. The Touch action is enabled for smoothing; consider it a tuning method, as it gently smooths the model.

As an example, Fig. 5 gives the possible states observed in iSphere while modeling an apple from the pre-loaded spherical object shown in Fig. 4.
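The Pull, Nearby, Push, and Touch states can be seen as distance bands over one face. A minimal sketch follows; only the 1-inch Push boundary and the 6-inch range come from the text, while the inner thresholds are our assumptions:

```python
# Distance-banded state resolver for one face, refining Push into its
# Nearby and Touch sub-actions. Thresholds inside the 1-inch Push band
# are illustrative assumptions.

def resolve_state(distance_in: float) -> str:
    if distance_in <= 0.0:
        return "touch"   # contact: gentle smoothing of the model
    if distance_in < 0.5:
        return "push"    # denting action for precise shaping
    if distance_in < 1.0:
        return "nearby"  # first-level transition from Pull to Push
    if distance_in <= 6.0:
        return "pull"
    return "idle"        # hand outside the sensing range
```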
Fig. 4. Making an apple using iSphere
Fig. 5. State diagram for making an apple using iSphere
4 Implementation

4.1 Hardware Implementation
The hardware essentially captures all human hand actions. Constructing the right shape for iSphere was a challenge; we built a foldable dodecahedron from acrylic, each face a pentagon. Every face can sense hands up to eight units above its surface. Capacitive sensors mounted on the iSphere measure these physical actions. Shunt-mode operation is used to detect motion. Briefly, shunt-mode capacitive sensing works between a transmit and a receive electrode: the performer's hand movement changes the electric field between them, so the current measured at the receive electrode correlates with the change in the electric field. In shunt mode the distance to the second electrode is known [9] (in our case, less than six inches). The capacitive sensors detect the proximity of hands in twelve different directions, corresponding to the twelve faces.
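As an illustration, the inverse mapping from received current back to hand proximity might look like the following. This is a toy linear model, not the authors' calibration; the baseline current and range constants are assumptions:

```python
# Toy inversion of shunt-mode sensing: the closer the hand, the more
# field lines it shunts to ground, so the received current drops. A
# linear relation between current and distance is assumed for clarity.

BASELINE_UA = 10.0  # assumed received current with no hand (microamps)
RANGE_IN = 6.0      # sensing range of one face (inches)

def estimate_distance(current_ua: float) -> float:
    """Estimate hand distance (inches) from the received current."""
    frac = min(max(current_ua / BASELINE_UA, 0.0), 1.0)
    return frac * RANGE_IN
```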
Fig. 6. iSphere: Hardware Implementation
Fig. 6 shows that the signal received from the capacitive sensors is level-adjusted by a signal-conditioning circuit (amplifier, switch, and low-pass filter). The digitized input is then received by a PIC microcontroller, which interfaces the incoming digital input to the software module.
Fig. 7. Software implementation
4.2 Software Implementation
As shown in Fig. 7, the iSphere hardware is connected to the microcontroller through a serial interface (RS232). The software maps the input signals onto a meta-sphere, which is in turn mapped to the target 3D object. We use the Alias-Wavefront Maya 6.0 C++ API (Application Programming Interface) to implement the iSphere plug-in, and 3D manipulation is realized with MEL (Maya Embedded Language), which modifies object functions by drawing relationships from the incoming data. The system architecture is flexible for future upgrades, and new functions can easily be added. Currently, iSphere manipulates mesh-based 3D models in Alias-Wavefront Maya, 3DS Max, or Rhino.
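The paper does not specify the serial wire format, but a plausible sketch of the host-side frame parsing, assuming a simple ASCII frame of twelve comma-separated readings (one per face), is:

```python
# Hypothetical parser for one RS232 frame from the PIC microcontroller.
# The ASCII comma-separated format is our assumption; the paper does
# not describe the wire protocol.

NUM_FACES = 12

def parse_frame(line: bytes) -> list[int]:
    """Decode one serial line into 12 per-face sensor readings."""
    fields = line.decode("ascii").strip().split(",")
    if len(fields) != NUM_FACES:
        raise ValueError(f"expected {NUM_FACES} readings, got {len(fields)}")
    return [int(f) for f in fields]
```

With a library such as pyserial, each frame would be read from the port (e.g. via `readline()`) and handed to the plug-in, which translates the per-face readings into mesh manipulations.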
5 Experiment
An experiment was designed to capture potential problems that may arise in evaluation. The aim of our experiment was to observe how experts and novices accomplish a task using iSphere. We claim that both experts and novices take the same time to model a shape using iSphere; our hypothesis seeks to eliminate the gap between novices and experts through an intuitive modeling interface.

5.1 Study Design
Six volunteers were chosen for the experiment: two had intermediate experience and four had no prior experience (novices). Their median age was 22. The KLM-GOMS [10] metric was used to calculate the performance of routine tasks in Maya using keyboard and mouse. The final evaluation compares the routine-task times given by KLM-GOMS
versus the times taken by novices to accomplish the same tasks using iSphere. Two groups, novices and experts, were formed to model a 3D shape using iSphere. Before the study, we explained how to use iSphere with a demonstration; this pre-experiment session took 30 minutes.

5.2 Experimental Method
The set-up included a desktop with the software interface pre-loaded, so that subjects could start the given tasks directly. A standard LCD monitor was used to view the shape of the modified object; because LCD panels have poor wide-angle viewing, subjects were placed at a defined distance from the screen. The experiment was arranged to maximize proximity sensing, so that the full capacitive range could be achieved. The iSphere was placed on a soft-foam base to cushion the user's hands while modeling. Rendering was done in shaded mode to improve 3D visualization.

5.3 Experimental Task
To cover all cases, four tasks were designed: 1. Pull; 2. Push; 3. Making an apple; 4. Making any object within 5 minutes. Initially, the screen was loaded with the default 3D sphere, and the subject was asked to perform a Pull action up to 3 units, followed consecutively by a Push action. The third task was to make an apple, which involves sequences of Push and Pull actions. The final task was free-hand modeling, where the subjects were asked to model any shape in their mind within 5 minutes.

5.4 Experimental Analysis
The analysis compares the time taken to accomplish each task with mouse and keyboard, as calculated by KLM-GOMS, against the time taken by novices to model the same shape using iSphere.

As seen in Fig. 8, much of the subjects' time went to the mental-preparation operator, which we attribute to users recalling the learnt low-level commands. The table shows that each cursor movement took about 1 to 1.5 seconds. A mouse click took little time within the whole time span, yet clicking was the most repeated activity during the experiment: each click took around 0.2 seconds, and 15 clicks were involved in total. All operations, including mental preparation and mouse clicks, totaled 10 seconds for the first task (Pull) and 20 seconds for the Push task.

5.5 Experimental Results
Fig. 8. GOMS analysis

Fig. 9. iSphere vs. routine tasks

Using iSphere, all subjects learned to model an object by controlling the different facets. Fig. 9 shows that the time a novice needed for the Push and Pull tasks was about 8.6 seconds and 12.5 seconds respectively. On average, 25% of the time was saved during the Pull task and 75% during the Push task. This makes a strong case for iSphere, as it saves considerable time compared with routine mouse-and-keyboard 3D modeling. The results show that iSphere gives users more reliability and freedom to model using several combinations of selection, direction, and commands. The novices also finished two of the tests in a shorter span of time than the intermediate users. Thus, our results clarify three aspects: iSphere is direct, since its control points manipulate the surface directly; it takes less time than routine modeling; and it is intuitive, since less mental preparation is involved.
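The KLM-GOMS baseline can be reproduced with a small operator-time calculator. The operator times below are standard KLM values; the operator sequence for any given Maya task is our assumption, not the authors' logged protocol:

```python
# Minimal KLM-GOMS calculator: total predicted time for a sequence of
# standard KLM operators.

KLM_TIMES = {
    "K": 0.2,    # keystroke / mouse click
    "P": 1.1,    # point with the mouse
    "M": 1.35,   # mental preparation
    "H": 0.4,    # home hands between keyboard and mouse
}

def klm_time(sequence: str) -> float:
    """Total predicted time (seconds) for a string of operators, e.g. 'MPK'."""
    return round(sum(KLM_TIMES[op] for op in sequence), 2)
```

For instance, one point-and-click preceded by mental preparation, `klm_time("MPK")`, predicts 2.65 seconds; chaining such fragments for a full task yields estimates of the kind reported in the GOMS analysis.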
6 Discussion
We found that a high-level 3D modeling interface can reduce low-level manipulation, since modeling becomes intuitive. Our experimental results suggest that modeling 3D objects with iSphere narrows the gap between novices and experts. As claimed, this paper argues that a free-hand 3D input modeling interface is a novel development in 3D input hardware.

The data show that iSphere can enhance the 3D modeling experience through natural hand gestures. It moves the 3D modeling environment from non-interactive to interactive, and it leads 3D designers to a new paradigm, from abstract commands to natural hand interaction. The thought process becomes more intuitive and direct; hence, learning becomes easier and novices can realize complex 3D objects.

Although iSphere is intuitive and direct, it cannot yet model objects accurately, and it lacks fidelity when 3D objects must be modeled in a minimal span of time. One possible way to improve speed and accuracy is to increase the sensitivity of the capacitive sensors, though confirmatory studies in sensor research are still needed. Readers may object that iSphere works only in specialized modes; nevertheless, each method performs better in a certain mode, and this points to unanswered questions and future directions in robust mapping of gestures into shapes and improved sensor-control algorithms.

Our research implies that an intuitive free-hand modeling interface like iSphere presents an effective way to express ideas directly, without intense mental activity. It aims to improve the interaction between users and computers. The whole design minimizes the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task.
References

1. Lee, C.-H. J., Hu, Y., Selker, T.: iSphere: A Free-Hand 3D Modeling Interface.
2. Bowman, D. A., Kruijff, E., LaViola, J. J.: 3D User Interfaces: Theory and Practice.
3. Bloom, et al. (translated by Shibuya, Fujita, and Kajita): Educational Assessment Method Handbook: Formative Assessment and Comprehensive Assessment of Subject Learning. Daiichi-hoki Shuppan, 1972.
4. Pashler, H.: Dual-task interference in simple tasks: data and theory. Psychological Bulletin 116(2), 220-244 (1994). doi:10.1037/0033-2909.116.2.220. PMID 7972591; Mayer, R., Moreno, R.: Nine Ways to Reduce Cognitive Load in Multimedia Learning.
5. Aish, R.: 3D input for CAAD systems. Computer-Aided Design, 66-70 (1979).
6. Ishii, H., Ullmer, B.: Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms. Proc. of CHI '97, ACM Press, 234-241 (1997).
7. Murakami, T., Nakajima, N.: Direct and Intuitive Input Device for 3-D Shape Deformation. Proc. of CHI '94, ACM Press, 465-470 (1994).
8. Rekimoto, J.: SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces. Proc. of CHI 2002, ACM Press, 113-120 (2002).
9. Capacitive Sensors: Technical Notes, http://sensorwiki.org/doku.php/sensors/capacitive
10. John, B., Kieras, D.: The GOMS Family of User Interface Analysis Techniques: Comparison and Contrast. ACM Transactions on Computer-Human Interaction 3(4), 320-351 (1996).