List 6 of the 10 specific sensory/perceptual systems discussed in the chapter.

What is Sensory Perception?

Perception is the use of our senses to gain an understanding of the world around us.


An individual or organism capable of processing stimuli in its environment is said to have sensory perception.

This processing is done through the coordination between sense organs and the brain. Hearing, vision, taste, smell, and touch are the five senses we possess. Sensory perception involves detecting, recognizing, characterizing and responding to stimuli.

There are five different kinds of stimuli: mechanical, chemical, electrical, light, and temperature.

The process of sensory perception begins when something in the real world stimulates our sense organs.

For instance, light reflecting from a surface stimulates our eyes, and the warmth emanating from a hot beverage stimulates our sense of touch.

This stimulus is then converted into a neural signal, which is sent to the brain.

A combination of stimuli, such as chemical, mechanical, electrical, or temperature, may cause a perception of pain. Similarly, stimuli of a given type may be perceived by more than one sense; chemical stimuli, for example, are perceived by both smell and taste.

Sensory perception tends to become weaker with the ageing process.

Introduction

Zhongzhi Shi, in Intelligence Science, 2021

1.3.3 Perceptual representation and intelligence

The perceptual system contains primarily visual, auditory, and kinesthetic input, that is, pictures, sounds, and feelings. There is also olfactory and gustatory input, that is, smells and tastes. Perceptual representation is a modeling approach that highlights the constructive, or generative, function of perception, or how perceptual processes construct a complete volumetric spatial world, complete with a copy of our own body at the center of that world. The representational strategy used by the brain is an analogical one; that is, objects and surfaces are represented in the brain not by an abstract symbolic code or by the activation of individual cells or groups of cells representing features detected in the visual field. Instead, objects are represented in the brain by constructing full spatial effigies of them that appear to us for all the world like the objects themselves, or at least so it seems to us only because we have never seen those objects in their raw form but only through our perceptual representations of them.

Perceptual intelligence refers to the ability to interact with the environment through various sensory organs, such as vision, hearing, touch, etc. The visual system gives the organism the ability of visual perception. The auditory system gives the organism the ability of auditory perception. Using research advances in big data and deep learning, machines have come closer and closer to the human level in perceptual intelligence.

URL: //www.sciencedirect.com/science/article/pii/B9780323853804000014

The Human in the Loop

William R. Sherman, Alan B. Craig, in Understanding Virtual Reality (Second Edition), 2018

Visual Illusions

The human perceptual system (in fact the whole brain) is designed for efficiency, and to achieve this, it “cheats.” It cuts corners where it can. Illusions reveal the brain’s shortcuts. For example, the “Pinna–Gregory Illusion” is an optical illusion whereby concentric circles made of small squares appear to be spiraling inward when those squares are tilted (Fig. 3-12) [Pinna and Gregory 2002]. We might conjecture that by seeing the sides of the squares as tangents to a line, at each locality we see the line moving inward, and so we perceive a line always moving inward—a spiral—where in fact no such line exists. In a similar way, the Zöllner illusion (Fig. 3-13) and the Café Wall Illusion (Fig. 3-14) make lines that are in fact parallel seem to bend toward each other.

Figure 3-12. The “Pinna–Gregory illusion” reveals how local aspects of visual perception imply overall structure that in this case does not exist—imaginary spirals seen where only circles exist.

Figure 3-13. The “Zöllner illusion” exhibits the visual perception wherein parallel lines look as though they are converging when short cross-hatches are added.

Figure 3-14. The “café wall illusion” is another example of where parallel lines appear to converge when an offset tile-pattern is applied between the lines.

Illusions then are misperceptions; misperceptions due to misleading sensations, or at least sensations that the brain is predisposed to interpret in a particular way. We know that by the time visual information gets to the brain it’s been preprocessed, and that some of the preprocessing involves looking for lines with particular orientations—small visual units from which an amalgamated perception can be derived when combined with many other small visual units.

Certainly, as virtual reality is about the creation of virtual worlds that we want to be (mis)perceived as real, we have an interest in creating illusion. Of course this desire is not limited to the realm of computer-generated stimuli, nor to the profession of magicians (illusionists); it extends to other artists as well. For the Santissima Sindone dome in Turin, Italy, the architect Guarini used geometric illusion, amplified by stone color and texture, to create the perception of a much taller structure than the reality [Meek 1988].

Context plays a big role in how we perceive visual images. That context may be as simple as directionality, or it may involve variations in surrounding shapes and colors. A very simple illusion can be seen with a grayscale vertical gradient inside a circle. When the darker end of the gradient is on the top, we perceive a concave impression in a surface, but when the darker end is on the bottom we see a raised convex bump (Fig. 3-15). The explanation for this is merely that we are accustomed to light shining from above, and using that “knowledge,” a concave hole would have a shadow at the top, whereas a convex bump would have a shadow beneath the protuberance. If you turn this book around while looking at Fig. 3-15, you will see that the circles that had appeared to be raised are now depressed, and vice versa.

Figure 3-15. The cues from shading provide the illusion of 3D form. Darkness implies shade from a light and our experience suggests the light is from above (because of our experience with the sun and normal room lighting), and thus a gradient from dark to light, top to bottom is seen as a depression, and a raised surface when the gradient is reversed. (Turn the book over and notice how the impressions reverse. In fact, turn the book sideways and you may be surprised to see what are now the two sides are a mirror image of each other.)
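
The shading demonstration in Fig. 3-15 is easy to reproduce. Below is a minimal, hypothetical sketch (not from the book) that renders two disks filled with a vertical gray gradient using matplotlib; flipping the gradient flips the concave/convex impression, just as turning the book does.

```python
# Hypothetical sketch reproducing the shaded-disk demonstration (not from the book).
import numpy as np
import matplotlib.pyplot as plt

def shaded_disk(dark_on_top: bool, size: int = 200) -> np.ndarray:
    """Return an image of a disk filled with a vertical gray gradient."""
    y, x = np.mgrid[0:size, 0:size]
    cx = cy = size / 2
    inside = (x - cx) ** 2 + (y - cy) ** 2 <= (size * 0.45) ** 2
    gradient = y / size                      # 0 (black) at the top, 1 (white) at the bottom
    if not dark_on_top:
        gradient = 1.0 - gradient            # flip so the dark end is at the bottom
    img = np.full((size, size), 0.5)         # mid-gray background
    img[inside] = gradient[inside]
    return img

fig, axes = plt.subplots(1, 2)
for ax, dark_top, label in zip(axes, (True, False), ("dark on top", "dark on bottom")):
    ax.imshow(shaded_disk(dark_top), cmap="gray", vmin=0, vmax=1)
    ax.set_title(label)                      # typically seen as a depression vs. a bump
    ax.axis("off")
plt.show()
```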

Context can also affect our color perception. Human perception often informs us of the relationship between stimuli rather than direct values—because absolute values are hard to determine. There’s always context. Visually, there are lighting conditions that affect how we see colors (Fig. 3-16). Proprioceptively, how much force it takes to lift an object might change based on the grip of the object or the lifter’s level of exhaustion. Sonically, if there is a lot of background noise, we may not be able to discern a precise pitch. (In a book it’s easier to demonstrate this visually.) Fig. 3-16 shows two pairs of diamonds that appear to be two shades of gray. Actually, that’s not true: Fig. 3-16 shows four diamonds, all the same color, the same shade of gray, but the differing backgrounds and the visual distractions (“noise”) lead our perception to infer that certain objects or shapes are “in shadow” and therefore must be brighter than the actual stimulus provides.

Figure 3-16. In the “snake illusion” the surrounding context affects how we perceive the four diamonds, which are all the same shade of gray.

On the flip side, color can also be the cause of illusion. In the Poggendorff illusion, an obscuring shape masks a pair of different-colored lines; one line terminates under the mask, and the other changes shade to something closer to the first, causing the single protruding segment to appear shifted toward alignment with the truncated segment (Fig. 3-17). In a different vein, the shade of an object may affect our distance perception of that object. In the aforementioned dome for the Cappella della Santissima Sindone (chapel of the Shroud of Turin), Guarino Guarini selected the stones to use at each level of the dome based on their color and texture to enhance his geometric illusion of a dome with extended height. Specifically, Guarini placed lighter, more polished stone lower in the dome (nearer the viewer), where it appears larger, and darker, less polished stone higher up, giving the impression that those stones are even farther away [Meek 1988][Evans 2000].

Figure 3-17. In the Poggendorff illusion a pair of nearby parallel lines occluded by a polygon under which one line ends and the other changes color to one closer to the terminated line causes the continuing line to seem to shift in the direction of the terminated line.

One final illusion related to vision to discuss is the illusion of “vection.” Vection is the false sense of motion caused by visual stimulation. Thus, this case is a cross-sensory illusion, and more accurately could be described as a vestibular illusion—and so indeed, we will discuss this further in the Vestibular Illusion section.

URL: //www.sciencedirect.com/science/article/pii/B9780128009659000039

Perceptual Constancy: Direct versus Constructivist Theories

J. Norman, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Stability and Constancy in Perception

The ability of our perceptual system to overcome the effects of changes in stimulation and maintain a veridical and stable percept is called perceptual constancy (Epstein 1977, Walsh and Kulikowski 1998). In discussing the constancies it is common to make a distinction between the distal stimulus and proximal stimulus. The distal stimulus is the actual physical stimulus, the physically objective dimensions of the viewed object. The proximal stimulus, in the case of vision, is the very image that falls on the retina. It changes with changes in position or lighting of the physical stimulus. Constancy can be defined as the ability to correctly perceive the distal stimulus; that is, the stable properties of objects and scenes, in spite of changes in the proximal stimulus.

One example of constancy is shape constancy, which refers to the ability of the visual system to ascertain the true shape of an object even when it is not viewed face on but slanted with respect to the observer. In an example of shape constancy the distal stimulus might be a door half open, while the proximal stimulus will be the trapezoid shape it projects on the retinas. Shape constancy in this case would be the perception of the distal rectangular shape of the door in spite of the fact that the proximal shape is far from rectangular. Another example of constancy is lightness constancy. Lightness refers to the perceived reflectance of an object; high reflectance is perceived as white and very low reflectance as black, with intermediate reflectances as various shades of gray. We are capable of perceiving the distal lightness of a surface in spite of the fact that the proximal amount of light reaching our eyes changes with changes in the amount of light illuminating that surface. For example, if we hold a piece of chalk in one hand and a piece of charcoal in the other, the chalk will be perceived as white and the charcoal as black. This will be true if we observe the two in a dimly lit room or in bright sunshine, in spite of the fact that the amount of light reaching the eyes from the charcoal in the sunshine might be greater than that from the chalk in the dim room lighting.

The perceptual constancy that has received the greatest amount of research attention is size constancy (Ross and Plug 1998). If the distance is not very great we usually perceive the distal size of objects in spite of the changes in their proximal size as their distance from us changes. The two theoretical approaches mentioned above, the constructivist and the direct, deal with size constancy differently. According to the constructivists, size constancy is achieved through some process whereby the perceptual system perceives the object's distance and then takes the distance into account and ‘corrects’ the proximal image to yield a true distal percept. The direct approach asserts that there is no need to involve the perception of distance in the perception of size. Instead, it claims that enough information exists in the visual environment to allow direct perception of size without the need for aid from additional ‘higher’ mental processes.
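
The constructivist account of size constancy (perceive the distance, then ‘correct’ the proximal image) reduces to simple geometry under a pinhole model in which the proximal stimulus is the visual angle an object subtends. A rough sketch with hypothetical numbers:

```python
# Hypothetical sketch: size constancy as "taking distance into account."
import math

def proximal_angle(distal_size_m: float, distance_m: float) -> float:
    """Visual angle (radians) subtended by an object: the proximal stimulus."""
    return 2 * math.atan(distal_size_m / (2 * distance_m))

def constructivist_size_estimate(angle_rad: float, perceived_distance_m: float) -> float:
    """Correct the proximal angle by the perceived distance to recover distal size."""
    return 2 * perceived_distance_m * math.tan(angle_rad / 2)

# A 2 m door viewed at 5 m and at 10 m: the proximal angle roughly halves,
# but correcting by the perceived distance recovers the same distal size.
for d in (5.0, 10.0):
    angle = proximal_angle(2.0, d)
    print(f"distance {d:4.1f} m  angle {math.degrees(angle):5.2f} deg  "
          f"estimated size {constructivist_size_estimate(angle, d):.2f} m")
```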

URL: //www.sciencedirect.com/science/article/pii/B0080430767014522

Graphic Design and Layout Bloopers

Jeff Johnson, in GUI Bloopers 2.0, 2008

People filter information

People miss information constantly. Our perceptual system filters out more than it lets in. That isn’t a bug; it’s a feature! If we didn’t work this way, we couldn’t function in this booming, buzzing, rapidly changing world. We’d be overloaded.

Millions of years of evolution designed us to ignore most of what is going on around us and to focus our attention on what is important. When our prehistoric ancestors were hunting in the East African veldt, what was important was what was moving and what looked different from the grassy background. It might be animals they regarded as food … or animals that regarded them as food.

In modern times, when a pilot scans cockpit displays, what is important is what is abnormal, what is changing, and how it is changing. When a business executive prepares a presentation, what is important is the presentation content and anything that seems like it will help her prepare her presentation on time. Everything else is irrelevant and is ignored.

URL: //www.sciencedirect.com/science/article/pii/B9780123706430500056

User/System Interface Design

Theo Mandel, in Encyclopedia of Information Systems, 2003

V.B.2. Reduce User's Memory Load

Capabilities and limitations of the human memory and perceptual systems were discussed earlier in this section. The interface should help users remember information while using the computer. We know that people are not good at remembering things, so programs should be designed with this in mind. Table III lists the design principles in this area.

Table III. Principles That Reduce Users' Memory Load

Reduce users' memory load | Keyword
Relieve short-term memory | Remember
Rely on recognition, not recall | Recognition
Provide visual cues | Inform
Provide defaults, undo, and redo | Forgiving
Provide interface shortcuts | Frequency
Promote an object-action syntax | Intuitive
Use real-world metaphors | Transfer
Use progressive disclosure | Context
Promote visual clarity | Organize

A sign that computer systems do not help users' memory is the use of external memory aids, such as sticky pads, calculators, reference books, frequently asked questions (FAQs), and sheets of paper. Often, application users must write information down on a piece of paper because they know they will need that information later in the program. Program elements such as undo and redo and clipboard actions such as cut, copy, and paste allow users to manipulate pieces of information needed in multiple places within an application and across applications.

Common information entered on on-line forms, such as name, address, and telephone number, should be remembered by the system once a user has entered it or once a customer record has been opened.

Interfaces support long-term memory retrieval by providing users with items to recognize rather than requiring them to recall information. It is much easier to browse a list and select an item than to remember the correct item and type it into a blank entry field. For example, sophisticated spell-checking techniques offer users a list of possible alternatives to select from for a misspelled word, rather than just identifying that a word is spelled incorrectly.
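
The spell-checking example (offering candidates to recognize rather than a blank field to recall into) can be sketched in a few lines with Python's standard difflib; the vocabulary here is hypothetical:

```python
# Minimal sketch of recognition over recall: suggest close matches for a misspelling.
import difflib

VOCABULARY = ["perception", "perceptual", "recognition", "recall", "retrieval"]

def suggest(misspelled: str, vocabulary=VOCABULARY, n=3):
    """Return up to n close matches for the user to recognize and pick from."""
    return difflib.get_close_matches(misspelled, vocabulary, n=n, cutoff=0.6)

print(suggest("percepsion"))   # e.g. ['perception', 'perceptual']
```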

It is critical to continuously show users where they are, what they are doing, and what they can do next. These visual feedback indicators provide the context for users to understand where they are and where they can go. Early hypertext techniques allowed users to navigate between pieces of information and documents, but users got lost and could not remember why they were at their current location and how they got there!

URL: //www.sciencedirect.com/science/article/pii/B0122272404001908

Humanoid Robots

David J. Bruemmer, Mark S. Swinson, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

VII.F Perception

Humans interact with continuously flowing, diverse stimulation. Likewise, humanoids need multimodal perceptual systems that can seamlessly integrate sensors. One way to do this is to allow sensors to continually compete for dominance. At the Electro Technical Laboratory in Japan, G. Cheng and Y. Kuniyoshi have developed a humanoid with 24 dof, joint receptors with encoders and temperature sensing. The humanoid uses separate processors for control of auditory, vision, motor output, and integration. The robot itself is lightweight and flexible, allowing it to interact comfortably and safely with humans. In a visual and auditory tracking task, the robot tracks a person by sight and/or sound while mimicking that person's upper-body motion. The focus of the work was to show that the robot could track people using a multiple sensory approach that is not task-specific, and does not need to switch between sensor modalities. The idea is that perceptual subsystems necessary for mimicry, tracking, vision, and auditory processing should be thought of as essential capabilities that must together contribute to high-utility human-like behavior. This said, humanoid roboticists generally agree that vision is the single most important sensing modality for enabling rich, human-like interactions with the environment. Of course, computer vision has long been a hard problem in itself. The main problem is that many factors are confounded into image data in a many-to-one mapping. For instance, how can a humanoid infer three-dimensional reality from a two-dimensional image? Furthermore, there is an amazing amount of data to be processed. For a long time, computer vision assumed that the goal was simply to acquire as much data about the environment as possible, and then apply “brute force” processing. This approach proved computationally intractable. Rather than view perceptual systems as passive receptors, which merely collect any and all data, we are beginning to create perceptual systems that can interact with humans and with the physical environment, actively structuring a perception of reality rather than just passively perceiving it.

Luiz-Marcos Garcia, Antonio A. F. Oliveira, Roderic A. Grupen, David S. Wheeler, and Andy H. Fagg use attentional mechanisms to focus a humanoid robot on visual areas of interest. On top of this capability, researchers have implemented a learning system that allows the robot to autonomously recognize and categorize the environmental elements it extracts. Robots must be equipped to exploit perceptual clues such as sound, movement, color intensity, or human body language (pointing, gazing, and so on). For rich sensor modalities such as vision, perception is as much a process of excluding input as receiving it.

Humans naturally find certain perceptual features interesting. Features such as color, motion, and face-like shapes are very likely to attract our attention. MIT has been working to create a variety of perceptual feature detectors that are particularly relevant to interacting with people and objects. These include low-level feature detectors attuned to quickly moving objects, highly saturated color, and colors representative of skin tones. The robot's attention is determined by a combination of low-level perceptual stimuli. The relative weightings of the stimuli are modulated by high-level behavior and motivational influences. For a task involving human interaction, the perceptual category “face” may be given higher priority than for a surveillance task where the robot must attend most closely to motion and color.

A sufficiently salient stimulus in any modality can supersede the robot's attention, just as a human watching a film might respond to sudden motion in the adjacent seat. MIT has built a number of intuitive arbitration rules into the system, such as that, all else being equal, larger objects are considered more salient than smaller ones. The goal is for the robot to be responsive to unexpected events, but also able to filter out superfluous ones. Otherwise the robot would become a slave to every whim of its environment. MIT has found that their attention model enables people to intuitively provide the right cues to direct the robot's attention. Actions such as shaking an object, moving closer to your intended listener, hand waving, and altering tone of voice all help the robot focus on appropriate aspects of its environment (Fig. 21).

FIGURE 21. Graphical representation of Kismet's attentional system. (MIT Artificial Intelligence Laboratory.)
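
The attention arbitration described above (low-level feature detectors combined under task-dependent weightings, with larger objects more salient, all else being equal) can be caricatured in a few lines. This is a hypothetical sketch, not the actual Kismet implementation; the feature names and weights are invented for illustration:

```python
# Hypothetical sketch of weighted attention arbitration (not the MIT/Kismet code).
FEATURES = ("motion", "color_saturation", "skin_tone", "size")

# Task-dependent weightings, modulated by high-level behavior and motivation.
TASK_WEIGHTS = {
    "human_interaction": {"motion": 0.2, "color_saturation": 0.1, "skin_tone": 0.5, "size": 0.2},
    "surveillance":      {"motion": 0.4, "color_saturation": 0.4, "skin_tone": 0.0, "size": 0.2},
}

def most_salient(stimuli, task):
    """Pick the stimulus with the highest weighted feature sum for the current task."""
    weights = TASK_WEIGHTS[task]
    return max(stimuli, key=lambda s: sum(weights[f] * s.get(f, 0.0) for f in FEATURES))

stimuli = [
    {"name": "waving hand", "motion": 0.9, "skin_tone": 0.8, "size": 0.3},
    {"name": "red ball",    "motion": 0.2, "color_saturation": 0.9, "size": 0.5},
]
print(most_salient(stimuli, "human_interaction")["name"])   # waving hand
print(most_salient(stimuli, "surveillance")["name"])        # red ball
```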

URL: //www.sciencedirect.com/science/article/pii/B0122274105003173

Does Animation Help Users Build Mental Maps of Spatial Information?

Benjamin B. Bederson, Angela Boltman, in The Craft of Information Visualization, 2003

1 Introduction

During the past decade, researchers have explored the use of animation in many aspects of user interfaces. In 1984, the Apple Macintosh used rudimentary animation when opening and closing icons. This kind of animation was used to provide a continuous transition from one state of the interface to another, and has become increasingly common, both in research and commercial user interfaces. Users commonly report that they prefer animation, and yet there has been very little research that attempts to understand how animation affects users’ performance.

A commonly held belief is that animation helps users maintain object constancy and thus helps users to relate the two states of the system. This notion was described well by Robertson and his colleagues in their paper on “cone trees”, a 3D visualization technique that they developed.

“Interactive animation is used to shift some of the user's cognitive load to the human perceptual system…. The perceptual phenomenon of object constancy enables the user to track substructure relationships without thinking about it. When the animation is completed, no time is needed for reassimilation.” [19 p. 191].

Researchers including Robertson have demonstrated through informal usability studies that animation can improve subjective user satisfaction. However, there have been few controlled studies looking specifically at how animation affects user performance. These studies are summarized below.

1.1 Animation takes time

One potential drawback of adding animation to an interface or visualization is that animation, by definition, takes time. This brings up a fundamental trade-off between the time spent animating and the time spent using the interface. At one extreme with no animation, system response can be instantaneous. Users spend all of their time using the system. However, the user may then spend some time after an abrupt transition adjusting to the new representation of information and relating it to the previous representation.

At the other extreme, each visual change in the interface is accompanied by a smooth transition that relates the old representation to the new one. While developers of animated systems hope that this animation makes it easier for users to relate the different states of the system, there is clearly a trade-off in how much time is actually spent on the transition. If the transition is too fast, users may not be able to make the connection, and if the transition is too long, the users’ time will be wasted. The ideal animation time is likely to be dependent on a number of factors, including task type, and the user's experience with the interface and the data. In pilot studies and our experience building animated systems, we have found that animations of 0.5 to 1.0 second seem to strike a balance. Others have found one second animations to be appropriate [9 p. 185].

In the worst case, animations can be thought of as an increase in total system response time. Typically, system response time is defined to mean the time between when the user initiates an action, and when the computer starts to display the result. This definition comes from the days of slow displays on computer terminals. This metric was chosen because users could start planning their response as soon as the first data were displayed. With many animations, however, the user does not see the relevant data until the animation is nearly finished, and thus the animation time is an important part of system response. We thus define the total system response time to include the animation time (Figure 1).

Figure 1. Model of user interface timing with animation (adapted from [22 p. 353]).
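
Under this definition, animation time simply adds to the time before the user can act on the new view, while an abrupt change may instead cost reassimilation time afterward. A trivial, hypothetical accounting of the trade-off (the numbers are made up):

```python
# Hypothetical accounting of the animation trade-off; the numbers are illustrative only.
def time_to_usable_view(animation_s: float, reassimilation_s: float) -> float:
    """Time until the user can act: the transition itself plus any time spent
    relating the new view to the old one after an abrupt change."""
    return animation_s + reassimilation_s

print(time_to_usable_view(animation_s=0.0, reassimilation_s=1.5))   # abrupt jump
print(time_to_usable_view(animation_s=0.75, reassimilation_s=0.0))  # smooth transition
```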

In many application domains, the system may need some time to gather data (such as with the World Wide Web), or to process it. In these cases, inserting an animation where a delay is necessitated is not likely to harm productivity because users would have to wait anyway. However, since the delay associated with the Web is often hard to predict, matching animations to the Web retrieval time could be difficult. The bigger problem is when the computer could have responded instantly, and the animation slows down the computer's response time.

Researchers have been studying system response time since the 1960s, and as it happens, users’ responses to system delays are more complex than it may at first appear. There is much research showing that user satisfaction decreases as delays increase (see the recent report on the World-Wide Web for a typical example [21]). However, this satisfaction does not necessarily correlate with performance. One paper showed that users pick different interaction strategies depending on system response times [25]. It showed that users’ actual performance depends on a complex mix of the task, the delay, and the variability of the delay, among other things. One typical study showed productivity increasing as delays decreased for data entry tasks [12]. However, another study found an increase in data entry productivity when delays increased, up to a point [3].

Thus, the fact that animations take time does not necessarily imply that they will hurt productivity. Since they may, in fact, reduce cognitive effort, as suggested by Robertson and others, we believe that animation may improve some kinds of task performance.

1.2 Types of computer animation

Animation in computer interfaces can actually mean many different things. Baecker and Small summarized many of the ways that objects can be animated on computer screens [1]. Animation can consist of moving a static object within a scene, or the object may change its appearance as it is moved. A scene may be larger than can fit on the screen, and the viewpoint can be changed with animated movement by rendering “in-between frames” part way between the starting and ending state. There are numerous other types of animations as well.
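
Rendering “in-between frames” for an animated change of viewpoint is, at its simplest, interpolation between the starting and ending camera state. A minimal sketch (linear interpolation; a real system might ease in and out), with hypothetical viewpoint coordinates:

```python
# Minimal sketch: generate in-between viewpoints between a start and end state.
def inbetween_viewpoints(start, end, duration_s=0.75, fps=30):
    """Yield interpolated (x, y, zoom) viewpoints, one per rendered frame."""
    frames = max(1, round(duration_s * fps))
    for i in range(1, frames + 1):
        t = i / frames
        yield tuple(s + t * (e - s) for s, e in zip(start, end))

for view in inbetween_viewpoints(start=(0.0, 0.0, 1.0), end=(10.0, 5.0, 2.0)):
    pass  # render(view) each frame; about 22 in-between frames at 30 fps for 0.75 s
```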

In general, animation is often used to help users relate different states in the interface. These changes can be in the data within the interface or in the interface itself. Some systems that use animation of the data include the Information Visualizer [9], Cone Trees [19], the Continuous Zoom system [20], and WebBook and WebForager [10]. Chang and Ungar discussed the application of animation principles from the arts and cartoons to user interfaces, showing that more than just simple movement of interface objects is possible [11].

Some researchers have investigated the use of animated icons [2]. Others have used animation to try to improve teaching how algorithms work [8, 24]. There are also several user interface systems that include explicit support for creating animated interfaces. A good example of this is the Morphic system [18].

1.3 Zoomable User Interfaces

We are interested in understanding animation because for the past several years, we have been exploring Zoomable User Interfaces [4, 5, 6, 7, 15, 17]. Zoomable User Interfaces (ZUIs) are a visualization technique that provides access to spatially organized information. A ZUI lets users zoom in and out, or pan around to view much more information than can normally fit on a single screen. We have developed a system called Pad++ to explore ZUIs.

The ZUIs we have built typically provide three types of animated movement, as well as other kinds of animated transitions such as dissolves. The three types of animated movement are motion of objects within a scene, manual change of viewpoint (through various steering mechanisms), and automatic change of viewpoint (during hyperlinks).

We believe that animating changes of viewpoint during hyperlinks is the most important kind of animation in Pad++, since these animations appear to help users understand where they are in the information space. They are also easy to understand and use. As one child using KidPad (an authoring tool for children within Pad++) said, “With [traditional hypertext] it is like closing your eyes and when you open them you're in a new place. Zooming lets you keep your eyes open” [14].

We and others have also used Pad++ to make animated zooming presentations, and regularly receive very positive feedback from audiences. However, as HCI researchers, we want to develop an understanding of exactly where, if at all, animated ZUIs perform better than traditional approaches.

As we started to design a study that would help us to understand the benefits of ZUIs, we realized that the animation we employ is orthogonal to the use of zoom to organize data. It is possible to have an interface with or without animation, and with or without a multi-scale structure. In order to understand ZUIs better, we decided to attempt to understand the effects of animated movement, and multi-scale structure separately.

Thus, in this paper, we start by examining the most basic and fundamental kind of animation used in ZUIs. We examine how animated changes of viewpoint during hyperlink transitions affect users’ ability to build a mental map of a flat information space. We specifically chose not to investigate zooming or multiscale structures in this work because we felt zooming would be a confounding variable to the animation effects we are investigating.

1.4 Previous Studies

There have been few studies that looked specifically at how animation affects users’ abilities to use interfaces. One study looked at transition effects (such as a dissolve) and animation of an object within the view [16]. This study found that both a dissolve transition effect and animated motion of objects within a scene independently helped users to solve problems. This study is important because it motivates the common belief and intuition that animation can help users maintain object constancy and thus improve task performance. However, this study did not address animation of viewpoint, which is the primary focus of the current study.

Another study that is perhaps more relevant looked at animating the viewpoint of a spatial information visualization [13]. That study compared users’ ability to find items when more items were used than could fit on a single display. Different navigation techniques were used to move through the items (scrollbars, zoom, and fisheye view), and each navigation technique was tested with and without animation. In this experiment, the use of animation did not have a significant effect on any of the navigation techniques. However, animation was implemented with just a single in-between frame. For animation to be perceived as smooth apparent movement, there must be several in-between frames, and they must be shown quickly (typically greater than 10 frames per second, and preferably 20 or 30 frames per second). Thus it does not appear that the results accurately describe the effects of animation. One interesting aspect of the study that does appear significant, but not relevant to animation, is that the zooming visualization technique performed significantly better than either the scrollbar or fisheye view visualization technique.

URL: //www.sciencedirect.com/science/article/pii/B9781558609150500159

DYNAMIC QUERIES FOR VISUAL INFORMATION SEEKING

In The Craft of Information Visualization, 2003

User-interface design.

Humans can recognize the spatial configuration of elements in a picture and notice relationships among elements quickly.

This highly developed visual system means people can grasp the content of a picture much faster than they can scan and understand text.

Interface designers can capitalize on this by shifting some of the cognitive load of information retrieval to the perceptual system. By appropriately coding properties by size, position, shape, and color, we can greatly reduce the need for explicit selection, sorting, and scanning operations. However, our understanding of when and how to apply these methods is poor; basic research is needed. Although our initial results are encouraging, there are many unanswered user-interface design questions. How can we

design widgets to specify multiple ranges of values, such as 14 to 16 or 21 to 25?

let users express Boolean combinations of slider settings?

choose among highlighting by color, points of light, regions, and blinking?

allow varying degrees of intensity in the visual feedback?

cope with thousands of points or areas by zooming?

weight criteria?

select a set of sliders from a large set of attributes?

provide “grand tours” to automatically view all dimensions?

include sound as a redundant or unique coding?

support multidimensional input?

Display issues.

We must reexamine basic research on color, sound, size, and shape coding in the context of dynamic queries. Of primary interest are the graphical display properties of color (hue, saturation, brightness), texture, shape, border, and blinking. Color is the most effective visual display property, and it can be an important aid to fast and accurate decision making [12]. Auditory properties may be useful in certain circumstances (for example, lower frequency sounds associated with large values; higher frequency with small values), especially as redundant reinforcement feedback.

We understand that rapid, smooth screen changes are essential for the perception of patterns, but we would like to develop more precise requirements to guide designers. In our experience, delays of more than two- to three-tenths of a second are distracting, but precise requirements with a range of situations and users would be helpful.

In geographic applications, sometimes points on a map are a natural choice, but other applications require overlapping areas. Points and areas can be on or off (in which case monochrome displays may be adequate), but we believe that color coding may convey more information. Texture, shape, and sound coding also have appeal.

Other issues emerge when we cannot identify a natural two-dimensional representation of the data. Of course we can always use a textual representation. Another possibility is a two-dimensional space, such as a scattergram. Instead of showing homes as points of light on a city map, they could be points of light on a graph whose axes are the age of the house and its price. We could still use sliders for number of bedrooms, quality of schools, real estate taxes, and so on.

Tree maps — two-dimensional mosaics of rectangular areas — are another way to visualize large amounts of hierarchical information. For example, we built a business application that visualized sales data for a complete product hierarchy, color-coded by profitability and size-coded by revenue [13]. Twelve professional users in our usability study could rapidly determine the state of financial affairs — large red regions indicate trouble and blue areas signal success. A slider let them observe quickly the changes to the tree map over time to spot trends or problems.
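
A tree map of the kind described (rectangles sized by revenue, colored by profitability) can be laid out with the simple slice-and-dice algorithm: divide the rectangle among children in proportion to their size, alternating the split direction at each level. A hypothetical sketch with made-up data:

```python
# Hypothetical sketch of a slice-and-dice tree map layout; data and names are invented.
def subtree_revenue(node):
    children = node.get("children")
    return sum(subtree_revenue(c) for c in children) if children else node["revenue"]

def slice_and_dice(node, x, y, w, h, horizontal=True, out=None):
    """Recursively divide the rectangle among children, alternating split direction."""
    out = [] if out is None else out
    children = node.get("children")
    if not children:
        out.append((node["name"], (x, y, w, h), node.get("profit", 0.0)))
        return out
    total = subtree_revenue(node)
    offset = 0.0
    for child in children:
        share = subtree_revenue(child) / total
        if horizontal:
            slice_and_dice(child, x + offset, y, w * share, h, False, out)
            offset += w * share
        else:
            slice_and_dice(child, x, y + offset, w, h * share, True, out)
            offset += h * share
    return out

catalog = {"name": "All products", "children": [
    {"name": "Hardware", "children": [
        {"name": "Laptops", "revenue": 60, "profit": 0.10},
        {"name": "Phones",  "revenue": 40, "profit": -0.05},
    ]},
    {"name": "Software", "revenue": 50, "profit": 0.30},
]}
for name, rect, profit in slice_and_dice(catalog, 0, 0, 100, 60):
    # Area is proportional to revenue; a renderer would color the cell by profit.
    print(f"{name:8s} rect={tuple(round(v, 1) for v in rect)} profit={profit:+.2f}")
```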

Input issues.

Widget design is a central issue. Even in our early explorations we were surprised that none of the existing user-interface-management systems contained a double-boxed slider for the specification of a range (more than $70,000, less than $130,000). In creating such a slider we discovered how many design decisions and possibilities there were. In addition to dragging the boxes, we had to contend with jumps, limits, display of current values, what to do when the boxes were pushed against each other, choice of colors, possible use of sound, and so on.
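
The design decisions listed for the double-boxed slider (limits, what happens when the boxes are pushed against each other) mostly come down to clamping logic in the value model. A hypothetical sketch of just that model, independent of any toolkit:

```python
# Hypothetical value model for a two-thumb (double-boxed) range slider.
class RangeSlider:
    def __init__(self, minimum, maximum, low, high):
        self.minimum, self.maximum = minimum, maximum
        self.low, self.high = low, high

    def set_low(self, value):
        # Clamp to the track limits and stop when pushed against the other box.
        self.low = max(self.minimum, min(value, self.high))

    def set_high(self, value):
        self.high = min(self.maximum, max(value, self.low))

price = RangeSlider(0, 500_000, low=70_000, high=130_000)
price.set_low(150_000)          # pushed against the upper box: stops at 130,000
print(price.low, price.high)    # 130000 130000
```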

We also came to realize that existing widgets are poorly matched with the needs of expert users, who are comfortable with multidimensional browsing. Two-dimensional input widgets to select two values at once are not part of any standard widget set that we have reviewed, so we created the one shown in Figure 9. Using a single widget means that only one selection is required to set two values and that correct selections can be guaranteed. In Figure 9, for example, the dotted areas indicate impossible selections (the cheapest seven-bedroom house is $310,000).

Figure 9. Two prototype two-dimensional widgets. (A) A point indicating the number of bedrooms (three) and cost of a home ($220,000) with a single selection. (B) A range of bedrooms (three to four) and cost ($130,000 to $260,000).

Input widgets that can handle three or more dimensions may facilitate the exploration of complex relationships. Current approaches for high-dimensional input and feedback are clumsy, but research with novel devices such as data gloves and a 3D mouse may uncover effective methods. With a 3D mouse, users lift the mouse off the desk and move it as a child moves a toy airplane [14]. The mouse system continuously outputs the six parameters (six degrees of freedom) that define its linear and angular position with respect to a fixed coordinate system in space.

Designers can decompose the rotational motion of the mouse into the combination of:

a rotation around the handle of the mouse and

a change in the direction the handle is pointing.

When the mouse is held as a pointer, the rotation around the handle is created by a twist of the arm, and it may be natural to users to make the same twisting motion to increase the level of a database parameter as they would to increase the volume of a car radio. Changing the pointing direction of the mouse handle is done by the same wrist flexion that a lecturer would use to change the orientation of a laser pointer to point at another part of the conference screen. It may then also feel natural to users to imagine the planar space of two database parameters as vertical in front of them and point at specific parts by flexing their wrist up, down, and sideways.

For example, sophisticated users could perform a dynamic query of the periodic table of elements using the 3D mouse. They would find elements of larger atomic mass by translating the mouse upward; for larger atomic numbers they would move to the right; for larger ionization energies they would move toward the display; for larger atomic radius they would bend their wrist up; for larger ionic radius they would bend their wrist to the right; for larger electronegativity they would twist their arm clockwise. Sliders should probably still be present on the screen, but would move by themselves and give feedback on parameter values.
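
The periodic-table example amounts to a lookup from each degree of freedom of the 3D mouse to a query parameter. A hypothetical sketch, with the mapping taken from the example above and the rest invented for illustration:

```python
# Hypothetical mapping of 3D-mouse degrees of freedom to dynamic-query parameters.
DOF_TO_PARAMETER = {
    "translate_up":     "atomic_mass",
    "translate_right":  "atomic_number",
    "translate_toward": "ionization_energy",
    "wrist_up":         "atomic_radius",
    "wrist_right":      "ionic_radius",
    "twist_clockwise":  "electronegativity",
}

def apply_motion(query_levels, dof, delta):
    """Raise the level of the parameter tied to this degree of freedom."""
    param = DOF_TO_PARAMETER[dof]
    query_levels[param] = query_levels.get(param, 0.0) + delta
    return query_levels

levels = {}
print(apply_motion(levels, "twist_clockwise", 0.1))   # {'electronegativity': 0.1}
```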

Another input issue is how to specify alphanumeric fields. Although a simple type-in dialog box is possible, more fluid ways of roaming through the range of values are helpful. To this end we developed an alphaslider to let users quickly sweep through a set of items like the days of the week or the 6,000 actor names in a movie database [15].

Dynamic queries are a lively new direction for database querying. Many problems that are difficult to deal with using a keyword-oriented command language become tractable with dynamic queries. Computers are now fast enough to apply a direct-manipulation approach on modest-sized problems and still ensure an update time of under 100 ms. The challenge now is to broaden the spectrum of applications by improving user-interface design, search speed, and data compression. ♦

URL: //www.sciencedirect.com/science/article/pii/B9781558609150500056

Our Perception is Biased

Jeff Johnson, in Designing with the Mind in Mind (Third Edition), 2021

Habituation

A third way in which experience biases perception is called habituation. Repeated exposure to the same (or highly similar) perceptions dulls our perceptual system’s sensitivity to them. Habituation is a very low-level phenomenon of our nervous system: it occurs at a neuronal level. Even primitive animals like flatworms and ameba, with very simple nervous systems, habituate to repeated stimuli (e.g., mild electric shocks or light flashes). People, with our complex nervous systems, habituate to a range of events, from low-level ones like a continually beeping tone, to medium-level ones like a blinking ad on a website, to high-level ones like a person who tells the same jokes at every party or a politician giving a long, repetitious speech.
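
Habituation at the level described, where each repeated exposure to the same stimulus dulls the response, is often modeled as a simple multiplicative decay. A hypothetical sketch with an arbitrary decay factor:

```python
# Hypothetical sketch: response strength decaying with repeated identical stimuli.
def habituated_responses(n_repetitions, initial=1.0, decay=0.7):
    response, out = initial, []
    for _ in range(n_repetitions):
        out.append(round(response, 3))
        response *= decay        # each repeated exposure dulls the response
    return out

print(habituated_responses(6))   # [1.0, 0.7, 0.49, 0.343, 0.24, 0.168]
```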

We experience habituation in computer usage when the same error messages or “Are you sure?” confirmation messages appear again and again. People initially notice them and perhaps respond, but eventually they click them closed reflexively without bothering to read them.

Habituation is also a factor in a recent phenomenon variously labeled “social media burnout” (Nichols, 2013), “social media fatigue,” or “Facebook vacations” (Rainie et al., 2013); newcomers to social media sites and tweeting are initially excited by the novelty of microblogging about their experiences, but sooner or later get tired of wasting time reading tweets about every little thing that their “friends” do or see—for example, “Man! Was that ever a great salmon salad I had for lunch today.”

URL: //www.sciencedirect.com/science/article/pii/B9780128182024000015

What are sensory perceptual systems?

The perceptual system contains primarily visual, auditory, and kinesthetic input, that is, pictures, sounds, and feelings. There is also olfactorial and gustatorial input, that is, smells and tastes.

How many perceptual systems are in the brain?

Human beings possess five basic perceptual systems. The basic orientation system informs us of the position of the body in relation to the environment through receptors sensitive to gravity, such as those in the vestibular mechanism in the inner ear.

What is perceptual system?

A perceptual system is a computational system (biological or artificial) designed to make inferences about properties of a physical environment based on senses. Other definitions may exist.

What is the process by which specialized organs receive stimulus energies from the environment?

Sensation: the process by which we receive physical energy from the environment and encode it into neural signals. Perception: the process of organizing and interpreting sensory information.
