Wednesday, June 13, 2018


The user interface (UI), in the industrial design field of human-computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology.

In general, the goal of user interface design is to produce a user interface that makes it easy (self-explanatory), efficient, and enjoyable (user-friendly) to operate a machine in the way that produces the desired result. This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the human.

With the increased use of personal computers and the relative decline in societal awareness of heavy machinery, the term user interface is generally assumed to mean the graphical user interface, while industrial control panel and machinery control design discussions more commonly refer to human-machine interfaces.

Other terms for the user interface are man-machine interface (MMI) and, when the machine in question is a computer, human-computer interface.


Overview

The user interface or human-machine interface is the part of the machine that handles the human-machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical parts of the Human Machine Interface which we can see and touch.

In complex systems, the human-machine interface is usually computerized. The term human-computer interface refers to such a system. In the context of computing, this term usually extends also to software dedicated to controlling the physical elements used for human-computer interaction.

The engineering of human-machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering.

Tools used to incorporate human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, we use the expression graphical user interface for the human-machine interface on computers, as nearly all of them now use graphics.

Terminology

There is a difference between the user interface and the operator interface or the human-machine interface (HMI).

  • The term "user interface" is often used in the context of (personal) computer systems and electronic devices
    • Where a network of equipment or computers is interlinked through an MES (Manufacturing Execution System) or host to display information.
    • A human-machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
    • The system can expose multiple user interfaces to serve different types of users. For example, a computerized library database may provide two user interfaces, one for library patrons (a limited set of functions, optimized for ease of use) and the other for library personnel (a wide set of functions, optimized for efficiency). A minimal sketch of this idea appears after this list.
  • The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human-machine interface (HMI). HMI is a modification of the original term MMI (man-machine interface). In practice, the abbreviation MMI is still frequently used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but it is more commonly used for human-computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the term refers to the 'layer' that separates the human operating the machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
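
As a loose illustration of the multiple-interface point above, the sketch below exposes one hypothetical library database through a small patron interface and a wider staff interface. The class and method names are invented for this example and are not taken from any real library system.

    # Hypothetical sketch: one shared back end, two user interfaces with different scopes.

    class LibraryCatalogue:
        """Shared back end; names and data are illustrative only."""
        def __init__(self):
            self._books = {"978-0": "Example Title"}
            self._loans = {}

        def search(self, text):
            return [t for t in self._books.values() if text.lower() in t.lower()]

        def add_book(self, isbn, title):
            self._books[isbn] = title

        def usage_report(self):
            return {"titles": len(self._books), "loans": len(self._loans)}


    class PatronInterface:
        """Limited set of functions, optimized for ease of use."""
        def __init__(self, catalogue):
            self._cat = catalogue

        def find(self, text):
            return self._cat.search(text)


    class StaffInterface(PatronInterface):
        """Wide set of functions, optimized for efficiency."""
        def add(self, isbn, title):
            self._cat.add_book(isbn, title)

        def report(self):
            return self._cat.usage_report()


    if __name__ == "__main__":
        catalogue = LibraryCatalogue()
        print(PatronInterface(catalogue).find("example"))   # patrons can only search
        staff = StaffInterface(catalogue)
        staff.add("978-1", "Another Title")                 # staff can also manage stock
        print(staff.report())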

In science fiction, HMI is sometimes used to refer to what is better described as a direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses, the artificial extensions that replace missing body parts (e.g., cochlear implants).

In some circumstances, computers may observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, the direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.


History

The history of the user interface can be divided into the following phases according to the dominant user interface type:

1945-1968: The batch interface

In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible.

The input side of the user interface for batch machines was mainly punched cards or equivalent media such as paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all.

Submitting a job to a batch machine involved, first, preparing a deck of punched cards describing a program and a data set. Punching the program cards was not done on the computer itself, but on keypunches, specialized typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes meant to be parsed by the smallest possible compilers and interpreters.

Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply datasets or helper software. The job would generate a printout, containing the final results or (all too often) an abort notice with an error log attached. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation.

Turnaround time for a single job often spanned entire days. If one were very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards.

Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called "load-and-go" systems. These used a monitor program that was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.

1969-present: The command line user interface

Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change their mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master.
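
As a rough sketch of the request-response interaction model described above, the short loop below reads one textual command at a time and prints one response. The command vocabulary ("time", "echo", "quit") is invented for this illustration and is not taken from any real shell.

    # Minimal sketch of a command-line request-response loop.
    # The command vocabulary here is invented for this example.

    import datetime

    def handle(request: str) -> str:
        """Map one textual command to one textual response."""
        verb, _, rest = request.strip().partition(" ")
        if verb == "time":
            return datetime.datetime.now().isoformat(timespec="seconds")
        if verb == "echo":
            return rest
        return f"unknown command: {verb!r}"

    if __name__ == "__main__":
        while True:                      # one request-response transaction per pass
            request = input("> ")
            if request.strip() == "quit":
                break
            print(handle(request))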

The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the Rule of Least Surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users.

The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown onto the phosphor dots of a screen more quickly than a printer head or carriage could move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s.

Equally important, the existence of an accessible screen, a two-dimensional display of text that could be rapidly and reversibly modified, made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6) and vi(1), are still a living part of the Unix tradition.

1985: SAA User Interface or Text-based User Interface

In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard, which included the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS or Windows Console applications use that standard as well.

It defined that the pulldown menu system should be at the top of the screen, the status bar at the bottom, and that shortcut keys should stay the same for all common functionality (F2 to Open, for instance, would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application, so it caught on quickly and became an industry standard.
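
The layout convention described above can be sketched as a plain text screen: a pulldown-style menu bar on the top row, a status bar on the bottom row, and function-key shortcuts that stay the same on every screen. The sketch below uses Python's standard curses module (available on Unix-like terminals); the menu labels and the key assignments other than F2=Open are only illustrative and are not a faithful rendering of the SAA/CUA specification.

    # Rough sketch of a CUA-style text screen: menu bar on the top row,
    # status bar on the bottom row, and function-key shortcuts.
    # Menu labels and most key choices are illustrative, not the real spec.

    import curses

    def main(stdscr):
        try:
            curses.curs_set(0)                       # hide the cursor if supported
        except curses.error:
            pass
        status = "Ready"
        while True:
            height, width = stdscr.getmaxyx()
            stdscr.erase()
            # Pulldown-style menu bar across the top of the screen.
            stdscr.addstr(0, 0, " File  Edit  View  Help ".ljust(width - 1),
                          curses.A_REVERSE)
            stdscr.addstr(2, 2, "F2=Open   F3=Exit")
            # Status bar across the bottom of the screen.
            stdscr.addstr(height - 1, 0, status.ljust(width - 1), curses.A_REVERSE)
            stdscr.refresh()

            key = stdscr.getch()
            if key == curses.KEY_F2:                 # same shortcut on every screen
                status = "Open: a file dialog would appear here"
            elif key == curses.KEY_F3:
                break
            else:
                status = "Ready"

    if __name__ == "__main__":
        curses.wrapper(main)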

1968-present: Graphical User Interface

  • 1968 - Douglas Engelbart demonstrates NLS, a system that uses a mouse, pointers, hypertext, and multiple windows.
  • 1970 - Researchers at Xerox Palo Alto Research Center (many from SRI) develop the WIMP paradigm (Windows, Icons, Menus, Pointers)
  • 1973 - Xerox Alto: commercial failure due to cost, poor user interface, and lack of programs
  • 1979 - Steve Jobs and other Apple engineers visit Xerox PARC. Pirates of Silicon Valley dramatizes the event, but Apple had already been working on a GUI before the visit
  • 1981 - Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, a few hours to recover from a crash), and poor marketing
  • 1984 - Apple Macintosh popularizes the GUI. Its Super Bowl ad, shown only once, was the most expensive ever made at that time
  • 1984 - MIT X Window System: hardware-independent platform and network protocol for developing GUIs on UNIX-like systems
  • 1985 - Windows 1.0 - provides a GUI for MS-DOS. No overlapping windows (tiled instead).
  • 1985 - Microsoft and IBM start work on OS/2, intended to eventually replace MS-DOS and Windows
  • 1986 - Apple threatens to sue Digital Research because their GUI desktop looks too similar to Apple's Mac.
  • 1987 - Windows 2.0 - Overlapping and resizable windows, keyboard and mouse enhancements
  • 1987 - Macintosh II: the first full-color Mac
  • 1988 - OS/2 1.10 Standard Edition (SE) has a GUI written by Microsoft, very similar to Windows 2


Interface Design

The main methods used in interface design include prototyping and simulation.

The typical human-machine interface design consists of the following stages: interaction specifications, interface software specifications and prototyping:

  • Common practices for interaction specification include user-centered design, personas, activity-oriented design, scenario-based design, and resiliency design.
  • Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
  • Common practices for prototyping are based on interactive design built from libraries of interface elements (controls, decorations, etc.); a small sketch of this approach follows this list.
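
As a loose illustration of prototyping from a library of interface elements, the sketch below assembles a throwaway search form from Tkinter's stock widgets. The window title, field names, and behaviour are invented placeholders for this example.

    # Quick interface prototype assembled from a library of ready-made
    # elements (Tkinter widgets). All content shown is placeholder text.

    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    root.title("Prototype: search form")        # invented title for the sketch

    frame = ttk.Frame(root, padding=10)
    frame.grid()

    ttk.Label(frame, text="Search term:").grid(row=0, column=0, sticky="w")
    entry = ttk.Entry(frame, width=30)          # control from the element library
    entry.grid(row=0, column=1)

    status = ttk.Label(frame, text="Ready")     # decoration/status element
    status.grid(row=2, column=0, columnspan=2, sticky="w")

    def on_search():
        # In a prototype the behaviour can be faked; only the layout is evaluated.
        status.config(text=f"Searching for {entry.get()!r}...")

    ttk.Button(frame, text="Search", command=on_search).grid(row=1, column=1, sticky="e")

    root.mainloop()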

Quality

All great interfaces have eight qualities or characteristics:

  1. Clarity: The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements.
  2. Concision: It is easy to make an interface clear by over-clarifying and labeling everything, but this leads to interface bloat, where there are too many items on the screen at the same time. If too many things are on screen, finding what you are looking for is difficult, and the interface becomes tedious to use. The real challenge in creating a great interface is to make it concise and clear at the same time.
  3. Familiarity: Even if someone uses an interface for the first time, certain elements can still be familiar. Real-life metaphors can be used to communicate meaning.
  4. Responsiveness: A good interface should not feel sluggish. This means the interface should give the user good feedback about what is happening and whether the input is being processed successfully.
  5. Consistency: Keeping the interface consistent across your application is important because it lets users recognize usage patterns.
  6. Aesthetics: While you do not need to make an interface attractive for it to do its job, making it look good will make the time your users spend with your application more enjoyable; and happier users can only be a good thing.
  7. Efficiency: Time is money, and a great interface should make users more productive through good shortcuts and design.
  8. Forgiveness: A good interface should not punish users for their mistakes, but should instead provide the means to fix them; a small sketch of this idea follows the list.
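
As a loose illustration of the forgiveness quality above, the sketch below records each destructive action on a simple undo stack so the user can reverse a mistake rather than be punished for it. The document model and action names are invented for this example.

    # Minimal sketch of a forgiving interface: every destructive action is
    # recorded so the user can undo it. The "document" model is invented here.

    class Document:
        def __init__(self):
            self.lines = ["first line", "second line"]
            self._undo_stack = []        # (description, restore-function) pairs

        def delete_line(self, index):
            removed = self.lines.pop(index)
            # Record how to put things back before the change becomes final.
            self._undo_stack.append(
                ("delete line", lambda: self.lines.insert(index, removed))
            )

        def undo(self):
            if not self._undo_stack:
                return "nothing to undo"
            description, restore = self._undo_stack.pop()
            restore()
            return f"undid: {description}"

    doc = Document()
    doc.delete_line(0)          # the "mistake"
    print(doc.lines)            # ['second line']
    print(doc.undo())           # undid: delete line
    print(doc.lines)            # ['first line', 'second line']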

Principle of least astonishment

The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at a time, leading to the conclusion that novelty should be minimized.

Habit formation

Source of the article: Wikipedia
