
"O" Networking Definitions & Concepts...

Object .. to .. Overhead Traffic




Object:

In its role as a current computing buzzword, the term object may refer to any type of entity that can have properties and actions (or methods) associated with it. Each property represents a slot into which specific information (a value for the property) can be filled. A particular combination of properties defines an object or object type, and a particular combination of values for the properties defines a specific instance of that object type.

In networking, the term object refers to an entity in some type of grouping, listing, or definition. For example, users, machines, devices, and servers are considered network-related objects. Abstract entities, such as groups, queues, and functions, can also be treated as objects.

Objects are mainly of interest in relation to specific networking contexts or models. For example, managed objects are elements that can be used to accomplish a task or monitored to get a performance overview and summary. These objects are important because they provide the data for the network management programs that network supervisors may be running.

In a Novell NetWare network, an object is an entity that is defined in a file server's bindery in NetWare versions 2.x and 3.x, or in the NetWare Directory Services (NDS) in versions 4.x. NDS objects are the objects contained in the NDS database. These are discussed in the NDS article.

The global information tree contains definitions of many of the objects used in network management and other network-related activities.

In object-oriented programming (OOP), an object is a self-contained component that consists of both data (properties) and code (actions). Programming objects may be defined in terms of other objects, in which case the derived object may inherit properties and methods from the parent object. An actual instance of an object type will contain specific data values and methods that can distinguish it from other instances of that object type.

Inheritance and polymorphism, which enable a single object type to look and behave differently (but appropriately) in different instances, are two features that help give OOP the power and flexibility for which it is noted. See the related articles Global Information Tree and NDS (NetWare Directory Services).
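The property, inheritance, and polymorphism concepts described above can be sketched in a short Python example. The class names are purely illustrative, not drawn from any particular product:

```python
class Shape:
    """Parent object type: properties (data slots) plus actions (methods)."""
    def __init__(self, name):
        self.name = name                 # a property slot filled with a value

    def describe(self):
        # Calls self.area(), which each derived object type supplies.
        return f"{self.name}: area {self.area():.1f}"

class Circle(Shape):
    """Derived object type: inherits 'describe', defines its own 'area'."""
    def __init__(self, radius):
        super().__init__("circle")
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self.side = side

    def area(self):
        return self.side ** 2

# Polymorphism: the same 'describe' call behaves appropriately for
# each instance of each object type.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())
```

Each instance carries its own property values (its radius or side), which is what distinguishes it from other instances of the same object type.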

ObjectBroker:

ObjectBroker, from Digital Equipment Corporation (DEC), is a package that allows applications running in object-oriented environments, but on different hardware, to communicate with each other in a transparent manner. It also enables developers to create object-oriented applications and services that are distributed across a network.

ObjectBroker runs on a variety of platforms, including DEC's own OpenVMS, ULTRIX, and OSF/1 environments, several other UNIX variants, Macintosh System 7, Microsoft Windows, and Windows NT.

Object Management Architecture (OMA):

The Object Management Architecture (OMA) is the center of all the activity undertaken by the Object Management Group (OMG). OMG was formed to help reduce the complexity, lower the costs, and hasten the introduction of new software applications. OMG does this through the introduction of an architectural framework (OMA) with supporting detailed interface specifications. Implementations are the domain of vendors, end-users, and those developing products and projects to solve a particular computing or business problem. Specifications are the domain of OMG membership. These specifications drive the industry towards interoperable, reusable, portable software components based on open, standard object-oriented interfaces.

The OMA Reference Model:

The OMA Reference Model partitions the OMG problem space into practical, high-level architectural components that can be addressed by member-contributed technology. It forms a conceptual roadmap for assembling the resultant technologies while allowing for different design solutions. Specifically, the Reference Model identifies and characterizes components, interfaces, and protocols that compose the OMA but does not in itself define them in detail. The OMA can be viewed as three major segments consisting of five critical components:

  1. Application Oriented - the OMA characterizes Application Interfaces and Common Facilities as solution-specific components that rest closest to the end user.
  2. System Oriented - Object Request Brokers and Object Services are more concerned with the "system" or infrastructure aspects of distributed object computing and management.
  3. Vertical Market Oriented - Domain Interfaces are vertical application or domain-specific interfaces. These, coupled with combinations of inherited Common Facilities and Object Services interfaces, provide critical application frameworks to a wide variety of industries.

Regardless of which segment one is predominantly interested in, it's important to note that all communications between components are managed by the Object Request Broker, the foundation of the OMA. Object Request Broker technology is now considered the most important approach for deploying open, distributed, heterogeneous computing solutions.

It is equally important to note that the OMA assumes that underlying services provided by a platform's operating system and lower-level basic services, such as network computing facilities, are available and usable by OMA implementations.

OMA Component Definitions:

Object Request Broker - commercially known as CORBA™, the ORB is the communications heart of the standard. It provides an infrastructure allowing objects to converse, independent of the specific platforms and techniques used to implement the objects. Compliance with the Object Request Broker standard guarantees portability and interoperability of objects over a network of heterogeneous systems.

Object Services - these components standardize the life-cycle management of objects. Interfaces are provided to create objects, to control access to objects, to keep track of relocated objects, and to control the relationship between styles of objects (class management). Also provided are the generic environments in which single objects can perform their tasks. Object Services provide for application consistency and help to increase programmer productivity.

Common Facilities - commercially known as CORBAfacilities, common facilities provide a set of generic application functions that can be configured to the specific requirements of a particular configuration. These are facilities that sit closer to the user, such as printing, document management, database, and electronic mail facilities. Standardization leads to uniformity in generic operations and to better options for end users for configuring their working environments. CORBA facilities also include facilities for use over the Internet.

Domain Interfaces - Domain Interfaces represent vertical areas that provide functionality of direct interest to end-users in particular application domains. Domain interfaces may combine some common facilities and object services, but are designed to perform particular tasks for users within a certain vertical market or industry.

Application Objects - while not an actual OMG standardization activity, application interfaces are critical when considering a comprehensive system architecture. The Application Interfaces represent component-based applications performing particular tasks for a user. An application is typically built from a large number of basic objects - some specific to the application at hand, some domain specific, some from object services and some built from a set of common facilities. These applications benefit greatly from the strengths of robust object systems development. Better abstraction of the problem space and solution, reusability of components and far simpler extension over time are well-known aspects of good object application development.

The OMG Object Model:

When any large body of contributors works together for a common technical good, it is necessary to work from a consistent basis of understanding and terminology. To this end, the OMG Object Model defines common object semantics for specifying the externally visible characteristics of objects in a standard and implementation-independent way. The common semantics characterize objects that exist in an OMG-conformant system.

The OMG Object Model is based on a small number of basic concepts:

  • objects
  • operations
  • types
  • subtyping

An object can model any kind of entity such as a person, a boat, a document, etc. Operations are applied to objects and allow one to conclude specific things about an object such as determining a person's date of birth. Operations associated with an object collectively characterize an object's behavior.

Objects are created as instances of types. One can view a type as a template for object creation; an instance of type boat could be a red boat, 38 ft long, with a seating capacity of 6. A type characterizes the behavior of its instances by describing the operations that can be applied to those objects. Relationships can also exist between types: a speedboat, for example, could be related to a generic form of boat. These relationships between types are known as supertype/subtype relationships.
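These type, instance, and supertype/subtype concepts can be sketched with the boat example from the text. Python classes are used here only for illustration; the attribute and method names are assumptions, not part of the OMG model:

```python
class Boat:
    """A type: a template describing the operations its instances support."""
    def __init__(self, color, length_ft, seats):
        self.color = color
        self.length_ft = length_ft
        self.seats = seats

    def capacity(self):
        # An operation applied to instances of the type.
        return self.seats

class Speedboat(Boat):
    """A subtype: inherits Boat's operations and adds its own."""
    def top_speed_kn(self):
        return 45

# The instance described in the text: a red boat, 38 ft, seating 6.
red_boat = Boat("red", 38, 6)
print(red_boat.capacity())
```

Here `Boat` plays the role of supertype and `Speedboat` of subtype; every operation defined for `Boat` can also be applied to `Speedboat` instances.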

The OMG Object Model defines a core set of requirements, based on the above mentioned basic concepts, that must be supported in any system that complies with the Object Model standard. While the Core Model serves as the common ground, the OMG Object Model also allows for extensions to the Core to enable even greater commonality within different technology domains. The concepts, known as Components and Profiles, are supported by the OMA and are discussed at length in the OMA Guide.

Object technology (OT) is composed of numerous characteristics, of which only a few have been discussed in this description. For those new to the technology, it is recommended that the reader pursue a more in-depth reading.

Conclusion:

The members of OMG have a shared goal of developing and using integrated software systems. These systems should be built using a methodology that supports modular production of software; encourages reuse of code; allows for useful integration along lines of developers, operating systems, and hardware; and enhances long-range maintenance of that code. Members of OMG believe that the object-oriented approach to software construction best supports their goals. The OMG staff exists to support the vision of its membership and to promote object technology and its benefits.

If you would like more information on object technology, OMG, or the OMA, please contact the Object Management Group at +1-508-820-4300 or through email: info@omg.org

The OMG Architecture Board:

In January 1996, as part of its reorganization, the Object Management Group established the Architecture Board to oversee and maintain consistency in the OMG Technical Process. Because of the growing membership and increasingly diverse needs of end users, as well as the continuous revision of existing specifications, OMG found it necessary to create the Board as a means of keeping track of existing specifications and ensuring that new specifications are consistent with them and do not overlap or duplicate previous work.

[Figure: The OMG Reference Model]

Object Management Group (OMG):

OMG was founded in May 1989 by Christopher Stone and eight companies: 3Com Corporation, American Airlines, Canon, Inc., Data General, Hewlett-Packard, Philips Telecommunications N.V., Sun Microsystems and Unisys Corporation. In October 1989, OMG began independent operations as a non-profit corporation. Through the OMG's commitment to developing technically excellent, commercially viable and vendor independent specifications for the software industry, the consortium now includes over 700 members. OMG is developing the "Architecture for the Connected World" through its worldwide standard specifications: CORBA/IIOP, Object Services, Internet Facilities and Domain Interface specifications. OMG is headquartered in Framingham, Massachusetts, USA, with international marketing partners in the UK, Germany, Japan, India and Australia.

OMG was formed to create a component-based software marketplace by hastening the introduction of standardized object software. The organization's charter includes the establishment of industry guidelines and detailed object management specifications to provide a common framework for application development. Conformance to these specifications will make it possible to develop a heterogeneous computing environment across all major hardware platforms and operating systems. Implementations of OMG specifications can be found on over 50 operating systems across the world today. OMG's series of specifications detail the necessary standard interfaces for Distributed Object Computing. Its widely popular Internet protocol IIOP (Internet Inter-Orb Protocol) is being used as the infrastructure for technology companies like Netscape, Oracle, Sun, IBM and hundreds of others. These specifications are used worldwide to develop and deploy distributed applications for Manufacturing, Finance, Telecoms, Electronic Commerce, Realtime systems and Health Care.

OMG defines object management as software development that models the real world through representation of "objects." These objects are the encapsulation of the attributes, relationships and methods of software identifiable program components. A key benefit of an object-oriented system is its ability to expand in functionality by extending existing components and adding new objects to the system. Object management results in faster application development, easier maintenance, enormous scalability and reusable software.

The acceptance and use of object-oriented software is widespread and growing. Virtually every major provider and user of computer systems in the world is either using or planning to implement object-oriented tools and applications. Within the next three to five years, revenue from the sale of object-oriented software is projected to exceed three billion dollars.

Object Orientation:

Introduction: Objects and Classes:

Object orientation (or OO) is a technique that has pervaded all aspects of computer science over the past decade or so. Object-oriented ways of thinking have been applied to software system design, operating systems, programming languages, and database systems, to name but a few areas on which this recent technology has had an impact. In this write-up, we will introduce some of the basic concepts of object orientation.

Octet:

An octet is a group or set of eight; in network jargon, it means an eight-bit unit of data. However, many hardware vendors use the term byte instead of octet, and the two are often interchanged. The term is also used when describing frame or packet formats.

Ohm:

An ohm is the unit of resistance; the electrical counterpart to friction. This unit is symbolized by the uppercase Greek omega (Ω).

It is also an electrical unit of measurement used to measure the resistance of a transmission medium to the flow of electrons (that is, a current) through the medium. If the current is a direct current (DC), the term resistance is used. If the current is an alternating current (AC), the term impedance is used. Both resistance and impedance are measured in ohms.
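For the DC case, the unit follows from Ohm's law, V = I × R. A trivial sketch of the arithmetic:

```python
def resistance_ohms(volts: float, amps: float) -> float:
    """Ohm's law for DC circuits: resistance (ohms) = voltage / current."""
    return volts / amps

# A 12 V source driving 2 A through a load implies a 6-ohm resistance.
print(resistance_ohms(12.0, 2.0))
```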

OLTP: (Database)

Online Transaction Processing database.

One-time pad:

The one-time pad is the most secure, and one of the simplest, of all ciphers. It was invented and patented just after World War I (1917) by Gilbert Vernam (of AT&T) and Major Joseph Mauborgne (U.S. Army, chief of the Signal Corps). The fundamental features of this cipher are that the sender and receiver each have a copy of an encryption key, which is as long as the message to be encrypted, and each key is used for only one message and then discarded. That key must be random, that is, without repetitive pattern, and must remain unknown to any attacker. In addition, the key must never be reused, otherwise the cipher becomes trivially breakable.

For example, two identical pads of paper with random letters can be exchanged between sender and receiver. Later, when they wish to communicate, the sender uses the (random) key in the pad (say it is on the first page of his pad) to encrypt a message. One technique is to XOR (i.e., combine in a particular way) the first character of the key with the first character of the plaintext, the second character of the key with the second character of the plaintext, and so on. Even a simple letter-substitution cipher such as has been known at least since Julius Caesar's time can be, and has been, used -- as long as the offset for each letter is determined individually by the corresponding random letter of the key (the traditional "Caesar cipher" used a single offset for the whole message). The sender then sends the encoded message to the receiver, who decrypts it with his copy of the first page of the pad. Both now discard the used key page, having used it only one time.
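The XOR technique described above can be sketched in a few lines of Python. This is a toy illustration of the mechanism only; a real one-time pad additionally requires that the pad be exchanged securely in advance and destroyed after use:

```python
import os

def xor_pad(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the corresponding key byte."""
    if len(key) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))        # one random key byte per message byte

ciphertext = xor_pad(message, pad)    # sender encrypts with the pad
recovered = xor_pad(ciphertext, pad)  # receiver decrypts with the same pad
print(recovered)
```

Because XOR is its own inverse, applying the same pad twice recovers the plaintext exactly; both parties then discard the used pad.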

One-time pads are information-theoretically secure, in that if all the conditions are met properly (i.e., the keys are chosen randomly, are at least as long as the message, and are never reused), then the ciphertext gives the attacking cryptanalyst no information about the message other than its length. This is a very strong notion of security, and implies that one-time pads are secure even against cryptanalysts with infinite computational power. Also, they are one of very few cryptosystems which can be implemented on a deterministic computer which would provably survive any affirmative solution of P vs. NP.

The disadvantages of the method are that it requires very large 'keys', requires that they be exchanged in advance and kept in synchrony when used, be entirely without pattern (ie, random), and requires the keys to be secret from all attackers. It is therefore not practical for most users lacking diplomatic bag protection. It is also very cumbersome for large messages (without automatic equipment). The advent of widely available small and cheap computers has made the one-time pad algorithm much less difficult to use and much faster. Key material distribution is still sufficiently difficult that except for rare circumstances (e.g., spies who must encrypt short messages without access to sophisticated encryption methods), one-time pads are at present mostly of theoretical interest. In some diplomatic or espionage situations, the one-time pad is useful because it can be computed by hand, after breaking a word into equal-sized groups of letters.

The recent development of quantum cryptography has provided a way, theoretically, to securely transmit identical key pads between two locations simultaneously, in such a way that eavesdroppers cannot determine their contents without the eavesdropping being detected. This may eventually provide a better way to distribute one-time pad key materials. It is not yet clear whether this will ever be convenient enough to see widespread use.

One-time pads have been used in specialized circumstances since the early 1900s; the Weimar Republic Diplomatic Service began using the algorithm about 1920. Poor Soviet cryptography (broken by the British, with messages made public in two instances in the 1920s) forced them to improve their systems, and they seem to have gone to one-time pads for some uses around 1930. KGB spies also used pencil and paper one-time pads to communicate. Beginning in the late 1940s, the U.S. and British intelligence agencies were able to break some of the one-time pad traffic to Moscow during WWII as a result of errors made near the end of 1941 in generating/distributing the key material. This huge, decades-long effort was codenamed VENONA. The "hot line" from the White House to the Kremlin during the Cold War reportedly used a one-time pad; this line was used so infrequently that pad exhaustion was a minor concern relative to providing the necessary security.

The information-theoretic security of one-time pads is wholly dependent upon the randomness (or unpredictability) and secrecy of the key pad material. If the key material is perfectly random (and never becomes known to an attacker), then it is information-theoretically secure. If the pad material is generated by a deterministic program, then it is not, and cannot be, a one-time pad; it is a stream cipher. A stream cipher takes a short key and uses it to generate a long pseudorandom stream, which is combined with the message using a mechanism similar to that used in a one-time pad. Stream ciphers can be secure in practice, but cannot be absolutely secure in the same provable sense. At least one of the Fish ciphers used by the German military in WWII turned out to be an insecure stream cipher, not a practical automated one-time pad as its designers seem to have intended. Bletchley Park broke Lorenz machine messages regularly. None of these stream ciphers have the absolute, information-theoretic security of a one-time pad, although there exist stream ciphers that appear to be unbreakable in practice by a cryptanalyst without access to the key.

The similarity between stream ciphers and one-time pads often leads cryptographic novices to invent (usually very insecure) stream ciphers under the mistaken impression that they are using one-time pads. An especially insecure approach is to use any of the random number generators that come with most computer programming languages and operating system call libraries. These typically produce sequences that pass simple statistical tests but are nonetheless highly predictable, making them useless for cryptographic purposes. Though cryptographically secure pseudorandom number generators exist that permit computationally secure stream ciphers, even these do not provide the information-theoretic security of a one-time pad, and a claim that a particular stream cipher is equivalent in strength to a one-time pad is often viewed as a clear sign of snake oil by professional cryptographers.
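The predictability problem is easy to demonstrate. A language's stock PRNG expands a short seed into the entire "key stream", so anyone who recovers (or guesses) the seed reproduces the stream exactly; a sketch using Python's (non-cryptographic) `random` module:

```python
import random

def prng_stream(seed: int, n: int) -> bytes:
    """Deterministically expand a short seed into n pseudorandom bytes."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

sender_stream = prng_stream(1234, 16)
attacker_stream = prng_stream(1234, 16)  # same seed yields the same "pad"
print(sender_stream == attacker_stream)
```

This is precisely why such a construction is a (weak) stream cipher rather than a one-time pad: the effective key is the seed, not the full-length stream.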

Shannon's work can be interpreted as showing that any information-theoretically secure cipher will be effectively equivalent to the one-time pad algorithm. If so, one-time pads offer the best possible security of any cipher, now or ever.

Open:

In a cable, an open refers to a gap or separation in the conductive material somewhere along the cable's path, such as in one wire in a pair. Depending on the gap, this may impede or preclude the transmission of data along the cable.

In networking and other computer-related contexts, open is used as an adjective to refer to elements or interfaces whose specifications have been made public so they can be used by third parties to create compatible (or competing) products. This is in contrast to closed, or proprietary, environments.

Open Application Interface (OAI):

In telecommunications, OAI refers to an interface that can be used to program and change the operation of a PBX (private branch exchange).

Open DataBase Connectivity (ODBC):

An API (Application Program Interface) developed by Microsoft for accessing databases under Windows. An alternative to ODBC is IDAPI (Integrated Database Application Programming Interface), a standard proposed by Borland, IBM, Novell, and WordPerfect.

Open Document Architecture (ODA):

The ODA is an ISO (International Organization for Standardization) standard for the interchange of compound documents, which are documents that may contain fonts and graphics in addition to text.

The ISO 8613 standard specifies three levels of document representation:

  • Level 1: Text-only data.
  • Level 2: Text and graphical data from a word processing environment.
  • Level 3: Text and graphical data from a desktop publishing environment.

The standard is mainly concerned with preserving the layout and graphics information in the document. That is, a physical connection is taken for granted; it is the logical connection that is being standardized. The primary source for this information is ISO document 8613.

Open Pipe:

A term used to describe the path between sender and receiver in circuit-switched and leased-line communications. The intent is to indicate that the data flows directly between the two locations (through the open pipe), rather than needing to be broken into packets and routed by various paths.

Open System:

Generally, a system whose specifications are published and made available for use, in order to make it easier to establish a connection or to communicate with the system. This is in contrast to a closed, or proprietary, system. Within the context of the OSI Reference Model, an open system is one that supports this model for connecting systems and networks. Also see Open Systems Interconnection Reference Model.

Open System Architecture:

A protocol architecture, such as the AppleTalk or TCP/IP protocol architectures, whose specifications are openly published so that developers can implement the protocols on other computer platforms or can replace protocols at any layer with different ones.

Open System Message Exchange (OSME):

OSME refers to an IBM application for exchanging X.400 messages.

Open System Testing Consortium (OSTC):

A European consortium that developed a suite for testing conformance to the 1984 ITU X.400 series of recommendations about MHS (Message Handling System). This suite is used, for example, in the United States to assess conformance to the MHS requirements for GOSIP (Government Open Systems Interconnection Profile) certification. The Corporation for Open Systems (COS) in the United States has developed a similar test suite.

Open Wire:

Open wire lines have been around since the inception of the data communications industry. An open wire line consists of copper wire tied to glass insulators; the insulators are attached to wooden arms mounted on utility poles. Although still in common use throughout the world, open wire lines are quickly being replaced by twisted-pair cables and other transmission media.

Optical Fiber:

Optical fiber consists of thin glass fibers that can carry information at frequencies in the visible light spectrum. The data transmission lines made up of optical fibers are joined by connectors that have very little loss of the signal throughout the length of the data line.

At the sending end of a data circuit, data is encoded from electrical signals into light pulses that travel through the lines at high speeds. At the receiving end, the light is converted back into electrical analog or digital signals that are then passed on to the receiving device. The typical optical fiber consists of a very narrow strand of glass called the core. Around the core is a concentric layer of glass called the cladding. After the light is inserted into the core, it is reflected by the cladding and follows a zig-zag path through the core. The advantage of optical fiber is that it can carry large amounts of information at high speeds in a very small physical space with little loss of signal.

There are three primary types of transmission modes using optical fiber, although many more are being developed. They are:

  • Single mode
  • Step index
  • Graded index


  • Single mode -- Uses fibers with a core radius of 2.5 to 4 microns. Because the radius of the fiber is so small, light travels through the core with little reflection from the cladding. However, it requires very concentrated light sources to get the signal to travel long distances. This mode is typically used for trunk line applications.

  • Step index -- The fiber consists of a core surrounded by a cladding with a lower refractive index for the light. The cable has an approximate radius of 30 to 70 microns. The lower refractive index causes the light pulse to bounce back toward the core. In this type of transmission, some of the light pulses travel straight down the core while others bounce off the cladding multiple times before reaching their destination. This mode is used for distances of one kilometer or less.

  • Graded index -- The fiber has a refractive index that changes gradually as the light travels toward the outer edge of the fiber. The cable has a radius of 25 to 60 microns. The gradual change in refractive index bends the light toward the core instead of just reflecting it. This mode is used for long-distance communication.

Optical Time Domain Reflectometer (OTDR):

In fiber optics, an OTDR is a tool for testing the light signal. An OTDR can analyze a cable by sending out a light signal and then checking the amount and type of light reflected back.

Oracle7 RDBMS Server:

The Oracle7 server is a full-featured Relational Database Management System (RDBMS) that is ideally suited to support sophisticated client/server environments. Many features of the Oracle7 internal architecture are designed to provide high availability, maximum throughput, security, and efficient use of its host's resources. Although all these features are important architecturally for a database server, Oracle7 also contains the following language-based features that accelerate development and improve the performance of server-side application components:

  • PL/SQL language -- A major component of the Oracle7 server is its PL/SQL processing engine. (The PL stands for Procedural Language.) PL/SQL is Oracle's fourth-generation language that incorporates structured procedural language elements with the SQL language. PL/SQL is designed specifically for client/server processing in that it enables a PL/SQL program block containing application logic as well as SQL statements to be submitted to the server with a single request.
  • By using PL/SQL, you can significantly reduce the amount of processing required by the client portion of an application and the network traffic required to execute the logic. For example, you might want to execute different sets of SQL statements based on the results of a query. The query, the subsequent SQL statements, and the conditional logic to execute them can all be incorporated into one PL/SQL block and submitted to the server in one network trip.

    Not only can PL/SQL be processed by the Oracle7 server, but it can also be processed by SQL*Forms and Oracle Forms. PL/SQL is used extensively by these tools for client-based procedures and event trigger routines. In a client/server environment, PL/SQL is extremely flexible because the language used by the client is interchangeable with that used by the server. Some extensions in the client language syntax allow for control of interface components, reference to form objects, and navigation.

  • Stored Procedures -- Although version 6 of Oracle supported server-based PL/SQL, Oracle7 provides the capability to store PL/SQL blocks as database objects in the form of stored procedures, functions, and database packages. Now, portions of the application logic, especially those requiring database access, can reside where they are processed -- on the server. Using stored procedures significantly increases the efficiency of a client/server system for several reasons:
    • Calling a stored procedure from a client application generates minimal network traffic. Rather than the application submitting an entire PL/SQL program block from the client, all that is required is a single call to the procedure or function with an optional parameter list.
    • Stored procedures provide a convenient and effective security mechanism. One of the characteristics of stored PL/SQL is that it always executes with the privilege domain of the procedure owner. This enables non-privileged users to have controlled access (through the procedure code) to privileged objects, and usually serves to reduce the amount of grant administration that the DBA (Database Administrator) must do.

    • Both the compiled and textual forms of stored procedures are maintained in the database. Because the compiled form of the procedure is available and readily executable, the need to parse and compile the PL/SQL at run time is eliminated.
  • Database Triggers -- Database triggers resemble stored procedures in that they are database-resident PL/SQL blocks; the difference between the two is that triggers are fired automatically by the RDBMS kernel in response to a commit time event (such as an insert, update, or delete operation). You can use triggers to enforce complex integrity checking, perform complex auditing and security functions, and implement application alerts and monitors. Like stored procedures, database triggers greatly reduce the amount of code and processing that is necessary in the client portion of an application.
  • Oracle7's implementation of database triggers is slightly different from that of other vendors. Although most databases support statement-level triggers, Oracle7 also includes functionality to fire triggers at the row level. Consider an UPDATE statement that affects values in a set of 100 rows. The kernel would fire a statement-level trigger once -- for the UPDATE statement (either before and/or after the statement executes). Row-level triggers, on the other hand, are fired by the kernel for each row that the statement affects -- in this case, 100 times. Oracle7 enables statement-level and row-level triggers to be used in conjunction with one another.

  • Declarative Integrity -- When you define a table in Oracle7, you might include integrity constraints as part of your table definition. Constraints are enforced by the server whenever records are inserted, updated, or deleted. In addition to using referential integrity constraints that enforce primary and foreign key relationships, you can also define your own constraints to control the value domains of individual columns (attributes) within a table.
  • Server-enforced integrity reduces some of the code required for validation by the client and also increases the robustness of the business model defined within the database. With constraints, you can often improve performance and provide the flexibility to support multiple front-end interfaces.

  • User-Defined Functions -- You'll also find PL/SQL blocks in user-defined functions. User-defined functions are similar to stored procedures and also reduce the amount of application code in the client portion of an application. Not only can you call these functions from PL/SQL, but you can also use them to extend the set of standard Oracle SQL functions. You can place user-defined functions in SQL statements just as you would any other Oracle SQL function.
  • Designing your Oracle application to make use of these server-based features not only improves the performance of a client/server system but also makes the task of developing and deploying an application easier.
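The round-trip savings from server-based logic can be illustrated in a language-neutral way. The sketch below is not Oracle code; the Connection class and its methods are invented for illustration, counting one network round trip per client call:

```python
# Illustrative sketch only: the Connection class below is invented (it is not
# an Oracle API). It counts one network round trip per client call to show
# why moving logic into a stored procedure reduces traffic.

class Connection:
    """Toy stand-in for a client/server database connection."""

    def __init__(self):
        self.round_trips = 0

    def execute(self, statement):
        self.round_trips += 1   # each submitted statement crosses the network once
        return statement

    def call_procedure(self, name, *args):
        self.round_trips += 1   # one trip runs the whole server-side block
        return (name, args)

# Client-resident logic: one round trip per statement.
client_side = Connection()
for account_id in range(5):
    client_side.execute(f"UPDATE accounts SET flag = 1 WHERE id = {account_id}")
print(client_side.round_trips)   # 5

# Same logic moved into a (hypothetical) stored procedure: a single trip.
server_side = Connection()
server_side.call_procedure("flag_accounts", 5)
print(server_side.round_trips)   # 1
```

The ratio grows with the amount of conditional logic involved: everything that can be decided on the server costs nothing on the wire.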

Order of Magnitude:

An order of magnitude refers to a change in a numerical value that is a multiple of the original, or reference, value. In decimal systems, changes that are powers of 10 are commonly used as orders of magnitude. Thus, A and B differ by one order of magnitude if one is 10 times the other; they differ by two orders of magnitude if one is 100 times the other. Note that A and B are still said to differ by an order of magnitude even if one is, say, 90 times the other. For some computations, powers of 1,000 (10^3) are used as (decimal) orders of magnitude.

The order of magnitude is determined by the base being used. Thus, in a binary system, powers of 2 determine orders of magnitude. The table "Prefixes for Selected Orders of Magnitude" lists some of the prefixes used.

Note that the orders of magnitude in the table are referenced to powers of two. That is, a "mega" is defined as 2^20 (1,048,576), rather than as 10^6 (1,000,000 exactly). Both binary and decimal references can be used. The context will determine which is more appropriate. For example, binary values are more meaningful when speaking of storage or memory quantities; decimal values are more meaningful when speaking of time or frequency values.

PREFIXES FOR SELECTED ORDERS OF MAGNITUDE:

  PREFIX   NAME     2^x       10^y      TERM
  B        Bronto   x = 70    y = 21    Sextillions
  E        Exa      x = 60    y = 18    Quintillions
  P        Peta     x = 50    y = 15    Quadrillions
  T        Tera     x = 40    y = 12    Trillions
  G        Giga     x = 30    y = 9     Billions
  M        Mega     x = 20    y = 6     Millions
  k        kilo     x = 10    y = 3     Thousands
  m        milli    x = -10   y = -3    Thousandths
  µ        micro    x = -20   y = -6    Millionths
  n        nano     x = -30   y = -9    Billionths
  p        pico     x = -40   y = -12   Trillionths
  f        femto    x = -50   y = -15   Quadrillionths
  a        atto     x = -60   y = -18   Quintillionths
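The prefix arithmetic can be checked with a short script. The helper function is our own (not a standard library routine); it counts orders of magnitude in an arbitrary base:

```python
# Count how many orders of magnitude separate two values, in a given base.
import math

def orders_of_magnitude(a, b, base=10):
    """How many orders of magnitude (in `base`) separate a and b."""
    return abs(round(math.log(a / b, base)))

print(orders_of_magnitude(1000, 10))              # 2 (1000 is 100 times 10)
print(orders_of_magnitude(2**20, 2**10, base=2))  # 10 (binary orders)

# "Mega" read as a binary prefix versus a decimal prefix:
print(2**20)   # 1048576
print(10**6)   # 1000000
```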


Originate Mode:

In communications, the originate mode is the mode of the device that initiates the call and waits for the remote device to respond. Compare with Response Mode.

OS Kernel:

The core portion of an operating system. The kernel provides the most essential and basic system services (such as process and memory management).

OSI Model:

The Open System Interconnection (OSI) reference model for describing network protocols, devised by the International Standards Organization (ISO); divides protocols into seven layers to standardize and simplify protocol definitions.

It is a model for the modularization of network protocols and their functions. Each layer communicates only with the layer immediately above and below it.

The OSI Reference Model Seven Layers:

  • Layer 1 -- Physical Layer
  • Layer 2 -- Data Link Layer
  • Layer 3 -- Network Layer
  • Layer 4 -- Transport Layer
  • Layer 5 -- Session Layer
  • Layer 6 -- Presentation Layer
  • Layer 7 -- Application Layer


In an effort to standardize a way of looking at network protocols, the ISO created a seven-layer model that defines the basic network functions. In many network operating systems, you can pigeonhole each protocol into one layer of this reference model. In other cases, it's not so easy. Occasionally, a protocol spans more than one layer of the model. In still other cases, some layers may be missing entirely. But once you categorize the protocols according to the OSI Reference Model, you'll find it easier to compare the component functions of the various networks.

Two important principles are at the heart of the OSI Reference Model:

  • First --
  • There's the concept of open systems. Each layer of the model is assigned specific network functions, which means that two different networking systems that support the functions of a selected OSI layer can exchange data at that level. However, this is more often true on paper than in practice.

  • Second --
  • The OSI Reference Model depends on the concept of peer-to-peer communications. What this means is that data created by one layer in the OSI Reference Model (such as the network layer) and transmitted to another device on the network pertains only to the same layer on that device. In other words, intervening layers do not alter the data; the other layers simply add to the data found in a packet to perform their assigned functions on the network.
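Peer-to-peer communication can be pictured as each layer wrapping the data from the layer above in its own header, with the matching layer on the receiving side stripping exactly that header. The following is a minimal sketch; the layer names follow the OSI model, but the bracketed header format is invented for illustration:

```python
# Minimal encapsulation/decapsulation sketch. Header format is invented.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link", "physical"]

def send(data):
    """Encapsulate top-down: each layer adds its header, never altering the payload."""
    for layer in LAYERS:
        data = f"[{layer}]{data}"
    return data

def receive(frame):
    """Decapsulate bottom-up: each layer removes only its peer's header."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}]"
        assert frame.startswith(prefix)   # header added by the peer layer
        frame = frame[len(prefix):]
    return frame

wire = send("hello")
print(wire)
# [physical][data-link][network][transport][session][presentation][application]hello
print(receive(wire))   # hello
```

Note that `receive` only ever looks at the outermost header at each step, mirroring the rule that intervening layers do not alter the data belonging to other layers.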

Protocol Suites:

Protocol suites are designed in distinct layers to make it easier to substitute one protocol for another. You can say that protocol suites govern how data is exchanged above and below each protocol layer. (In fact, the graphical representation of these protocols in vertical layers is why protocol suites are sometimes called protocol stacks.) When protocols are designed, specifications set forth how a protocol exchanges data with a protocol layered above or below it. As long as you follow those specifications, you can substitute a new, supposedly better, protocol for one currently in the suite without affecting the general behavior of the network.

Details of the OSI Model:

Here is how you can think of the OSI layers by considering the following questions associated with each of those layers:

  • Layer 7 -- The Application Layer: What data do I want to send to my partner?
  • Layer 6 -- The Presentation Layer: What does the data look like?
  • Layer 5 -- The Session Layer: Who is the partner?
  • Layer 4 -- The Transport Layer: Where is the partner?
  • Layer 3 -- The Network Layer: Which route do I follow to get there?
  • Layer 2 -- The Data Link Layer: How do I make each step in that route?
  • Layer 1 -- The Physical Layer: How do I use the medium for that step?

  • Layer 7 -- Application Layer (What data do I want to send?) consists of the software we are using, such as a word processing, spreadsheet, or graphics program. The tasks at this layer also include file transfer, electronic mail (e-mail) services, and network management. When a computer is sending data over the network, the data starts at the Application Layer and works its way down to the Physical Layer (Layer 1), where it is placed onto the transmission medium.
  • Application-layer services are much more varied than the services on the lower layers, because the entire gamut of application and task possibilities is available at this layer. The specific details depend on the framework or model being used. For example, there are several network management applications. Each of these provides services and functions specified in a different framework for network management.

    Programs can get access to the application-layer services through Application Service Elements (ASEs). There are a variety of such ASEs, each designed for a class of tasks.

    To accomplish its tasks, the application layer passes program requests and data to the presentation layer, which is responsible for encoding the application layer's data in the appropriate form.

    Application Layer Protocols -- Not surprisingly, application programs are found at this layer. Also found here are network shells, which are the programs that run on workstations and that enable the workstation to join the network. Actually, programs such as network shells often provide functions that span, or are found at, multiple layers. For example, NETX, the Novell NetWare shell program, spans the top three layers.

    Programs and protocols that provide application-layer services include the following:

    • NICE (Network Information and Control Exchange), which provides network monitoring and management capabilities
    • FTAM (File Transfer, Access, and Management), which provides capabilities for remote file handling
    • FTP (File Transfer Protocol), which provides file transfer capabilities
    • X.400, which specifies protocols and functions for message handling and e-mail services
    • CMIP, which provides network management capabilities based on a framework formulated by ISO
    • SNMP, which provides network management within a non-OSI framework. This protocol does not conform to the OSI model, but does provide functionality that is specified within the OSI model
    • TELNET, which provides terminal emulation and remote login capabilities. TELNET's capabilities go beyond the application layer
    • rlogin, which provides remote login capabilities for UNIX environments
  • Layer 6 -- The Presentation Layer (What does the data look like?) contains an extensive set of common functions that application programs can use. It is responsible for presenting information in a manner suitable for the applications or users dealing with the information. Functions such as data conversion from EBCDIC to ASCII (or vice versa), use of special graphics or character sets, data compression or expansion, and data encryption or decryption are carried out at this layer.
  • The presentation layer provides services for the application layer above it, and uses the session layer below it. In practice, the presentation layer rarely appears in pure form. Rather, application- or session- layer programs will encompass some or all of the presentation-layer functions.

  • Layer 5 -- The Session Layer (Who is the partner?) is responsible for synchronizing and sequencing the dialog and packets in a network connection. This layer is also responsible for making sure that the connection is maintained until the transmission is complete, and ensuring that appropriate security measures are taken during a session (that is, a connection). Functions defined at the session layer include those for network gateway communications.
  • The session layer is used by the presentation layer above it, and uses the transport layer below it.

    Session Layer Protocols -- Session-layer capabilities are often part of other configurations (for example, those that include the presentation layer). The following protocols encompass many of the session-layer functions:

    • ADSP (AppleTalk Data Stream Protocol), which enables two nodes to establish a reliable connection for data transfer
    • NetBEUI, which is an implementation and extension of NetBIOS
    • NetBIOS, which actually spans layers 5, 6, and 7, but which includes capabilities for monitoring sessions to make sure they are running smoothly
    • PAP (Printer Access Protocol), which provides access to a PostScript printer in an AppleTalk network
  • Layer 4 -- The Transport Layer (Where is the partner?) In the OSI Reference Model, the transport layer is responsible for providing data transfer at an agreed-upon level of quality, such as at specified transmission speeds and error rates.
  • To ensure delivery, outgoing packets are assigned numbers in sequence. The numbers are included in the packets that are transmitted by lower layers. The transport layer at the receiving end checks the packet numbers to make sure all have been delivered and to put the packet contents into the proper sequence for the recipient.

    The transport layer provides services for the session layer above it, and uses the network layer below it to find a route between source and destination. The transport layer is crucial in many ways, because it sits between the upper layers (which are strongly application-dependent) and the lower ones (which are network-based).

    Subnet Layers and Transmission Quality -- In the OSI model, the three layers below the transport layer are known as the subnet layers. These layers are responsible for getting packets from the source to the destination. In fact, relay devices (such as bridges, routers, or X.25 circuits) use only these three layers, since their job is actually just to pass on a "signal" or a "packet". Such devices are known as intermediate systems (ISs). In contrast, components that do use the upper layers as well are known as end systems (ESs). See the End System and Intermediate System articles for more information.

    The transmission services provided by the subnet layers may or may not be reliable. In this context, a reliable service is one that will either deliver a packet without error or inform the sender if such error-free transmission was not possible.

    Similarly, the subnet layer transmission services may or may not be connection-oriented. In connection-oriented communications, a connection between sender and receiver is established first. If the connection is successful, all the data is transmitted in sequence along this connection. When the transmission is finished, the connection is broken. Packets in such a transmission do not need to be assigned sequence numbers because each packet is transmitted immediately after its predecessor and along the same path.

    In contrast, in connectionless communications, packets are sent independently of each other, and may take different paths to the destination. With such a communications mode, packets may get there in random order, and packets may get lost, discarded, or duplicated. Before transmission, each packet must be numbered to indicate the packet's position in the transmission, so that the message can be reassembled at the destination.

    Since the transport layer must be able to get packets between applications, the services needed at this layer depend on what the subnet layers do. The more work the subnet layers do, the less the transport layer must do.

    Subnet Service Classes -- Three types of subnet service are distinguished in the OSI model:

    • Type A: Very reliable, connection-oriented service
    • Type B: Unreliable, connection-oriented service
    • Type C: Unreliable, possibly connectionless service

    Transport Layer Protocols -- To provide the capabilities required for whichever service type applies, several classes of transport-layer protocols have been defined in the OSI model:

    • TP0 (Transport Protocol Class 0), which is the simplest protocol. It assumes type A service, that is, a subnet that does most of the work for the transport layer. Because the subnet is reliable, TP0 requires neither error detection nor error correction; because the service is connection-oriented, packets do not need to be numbered before transmission. X.25 is an example of a relay service that is connection-oriented and sufficiently reliable for TP0.
    • TP1 (Transport Protocol Class 1), which assumes a type B subnet; that is, one that may be unreliable. To deal with this, TP1 provides its own error detection, along with facilities for getting the sender to retransmit any erroneous packets.
    • TP2 (Transport Protocol Class 2), which also assumes a type A subnet. However, TP2 can multiplex transmissions, so that multiple transport connections can be sustained over a single network connection.
    • TP3 (Transport Protocol Class 3), which also assumes a type B subnet. TP3 can also multiplex transmissions, so that this protocol combines the capabilities of TP1 and TP2.
    • TP4 (Transport Protocol Class 4), which is the most powerful protocol, in that it makes minimal assumptions about the capabilities or reliability of the subnet. TP4 is the only one of the OSI transport-layer protocols that supports connectionless service.

    Other transport layer protocols include:

    • TCP and UDP, which provide connection-oriented and connectionless transport services, respectively. These protocols are used in most UNIX based networks.
    • SPX (Sequenced Packet Exchange), which is used in Novell's NetWare environments.
    • PEP (Packet Exchange Protocol), which is part of the XNS (Xerox Network Systems) protocol suite from Xerox.
    • VOTS (VAX OSI Transport Service), which is used in Digital Equipment Corporation networks.
    • AEP (AppleTalk Echo Protocol), ATP (AppleTalk Transaction Protocol), NBP (Name Binding Protocol), and RTMP (Routing Table Maintenance Protocol), which are part of the AppleTalk protocol suite.

  • Layer 3 -- The Network Layer (Which route do I follow to get there?) In the OSI Reference Model, the network layer is also known as the packet layer, and is the third lowest layer, or the uppermost subnet layer. It is responsible for the following tasks:
    • Determining addresses or translating from hardware to network addresses. These addresses may be on a local network or they may refer to networks located elsewhere on an internetwork. One of the functions of the network layer is, in fact, to provide capabilities needed to communicate on an internetwork.
    • Finding a route between a source and a destination node or between two intermediate devices.
    • Establishing and maintaining a logical connection between these two nodes, for either connectionless or connection-oriented communication.

    The data is processed and transmitted using the data-link layer below the network layer. Responsibility for guaranteeing proper delivery of the packets lies with the transport layer, which uses network-layer services.

    Network Layer Protocols: Two important classes of network layer protocols are address resolution protocols and routing protocols. Address resolution protocols are concerned with determining a unique network address for a source or destination node.

    Routing protocols are concerned with getting packets from a local network to another network. After the destination network is identified, a path to it must be determined. This path will usually involve just routers, except for the first and last parts of the path.

    Protocols at the network layer include the following:

    • ARP (Address Resolution Protocol), which converts network addresses to hardware addresses.
    • CLNP (Connectionless Network Protocol), which is an ISO-designed protocol.
    • DDP (Datagram Delivery Protocol), which provides connectionless service in AppleTalk networks.
    • ICMP (Internet Control Message Protocol), which is an error-handling protocol.
    • IGP (Interior Gateway Protocol), which is used to connect routers within an administrative domain. This is also the name for a class of protocols.
    • Integrated IS-IS, which is a specific IGP.
    • IPX (Internetwork Packet Exchange), which is part of Novell's protocol suite.
    • IP (Internet Protocol), which is one of the UNIX environment protocols.
    • X.25 PLP (Packet Layer Protocol), which is used in an X.25 switching network.
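The packet sequencing described under the transport layer can be sketched briefly. This is an illustrative sketch, not a real protocol; the (sequence number, chunk) packet format is invented. It shows how a receiver reassembles a message even when a connectionless subnet delivers the packets out of order:

```python
# Sketch of transport-layer sequencing over a connectionless subnet.
# Packet format (sequence_number, chunk) is invented for illustration.

import random

def packetize(message, size):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort by sequence number and concatenate the chunks."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("the quick brown fox", 4)
random.shuffle(packets)          # simulate out-of-order, connectionless delivery
print(reassemble(packets))       # the quick brown fox
```

In a connection-oriented subnet the shuffle step cannot happen, which is exactly why TP0 can omit the sequence numbers that TP4 must carry.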

Out-of-Band Communication:

A type of communication that uses frequencies outside the range being used for data or message communication. Out-of-band communication is generally done for diagnostic or management purposes.

Output Feedback (OFB):

An operating mode for the Data Encryption Standard (DES).

Overhead Traffic:

Network activity that is generated by devices on the network. Examples of overhead traffic include routers updating routing tables and E-mail message alerts. Compare with user-generated traffic.





robert.d.betterton@rdbprime.com



This site is brought to you by
Bob Betterton; 2001 - 2011.

This page was last updated on 09/18/2005
Copyright, RDB Prime Engineering


