
"S" Networking Definitions & Concepts...

Sequenced Packet Exchange (SPX) .. to .. SystemView


Security Basics:
Sequenced Packet Exchange (SPX):
Server Node IDs:
Services Network:
Shannon Capacity:
Shannon-Hartley Law:
Shannon's Theorem:
Shielded Cable:
Simple Network Management Protocol (SNMP):
Socket Number:
Software Prototyping:
Speed of Light:
Structured Query Language (SQL):
Star Topology:
Statistical Multiplexing:
Statistical Time Division Multiplexers (STDM):
System Engineering:

SAN; (Hard-drive):

Storage Area Network, a storage design that connects all the storage devices on a network with all the servers on a network for enhanced reliability and performance.

SAS; (Hard-drive):

Serial Attached SCSI, the serial implementation of the SCSI standard, providing greater flexibility, performance, reliability, and connectivity.

SATA; (Hard-drive):

Serial ATA is an evolutionary replacement for the Parallel ATA physical storage interface.

SATA has been developed as a backward compatible, evolutionary replacement for ATA. Employing a serial technology version of the ATA design, SATA offers compelling technology, performance, and usability benefits for data-intensive applications in direct-attached storage environments. Within the next three years SATA will replace ATA/IDE as the low-cost interface-of-choice.

Serial ATA (SATA) is the next-generation interface standard for low-cost direct-attached storage in desktop PC, workstation, and entry-level server environments. As a serial technology (bits transmitted in a single stream, rather than along parallel paths) SATA eliminates the restrictions on performance, reliability, and scalability that are inherent in today’s parallel ATA (IDE) standard. Because SATA cost-effectively enables RAID protection, is easily scalable, and has a high performance roadmap, it will become the dominant direct-attach storage interface for budget-conscious users.


Scalability:

A system is scalable if it can be made to have more (or less) computational power by configuring it with a larger (or smaller) number of processors, amount of memory, interconnection bandwidth, input/output bandwidth, and mass storage.

SCSI; (Hard-drive):

Small Computer System Interface, the predominant storage I/O technology for high-reliability, high-performance server applications.

Scalable Vector Graphics (SVG):

Scalable Vector Graphics (SVG) is a language for describing two-dimensional static and animated vector graphics in XML.

SVG became a W3C Recommendation in September 2001. It emerged from a long standardization process after Macromedia and Microsoft introduced VML, while Adobe and Sun submitted a competing format known as PGML. SVG is natively supported in the Amaya web browser. In other browsers, a plugin such as Adobe SVG Viewer or Corel SVG Viewer is needed to display SVG images inline, although the images can also be opened in external editors and viewers. Mozilla now supports parts of the W3C SVG standard, but much is still unsupported.

Schemas: (Database)

Pronounced skee-ma, the structure of a database system, described in a formal language supported by the database management system (DBMS). In a relational database, the schema defines the tables, the fields in each table, and the relationships between fields and tables.

Schemas are generally stored in a data dictionary. Although a schema is defined in a text-based database language, the term is often used to refer to a graphical depiction of the database structure.
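A relational schema can be sketched with SQLite's in-memory engine; the table names and fields below are illustrative, not drawn from any particular database:

```python
import sqlite3

# Hypothetical two-table schema: tables, fields, and the relationship
# between them, expressed in SQL (a formal language the DBMS supports).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (
        author_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL
    );
    CREATE TABLE books (
        book_id   INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER REFERENCES authors(author_id)  -- link between tables
    );
""")

# The schema is stored by the DBMS itself and can be queried back out:
rows = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
).fetchall()
print([r[0] for r in rows])
```

Here SQLite's `sqlite_master` table plays the role of the data dictionary that stores the schema.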


S-Curve:

Many companies lack an understanding of the dynamics of competition, and in particular, of the relationship between the effort put into improving a product or process and the results achieved over time. When charted, this relationship appears as the familiar S-curve (see diagram below). At first, as funds are put into development, progress is frustratingly slow. Then, as research uncovers the key pieces of information necessary to make advances, the pace surges. Finally, progress slows down again, and each successive innovation requires a greater outlay of resources.

Ultimately, the S-curve levels off entirely, often as the technology approaches some fundamental limit, for example, the ultimate density of devices that can be squeezed onto a silicon chip. Indeed, it is important to pay attention to such limits, as they are the best clues a manager has for recognizing when a new technology must be developed within the company. It is comparatively easy to see how technological limits will affect the sales of a product that is closely related to the technology, as computers are to silicon chips. It is not so easy, however, when dealing with air travel, say, which combines thousands of technologies. Still, there are usually no more than a handful of technologies that are crucial to a certain product or process, and these are the ones managers should identify and nurture if they hope to anticipate change.

The important thing, therefore, is to spot the technological opportunities. A company should watch its rivals: when one competitor is nearing the top of the S-curve, others are likely to be exploring alternative technologies that could give rise to curves of their own, leading to discontinuities that could take a slower company by surprise. Think about the switch from vacuum tubes to semiconductors, from cloth to paper diapers, and even from conventional tennis rackets to those with enlarged "sweet spots."

Security Basics:

These are the items to consider when looking at different security techniques:

  • Don't base security solely on a user's IP address. IP addresses are easily spoofed and can often change during a user's session (especially in the case of AOL users, because of the way AOL's network works). Additionally, dial-up users most likely won't have the same IP address the next time they dial in and use your application, because most ISPs use DHCP (Dynamic Host Configuration Protocol).
  • Do use SSL (Secure Socket Layer) wherever necessary to encrypt the session between the server and the browser. Because SSL is handled at the web server level and not by the application server or browser, you need to consult the documentation for your particular web server to determine how to set it up.
  • Do require users to choose passwords that aren't easily guessed or found in the dictionary. If possible, require users to choose a password that contains a combination of letters, numbers, and possibly symbols. One way to handle this is by automatically assigning passwords to users. If you let users choose their own passwords, you can still enforce these rules in your login application.
  • Do include error and exception handling in your applications to prevent users from receiving server and application information when an error or exception occurs.
  • Don't store passwords as clear text if you store them in a database or LDAP (Lightweight Directory Access Protocol) directory. Use a hash() function or some other method to obfuscate the password before storing it.
  • Don't pass usernames and passwords from template to template in URLs or as hidden form fields because this increases the potential for compromise. Use session variables to store and pass usernames and passwords from template to template, because they are usually stored in a server's memory and expire when a user's session expires.
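The advice above about never storing clear-text passwords can be sketched in Python. PBKDF2 is one common choice of key-derivation function; the iteration count and sample passwords are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash suitable for storage instead of the clear-text password."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess", salt, stored))                         # False
```

Only the salt and digest are stored; the original password never is, so a database compromise does not directly reveal users' passwords.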
Semantic Link

A semantic link is a typed link where the type carries some semantics. That is, it does not simply describe a characteristic of the link, but describes some external relationship or issue.

An example of a semantic link would be (any syntax expressing) "A is-mother-of B". Some other relations like "B is-child-of A" may be implied by it.

Such links are the basis of a semantic network. The Semantic Web is only one example of these.
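The "A is-mother-of B" example above can be sketched as subject-predicate-object triples; the names and the single inference rule here are illustrative assumptions:

```python
# Typed (semantic) links represented as subject-predicate-object triples.
triples = {("Alice", "is-mother-of", "Bob")}

# A relation such as "is-child-of" can be implied by "is-mother-of":
implied = {(o, "is-child-of", s) for (s, p, o) in triples if p == "is-mother-of"}
triples |= implied

print(sorted(triples))
```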

Semantic Network

A semantic network is often used as a form of knowledge representation. It is a directed graph consisting of vertices which represent concepts and edges which represent semantic relations between the concepts.

Semantic networks are a common form of machine-readable dictionary.

Semantic Web

The Semantic Web is a vision of the future of the World Wide Web. Tim Berners-Lee advocates it, but the Semantic Web builds on many older hypertext systems dating back to the late 1960s, and lately on the convergence of text-and-markup with knowledge representation.

A "semantic" web is one consisting of documents that are put together in such a way that it facilitates automated information gathering and research in a far more meaningful way than can be accomplished with current web search tools. The most basic element is the semantic link.

The usability and usefulness of the Web and its interconnected resources will be enhanced through:

  • documents 'marked up' with semantic information (an extension of the <meta> tags used in today's Web pages to supply information for Web search engines using web crawlers). This could be machine-readable information about the human-readable content of the document (such as the creator, title, description, etc., of the document) or it could be purely metadata representing a set of facts (such as resources and services elsewhere in the site).
  • (Note that anything that can be identified with a Universal Resource Identifier (URI) can be described, so the semantic web can reason about people, places, ideas, cats etc.)

  • common metadata vocabularies (ontologies) and maps between vocabularies that allow document creators to know how to mark up their documents so that agents can use the information in the supplied metadata (so that Author in the sense of 'the Author of the page' won't be confused with Author in the sense of a book that is the subject of a book review).
  • automated agents to perform tasks for users of the Semantic Web using this metadata
  • web-based services (often with agents of their own) to supply information specifically to agents (for example, a Trust service that an agent could ask if some online store has a history of poor service or spamming).

The primary facilitators of this technology are URIs (which identify resources) along with XML and Namespaces. These, together with a bit of logic, form RDF, which can be used to say anything about anything. As well as RDF, many other technologies such as Topic Maps and pre-web AI technologies are likely to contribute to the Semantic Web.

All current web technologies are likely to have a role in the semantic web (in the sense of semantic world wide web), for instance:

  • DOM, the Document Object Model, which provides a set of standard interfaces for accessing XML and HTML document components.
  • XPath, XLink, XPointer
  • XInclude, XML fragments, XML query languages, XHTML
  • XML Schema; RDF, the Resource Description Framework
  • XSL and XSLT, the Extensible Stylesheet Language (and Transformations)
  • SVG, Scalable Vector Graphics
  • SMIL, Synchronized Multimedia Integration Language
  • SOAP, Simple Object Access Protocol
  • DTD, Document Type Definition
  • the concept of metadata.

You can create a piece of RDF code (FOAF) to describe yourself to the Semantic Web using the Friend-of-a-Friend-o-matic...

Also Consider:

  • W3C
  • Topic maps
  • Dublin Core
  • WordNet
  • OWL
  • Cyc
  • Knowledge representation
  • Knowledge technologies
  • Web syndication - RSS and Atom/Echo

External links

  • W3C Semantic Web initiative
  • Tim Berners-Lee's 1998 roadmap paper
  • Scientific American article on The Semantic Web
  • Semantic Web Community Portal
Sequenced Packet Exchange (SPX):

NetWare's transport layer SPX protocol provides a connection-oriented link between nodes. A connection-oriented protocol is one that first establishes a connection between sender and receiver, then transmits the data, and finally breaks the connection. All packets in the transmission are sent in order, and all take the same path. This is in contrast to a connectionless service, in which packets may use different paths.

The SPX protocol ensures that packets arrive at their destination with enough sequence information to reconstruct the message at the receiving end and also to maintain a connection at a specified level of quality. To accomplish this, SPX is responsible for flow control, packet acknowledgment, and similar activities.
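The role of sequence information in reconstructing a message can be sketched in a few lines of Python; this illustrates the idea only, and the 4-byte "packets" are not the real SPX packet format:

```python
import random

# Each packet carries a sequence number so the receiver can rebuild the
# message even if packets are processed out of order.
message = b"NetWare transport demo"
packets = [(seq, message[seq:seq + 4]) for seq in range(0, len(message), 4)]

random.shuffle(packets)  # simulate out-of-order handling

# Sorting by sequence number restores the original byte stream.
reassembled = b"".join(data for seq, data in sorted(packets))
print(reassembled == message)
```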

An unfortunate disadvantage of a connection-oriented protocol arises when a broadcast packet is to be handled. The protocol must establish a connection with every destination before the packets can be sent. This can be a major undertaking, consuming time and resources.

To avoid such a situation, higher-level NetWare protocols such as NCP (NetWare Core Protocol) can bypass SPX and communicate directly with IPX.

Serial Technology (Harddrive):

A design that allows data to be sent one bit at a time. Serial interfaces use thin cables, and are capable of faster speeds, greater reliability, and more flexibility in attaching multiple drives.


Servers:

Network nodes that provide a service to the other nodes in the network, such as shared access to a file system (a file server), control of a printer (a printer server), or storage of messages in a mail system (a mail server).


Services:

A service is a task or operation that is made available through an application or systems program. Operating systems (such as DOS), network operating systems (such as Novell's NetWare), and applications can provide services.

The services that can be provided are limited only by the ability of users and developers to think up new ones. Nevertheless, it is possible to distinguish different classes of service. For example, network services include file services (which control file access and storage), print services, communication services, fax services, archive services, and backup service packages.

A good network operating system (NOS) can provide the entire range of services, either as part of the NOS core or in the form of add-on modules, libraries, or APIs (Application Program Interfaces). The current trend is toward providing highly modular service packages.

According to some analysts, the ultimate outcome will be to make these services independent of particular NOSs, so that developers and possibly even users can create customized service packages.

The concepts of protocol and service are often found together. Specifically, for a given service, there is likely to be a protocol. Standards committees generally create separate specifications for services and protocols.

Server Node IDs:

One of two classes of node ID numbers; server node IDs fall within the range 128 - 254 ($80 - $FE) and are used by network servers (such as printers, spoolers, and file servers).

Services Network:

The services network provides front-end access to ISP basic services for subscribers. Front-end servers at the services network are usually small, configured to replicate and scale horizontally with minimal changes. Front-end servers typically do not contain data and reside behind one or more load balancers, ensuring continuous services to subscribers through traffic distribution and redirection in the event of server failure.

You can install each service such as email, web, or news on separate servers; however, for management purposes, we recommend installing all services onto a single server. The figure below shows service components typically configured at the services network.


Configuring all services on a single server provides the following advantages:

  • Single configuration for all front-end servers
  • Ease of replication for horizontal scalability
  • Minimum configuration and effort required for replacement

Services running at the services network are basic services and some infrastructure services that are too vulnerable to be run at the DMZ network. The following paragraphs describe each service typically placed on the services network.

Web Server

A collection of web servers, called a web farm, are front-end servers that provide access to users' personal web pages. Web content can be either static or dynamic, depending on an ISP's service offering.

Static web content usually resides either locally on the same server as the web server or on a centralized content server, such as an NFS server. Dynamic web content does not reside locally. Instead, it is generated by an application server; the content can reside on an application server or a content server.

In general, hardware configurations for front-end web servers are lightweight and replicable horizontally with minimal configuration change and effort.

Mail Proxy

At the front-end of the DMZ network, one or more mail proxies interface with users for accessing email. For mail retrieval, post office protocol v3 (POP3) and Internet mail access protocol v4 (IMAP4) are offered to subscribers as methods for accessing email. Simple mail transfer protocol (SMTP) is offered for sending mail.

POP3 is popular among ISPs due to the simplicity of the protocol and the modest demand on the mail server (because most of the work is done on subscribers' systems). IMAP4 is increasingly popular among business users due to rich features and functionality, and it is becoming more common among ISPs. Similar to front-end server configuration, hardware required for mail proxies is lightweight and is replicable horizontally with minimal configuration change and effort.


A mail proxy running an IMAP4 server requires more CPU and memory than one running POP3.

News Reader

The news reader is the front-end news server where subscribers read and post news articles. The news reader often does not have local content. It interfaces with the news feeder for news articles, history, and index. Although you can install both news reader and feeder on the same server, a more optimal approach is to functionally decompose the news service into two tiers.

News readers are responsible for service requests at the front end. The hardware required for news readers is lightweight and replicable horizontally with minimal configuration change and effort.

Internal DNS

The internal DNS is for name resolution of hosts on internal networks only. The tier separation of external and internal DNS enhances security.

You can configure internal DNS servers almost anywhere on an ISP's internal network. The most common configuration for internal DNS is placing a primary on the content network and one or more secondary servers on the services, application, or content network.


For security reasons, never place an internal DNS server on an external network such as the DMZ network.

Configure internal secondary DNS servers to be forwarders. All systems on internal networks should have resolvers point to internal secondary DNS servers for name resolution. If an external name needs to be resolved, the internal DNS server forwards the query to an external DNS server.

For systems on internal networks that don't require DNS, we recommend that they be configured to use a local hosts table. This configuration reduces the impact on DNS servers and limits security risks by only opening port 53 where required.

For reliability, strategically place multiple secondary servers on various networks to ensure service access. Couple these servers with front-end load balancers to assure availability, because DNS is critical to an ISP. If designed improperly, the DNS service can become a single point of failure for an entire architecture.

LDAP Replica

LDAP is a centralized method of authentication and authorization. All accesses are authenticated against LDAP, including, but not limited to, RADIUS, FTP, and email. Also, billing systems use LDAP for user provisioning, registration, and customer care.

LDAP is designed for read-intensive purposes; therefore, for optimal performance, direct all LDAP queries (read/search) to replica directory servers. We recommend using a replica for read-only purposes. Even though a master directory server can answer LDAP requests, its system resources are better used for LDAP writes, updates, and data replication.


You can have different directory indexes on the master and replicas. Indexing speeds up searches, but slows down updates. Indexes use more memory and disk.

Each replica directory server is capable of supporting millions of entries and thousands of queries per second. The directory service enables key capabilities such as single sign on (SSO) and centralized user/group management.

Most services access LDAP as read-only, such as email, FTP, and RADIUS. Very few services should access LDAP with read-write permission. Services that usually access LDAP with read-write permission are services such as calendar, webmail, billing, and directory replicas.

Similar to internal DNS servers, directory servers can be configured almost anywhere on an ISP internal network. For security reasons, always place the master directory server on the content network.

The most common configuration for directory replicas is to place them on the network where LDAP queries are intensive, such as the services and content networks. If you design multiple replicas, strategically place them on various internal networks to enhance reliability.


Avoid placing directory replicas on the DMZ network, because it is less secure than the services network or other internal networks, where firewalls provide additional security.

The hardware configuration for a directory replica is usually a multiprocessor system with a large amount of memory. For a directory master, CPU and RAM requirements can be less than directory replicas if the master is dedicated to perform only LDAP writes, updates, and data replication.

Although directory planning and analysis is beyond the scope of this book, we recommend that you approach planning for LDAP by using the following process:

  1. Perform business and technical analysis.
  2. Plan your directory data.
  3. Plan your directory schema.
  4. Plan your directory tree.
  5. Plan your replication and referral.
  6. Plan your security policies.
  7. Plan your indexes.
  8. Evaluate the plans.

For more information, refer to the following publications:

  • Solaris and LDAP Naming Services: Deploying LDAP in the Enterprise,
  • Understanding and Deploying LDAP Directory Services,
  • Implementing LDAP.


DHCP Server

DHCP is required for dynamic network configurations for subscribers' systems. Minimal configurations are hostname, IP address, netmask, domain name, DNS server, and default gateway.

The dynamic configuration of these parameters is important in maintaining a centralized administration environment.

For an ISP environment, DHCP only serves dial-up users. ISP servers do not require DHCP service, and static configuration is preferred.

For redundancy, it's a good idea to have a backup DHCP server on the internal network.

DHCP relay agents can be placed on various networks so that they relay DHCP messages to a DHCP server on the services network. For an ideal configuration, enabling DHCP relay agents at the router eliminates the need for having dedicated DHCP relay agents or servers on every network.

For security reasons, avoid placing a DHCP server on the DMZ network.

Shannon Capacity:

In 1949, Claude Elwood Shannon and Warren Weaver wrote The Mathematical Theory of Communication, which founded the modern discipline of information theory and made it possible to measure the amount of information in a message.

Shannon's law

C = W log2(1 + S/N)


C = bits per second - channel capacity,
W = frequency - bandwidth,
S/N = signal-to-noise ratio,

shows that there is a theoretical maximum amount of information that can be transmitted over a bandwidth-limited carrier in the ever-present background noise.

Since log2(x) = log10(x)/log10(2) and log10(2) ≈ 0.30103, this can also be written as:

C = W log10(1 + S/N)/log10(2), or

C = W log10(1 + S/N)/0.30103
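Shannon's law is straightforward to evaluate; a minimal Python sketch, where the 3.1 kHz / 30 dB voice-channel figures are illustrative assumptions rather than values from this text:

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """Channel capacity C = W * log2(1 + S/N), with S/N as a pure ratio."""
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative example: a 3.1 kHz channel at 30 dB SNR.
snr = 10 ** (30 / 10)              # 30 dB -> ratio of 1000
print(shannon_capacity(3100, snr)) # capacity in bits per second
```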

Shannon-Hartley Law:

The Shannon-Hartley law states that for a communication channel with bandwidth W, and a signal to noise ratio S/N, that the channel capacity C is expressed by the equation:

C = W log2 (1 + S/N)

where S/N is a pure ratio (i.e., not expressed using the decibel scale).


  • If the SNR is 20 dB, and the bandwidth available is 4 kHz, which is appropriate for telephone communications, then:
    C = 4 log2 (1 + 100) = 4 log2 (101) = 26.63 kbit/s. Note that the value of 100 is the appropriate value for an SNR of 20 dB.
  • If it is required to transmit at 50 kbit/s, and a bandwidth of 1 MHz is used, then the minimum SNR required is given by 50 = 1000 log2 (1 + S/N), so S/N = 2^(C/W) - 1 = 0.035, corresponding to an SNR of -14.5 dB. This shows that in some sense it may be possible to transmit using signals below the noise level, using wide-bandwidth communication, as in spread spectrum communications.
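Both worked examples can be checked with a few lines of Python (S/N here is a pure ratio; rates and bandwidths are kept in kbit/s and kHz so the ratio C/W is unchanged):

```python
import math

def required_snr(rate, bandwidth):
    """Minimum S/N (pure ratio) for a target rate: S/N = 2**(C/W) - 1."""
    return 2 ** (rate / bandwidth) - 1

# Example 1: 20 dB SNR (ratio 100) over 4 kHz.
cap = 4 * math.log2(1 + 100)          # about 26.63 kbit/s

# Example 2: 50 kbit/s over 1 MHz (1000 kHz).
snr = required_snr(50, 1000)          # about 0.035, i.e. roughly -14.5 dB
print(round(cap, 2), round(10 * math.log10(snr), 1))
```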

The law is named after Claude Shannon and Ralph Hartley.

Shannon's Theorem:

Shannon's theorem, which concerns information entropy, was proved in 1948 by Claude Shannon. It gives the theoretical maximum rate at which error-free bits can be transmitted over a noisy channel. That any such nonzero rate could exist was considered quite surprising at the time since no scheme was known that could achieve such reliable communication; information theory, as we know it today, was born.

The most famous example of this is for the bandwidth-limited and power-constrained channel in the presence of Gaussian noise, usually expressed in the form C = W log2(1 + S/N), where C is the channel capacity in bits per second, W is the bandwidth in hertz, and S/N is the signal-to-noise ratio.

Shielded Cable:

A wire or circuit enclosed by a grounded metallic material. Shielding serves two purposes. First, it keeps outside electrical disturbances from reaching the wire and disrupting the signals passing over it. Second, shielding keeps the cable from emitting radiation that can disrupt radio and television reception, or that can be captured and interpreted by some unauthorized person or entity.


Signaling:

The use of computers to control switches provides great flexibility in modifying the control and in introducing new features. It also led to the introduction of a separate signaling network to carry the messages between the switching computers.

Simple Network Management Protocol (SNMP):

SNMP is a component of the IP (Internet Protocol) management model. It is the protocol used to represent network management information for transmission. Originally conceived as an interim protocol, to be replaced by the ISO's Common Management Information Service/Common Management Information Protocol (CMIS/CMIP) model, SNMP has proven remarkably durable. In fact, a new and improved version, SNMP version 2, was proposed in 1992.

Two of the authors of SNMPv2, which is just about to be standardized, have asked for an extension from the IETF (Internet Engineering Task Force) in order to get a formal evaluation of a stripped-down alternative to SNMPv2.

SNMP Operation:

SNMP provides communications at the application layer in the OSI Reference Model. It was developed for networks that use TCP/IP (Transmission Control Protocol/Internet Protocol). This protocol is simple but powerful enough to accomplish its task. SNMP uses a management station and management agents, which communicate with this station. The station is located at the node that is running the network management program.

SNMP agents monitor the desired objects in their environment, package this information in the appropriate manner, and ship it to the management station, either immediately or upon request.

In addition to packets for processing requests and moving packets in and out of a node, the SNMP includes traps. A trap is a special packet that is sent from an agent to a station to indicate that something unusual has occurred.

The whole idea behind the Simple Network Management Protocol is to specify a mechanism for network management that is complete, yet simple. Essentially, information is exchanged between agents, which are the devices on the network being managed, and managers, which are devices on the network through which the management is done. The terms agent and manager are operative when discussing network management rather than client and server -- just as a client can also be a server, so an agent can also be a manager. Since clients and servers may also be, at times, agents and managers, the more general terms are usually avoided when discussing network management.

Items of interest to the manager include things like the current status of a network interface on a router, the volume of traffic being passed by a router, how many datagrams have been dropped recently, or how many error messages have been received by a router. The network manager may want to disable a network link, or reroute traffic around a downed router, or even reboot a router or gateway.

There are a lot of possible transactions between the manager and the agent, and they may vary widely with the different possible types of devices that can be agents. Attempting to implement all the different commands that a manager could possibly send to an agent would be very difficult, particularly for new devices. Instead of attempting to recreate every possible command, SNMP simplifies matters by forcing different commands to be expressed as values that are stored in the device's memory. For example, instead of including a "down link" command to close a network link on a router, SNMP agents maintain a variable in memory that indicates whether a link is up or down stored along with information about each of the router's network links. To down any given link, a manager simply sends the value corresponding to "down" into the link status variable.

Possible transactions between agent and manager are limited to a handful: the manager can request information ("get" and "get-next") from the agent or it can modify information ("set") on the agent. Under certain specific circumstances, the agent will notify the manager of a change in status ("trap") on the agent.
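The variable-based model described above can be sketched as a toy agent. The variable names resemble MIB-II conventions but are used here purely for illustration, not as a real MIB:

```python
# A toy SNMP-style agent: commands are expressed as values written into
# named variables rather than as explicit operations like "down link".
class ToyAgent:
    def __init__(self):
        self.variables = {
            "ifAdminStatus.1": "up",  # link state, stored as data
            "ifInErrors.1": 0,        # a simple counter variable
        }

    def get(self, name):
        """Manager's "get": read a variable's current value."""
        return self.variables[name]

    def set(self, name, value):
        """Manager's "set": writing "down" here is how a link is downed."""
        self.variables[name] = value

agent = ToyAgent()
agent.set("ifAdminStatus.1", "down")  # the manager "downs" the link via a set
print(agent.get("ifAdminStatus.1"))
```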

Some of the data to be retrieved or changed are stored as simple variables, like error message counters, but other information is stored in tables, like interface data that include hardware addresses, IP addresses, hardware type, and more, for each network interface.

By keeping the implementation of the protocol fairly simple through limited commands, the barriers to implementing SNMP on a device are kept low, which also means that it can be implemented widely, thus making it more useful.

Another implementation issue for any network management protocol is whether to have agents be active and transmit updates about their status on a regular basis or have them be passive and polled periodically by the manager to check on their status. Each has its own drawback. When agents are passive, major problems may not be detected in a timely way if the manager doesn't check frequently enough, and undue load on the network may result if the manager checks too frequently. On the other hand, forcing agents to report status changes puts pressure on the network device's computing resources and can stress the network further when a problem occurs.

As stated above, SNMP permits the use of traps from agents to signal changes to managers, but the model encourages the use of a single trap to be sent when an important event occurs and relies on the manager to request further relevant information from the agent.

Reliability is another issue for network management. It might seem that a reliable protocol like TCP should be specified to make sure that management information gets passed reliably between agent and manager. However, UDP (User Datagram Protocol) is the TCP/IP protocol used for SNMP, for reasons that go beyond the fact that most SNMP exchanges are request/response pairs. One of the most important functions for network management is to resolve problems that occur with transmitting or routing network traffic. Network management information is more important at times of network failure or reduction in service than at any other time, which also happens to be the time that reliable protocols like TCP are most likely to fail to connect. These are also the times when any extra load on the network is least welcome. Finally, it should be recalled that a protocol may be reliable, but if the link over which it is being sent has been severed, no data will get through.

SNMP Community:

The SNMP community is the component of the IP network management model that uses management stations and agents. A management agent may be polled by one or more management stations. An SNMP community is a way of grouping selected stations with a particular agent in order to simplify the authentication process the agent must go through when polled. Each community is given a name that is unique for the agent.

The community name is associated with each station included, and it is stored by the agent. All members of an SNMP community share the same authentication code and access rights. They may also share the same SNMP MIB (Management Information Base) view, which is a selective subset of the information available in the agent's MIB. A MIB view may be defined for a single station or for all the stations in an SNMP community.

An agent may have multiple communities, stations may be in more than one community for a single agent, and a station may be part of communities associated with different agents.

By creating and using SNMP communities and MIB views, agents can simplify their work, thereby speeding up network response.
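One way to picture the simplification (purely illustrative, not a real SNMP implementation) is an agent keeping a single table keyed by community name, so that one lookup yields both the member stations and their shared access rights; the community names and station names below are invented examples:

```python
# Illustrative sketch: an agent's community table. Each community groups
# stations that share the same access rights (and, optionally, MIB view).
communities = {
    "public":  {"stations": {"mgr1", "mgr2"}, "access": "read-only"},
    "private": {"stations": {"mgr1"},         "access": "read-write"},
}

def authorize(station, community):
    """Return the community's access rights if the station is a member,
    otherwise None -- one lookup instead of per-station authentication."""
    entry = communities.get(community)
    if entry and station in entry["stations"]:
        return entry["access"]
    return None

print(authorize("mgr2", "public"))   # read-only
print(authorize("mgr2", "private"))  # None: not a member of that community
```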

Simple Object Access Protocol (SOAP):

SOAP (an acronym for the Simple Object Access Protocol) is a light-weight protocol for exchanging messages between computer software, typically in the form of software components. The word object implies that its use should adhere to the object-oriented programming paradigm.

SOAP is an extensible and decentralized framework that can work over multiple computer network protocol stacks. Remote procedure calls can be modeled as an interaction of several SOAP messages. SOAP is one of the enabling protocols for Web services.

SOAP is used to encode the information in Web service request and response messages before sending them over a network. SOAP messages are independent of any operating system or protocol and may be transported using a variety of Internet protocols, including SMTP, MIME, and HTTP.

SOAP can be run on top of all the Internet Protocols, but HTTP is the most common and the only one standardized by the W3C. SOAP is based on XML, and its design follows the "Head-Body" Pattern, not unlike HTML. The head contains meta-information like information for routing, security and transactions. The body transports the main information.
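The Head-Body pattern can be sketched with Python's standard XML tools. The envelope namespace below is the standard SOAP 1.1 one, but the application namespace, the TransactionID header, and the GetQuote operation are invented placeholders, not part of any real service:

```python
import xml.etree.ElementTree as ET

# Sketch of a SOAP message following the "Head-Body" pattern.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"  # SOAP 1.1 envelope
APP_NS = "http://example.com/stock"                    # hypothetical namespace

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")  # meta-information
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")      # main information

# Routing, security, and transaction data belong in the header...
ET.SubElement(header, f"{{{APP_NS}}}TransactionID").text = "42"

# ...while the actual request payload goes in the body.
request = ET.SubElement(body, f"{{{APP_NS}}}GetQuote")
ET.SubElement(request, f"{{{APP_NS}}}Symbol").text = "IBM"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

The resulting XML would typically be posted over HTTP to the service endpoint.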

Slot: (UML)

A specification that an entity modeled by an instance has a value or values for a specific structural feature. A slot defined on an object corresponds to an attribute defined on a class.


Sneakernet:

This is a common term denoting a non-network method for transferring files between computers. With sneakernet, you transfer the files by walking -- in your sneakers, of course -- from computer to computer with a floppy disk containing the files. The term is variously called Nikenet, Adidasnet, or Reboknet -- pick your favorite shoe manufacturer.


Socket:

The abstraction provided by Berkeley 4BSD UNIX that allows an application program to access the TCP/IP protocols. An application opens a socket, specifies the service desired (e.g., reliable stream delivery), binds the socket to a specific destination, and then sends or receives data.
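The open/bind/send-receive sequence can be sketched in Python, whose socket module exposes the Berkeley sockets API. The tiny loopback echo server here is purely illustrative, included only so the example is self-contained:

```python
import socket
import threading

def echo_server(server_sock):
    """Accept one connection and echo its bytes back."""
    conn, _ = server_sock.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

# Server side: open a socket, bind it, and listen.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,)).start()

# Client side: open a socket, specifying the service desired
# (SOCK_STREAM = reliable stream delivery), connect to a destination,
# then send and receive data.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, socket")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # hello, socket
```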

It could also be considered an addressable entity within a node connected to an AppleTalk network; sockets are "owned" by software processes known as socket clients that create them. For example, if a communications program obtains a socket for receiving messages, that socket can only be used to receive messages for that program. Messages that arrive for some other program arrive through their own sockets. The software processes that own sockets are the socket clients.

AppleTalk sockets are divided into two groups, statically assigned sockets (SAS), which are reserved for clients such as AppleTalk core protocols, and dynamically assigned sockets (DAS), which are assigned dynamically by DDP (Datagram Delivery Protocol) upon request from clients in the node. The DDP is responsible for ensuring delivery of datagrams between AppleTalk sockets.

Sockets Direct Protocol (SDP):

The Sockets Direct Protocol (SDP) is an industry-standard high-speed protocol used with InfiniBand networks. The InfiniBand network takes the messaging burden off the CPU and places it on the network hardware, reducing the overhead of TCP/IP and providing increased bandwidth.

Socket Number:

The number that identifies a particular socket. Each node may have the capability to allocate many sockets. The socket number identifies which one is being used by a particular software process.

Software Prototyping:

A software prototype is a preliminary version or a model of all or part of a system before full commitment is made to develop it (IT-STARTS, 1989a). A software prototype can also be part or all of a system that is developed and delivered using an iterative approach in which users are involved. Prototyping is the process of creating a prototype.

The objective of creating prototypes is to assist, in some way, the development of target or delivered systems. Major issues of software development that can be addressed by prototyping are elicitation, demonstration, and evaluation of the following:

  • Data requirements and structure.
  • Function requirements.
  • Operation and performance.
  • Organizational needs and issues.

In normal usage, a prototype is a trial model or a preliminary version of a product. In conventional engineering, prototyping at reduced scale, with simplified versions of products or with a pre-production product, is a long-established tradition. The idea of producing a building, a bridge, an automobile, or an airplane without a prototype or model is almost inconceivable. Prototyping at an early stage in the development of a product allows evaluation and adjustment before the design is finalized.

SONET (Synchronous optical networking):

SONET (Synchronous Optical Networking) is a standard for communicating digital information over optical fiber. It was developed to replace the PDH (Plesiochronous [Nearly synchronised] Digital Hierarchy) system for transporting large amounts of telephone and data traffic.

The more recent Synchronous Digital Hierarchy (SDH) standard built on the experience of the development of SONET. Both SDH and SONET are widely used today; SONET in the U.S. and Canada, SDH in the rest of the world.

Synchronous networking differs from PDH in that the exact rates that are used to transport the data are tightly synchronized to network based clocks. Thus the entire network operates synchronously. SDH was made possible by the existence of atomic clocks.

Both SONET and SDH can be used to encapsulate earlier digital transmission standards, such as the PDH standard, or used directly to support either ATM or so-called Packet over SONET networking.

The basic SONET signal operates at 51.840 Mbit/s and is designated STS-1 (synchronous transport signal one). The STS-1 frame is the basic unit of transmission in SONET.

The two major components of the STS-1 frame are the transport overhead and the synchronous payload envelope (SPE). The transport overhead (27 bytes) consists of the section overhead and line overhead; these bytes are used for signaling and for measuring transmission error rates. The SPE consists of the path overhead (9 bytes, used for end-to-end signaling and error measurement) and the payload of 774 bytes. The STS-1 payload is designed to carry a full DS-3 frame.

The entire STS-1 frame is 810 bytes. The STS-1 frame is transmitted in exactly 125 microseconds on a fiber-optic circuit designated OC-1 (optical carrier one). In practice the terms STS-1 and OC-1 are used interchangeably.

Three OC-1 (STS-1) signals are multiplexed by time-division multiplexing to form the next level of the SONET hierarchy, the OC-3 (STS-3), running at 155.52 Mbit/s. The multiplexing is performed by interleaving the bytes of the three STS-1 frames to form the STS-3 frame, containing 2430 bytes and transmitted in 125 microseconds. The STS-3 signal is also used as a basis for the SDH hierarchy, where it is designated STM-1 (synchronous transmission module one).

Higher speed circuits are formed by successively aggregating multiples of slower circuits, their speed always being immediately apparent from their designation. For example, four OC-3 or STM-1 circuits can be aggregated to form a 622.08 Mbit/s circuit designated as OC-12 or STM-4.
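The rate arithmetic above can be checked with a few lines of Python, a sketch using only figures stated in the text (810-byte frames every 125 microseconds, byte-interleaved n-fold for STS-n):

```python
# SONET line rates derived from the STS-1 frame format.
FRAME_BYTES = 810      # bytes per STS-1 frame
FRAME_TIME = 125e-6    # seconds per frame

def sts_rate_mbps(n):
    """Line rate of an STS-n / OC-n signal in Mbit/s."""
    return n * FRAME_BYTES * 8 / FRAME_TIME / 1e6

print(sts_rate_mbps(1))    # about 51.84   (STS-1 / OC-1)
print(sts_rate_mbps(3))    # about 155.52  (STS-3 / OC-3, SDH STM-1)
print(sts_rate_mbps(12))   # about 622.08  (OC-12 / STM-4)
print(sts_rate_mbps(192))  # about 9953.28 (OC-192 / STM-64, ~10 Gbit/s)
```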

The current state of the art is the OC-192 or STM-64 circuit, which operates at a rate of just under 10 Gbit/s. Speeds beyond 10 Gbit/s are not currently technically viable; however, multiple OC-192 circuits can be carried over a single fiber pair by means of Dense Wave Division Multiplexing (DWDM). Such circuits are the basis for all modern transatlantic cable systems and other long-haul circuits.

Due to the fortuitous similarity in bit rates, 10 Gigabit Ethernet has been designed with a capability to interoperate with OC-192/STM-64 equipment.

Source Coding:

Source coding, an area of information theory, covers discrete memoryless sources, Shannon's first (noiseless) coding theorem, Shannon-Fano-Elias coding, Huffman coding, sources with memory, the universal source coding theorem, and Ziv-Lempel coding.
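As one illustration of the techniques listed, a minimal Huffman coder can be sketched in Python with the standard heapq module; the sample string is arbitrary. More frequent symbols receive shorter codewords:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a {symbol: bitstring} prefix code for the sample text."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)          # two lightest subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]       # prefix 0 on the lighter subtree
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]       # prefix 1 on the heavier subtree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

codes = huffman_codes("abracadabra")
# 'a' occurs most often (5 of 11 symbols), so it gets the shortest codeword.
assert min(codes, key=lambda s: len(codes[s])) == "a"
print(codes)
```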

Spatial Resolution:

The spatial resolution of an imaging system describes how fine the details are that it can separate. Several methods for measuring the resolution of a system are in common use: the modulation transfer function (MTF), the point spread function (PSF), the line spread function (LSF), and the edge spread function (ESF). They all rely on characterizing the imaging system as a linear filter, which is only approximately correct in most cases.

Spatial resolution is related to the resolving power needed to distinguish image details. Because image processing is a multidisciplinary field, there are many ways to specify spatial resolution, each of them application oriented.

In remote sensing, it is common to specify the spatial resolution as the size each pixel represents in the real world, using the terms ground resolution element and ground resolution distance. As an example, LANDSAT imagery has resolutions ranging from 30 m to 120 m. In this case, the smaller the resolution distance, the better one can resolve image spatial content.

In medical imaging, resolution is used as in remote sensing, but with millimeters as the standard unit. Typical CT (Computed Tomography) scanners have a pixel size of about 1 mm.

In the document industry, resolution is given as the number of pixels per unit of real-world dimension. As an example, desktop scanners have resolutions of 600 dpi (dots per inch).

What is important is to understand the concept behind the term spatial resolution, no matter which context you are in.

Shown below are two images of the moon sampled at two different resolutions. In order to observe resolution effects, a small image of a ruler is superimposed on both moon images. The ruler measures the number of pixels within an image.

[Figure: two images of the moon, sampled at (a) 20 km/pixel and (b) 10 km/pixel]

There are approximately 27 pixels across the diameter of the lunar crater in the first image, and approximately 55 pixels across the crater in the second image. The image on the right therefore has higher resolution, which means its features can be measured more accurately. If the crater has a diameter of 550 km, the resolution of the image on the left is about 20 km/pixel, while the image on the right has a resolution of 10 km/pixel.
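The arithmetic can be sketched directly as a quick check, using the crater diameter and pixel counts from the text:

```python
# Ground resolution distance = real-world size of a feature
# divided by the number of pixels it spans in the image.
def ground_resolution(size_km, pixels):
    return size_km / pixels

print(round(ground_resolution(550, 27)))  # about 20 km/pixel (left image)
print(ground_resolution(550, 55))         # 10.0 km/pixel (right image)
```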

Speed of Light:

Within computers and networks, of course, data move from place to place as electrical impulses, and these impulses cannot move faster than the speed of light, about 2.998 X 10^10 centimeters per second (roughly 300,000 kilometers per second). While the history of computing and networks is marked by remarkable increases in the speed of processing and transmission, further progress must respect two basic limitations:

  • Data cannot move through computers or networks at speeds exceeding the speed of light, which bounds the speed of electrical signals as well. Since different computer components are physically separated, the speed of light limits how fast data can flow between components. Similar considerations limit how fast data can flow within a single component.
  • Miniaturization of components and circuits is limited by the size of atoms and molecules. Circuits cannot be smaller than molecular size.
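A quick back-of-the-envelope sketch in Python illustrates the first limitation; the 30 cm board dimension is an assumed example, not a figure from the text:

```python
# Even at the speed of light, a signal needs measurable time to cross
# a machine.
C_CM_PER_S = 2.998e10  # speed of light in centimeters per second

def propagation_ns(distance_cm):
    """Minimum signal travel time over a distance, in nanoseconds."""
    return distance_cm / C_CM_PER_S * 1e9

# Crossing a 30 cm board takes about a nanosecond -- already a full
# clock cycle for a 1 GHz processor.
print(propagation_ns(30))
```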

Given the impressive speed and miniaturization of today's machines and network devices, each of these factors implies that the current pace of increases in the speed and power of computers and network equipment cannot continue indefinitely.


Spooler:

A combination of hardware and software that stores documents sent to it over a network and manages the printing of those documents on a printer.


SQE:

The acronym for Signal Quality Error. It is used to tell hosts attached to transceivers that the network is still alive. This signal is generated by the transceivers on an Ethernet network.

SQL (Structured Query Language):

A data manipulation language standardized by ANSI and used in most relational database systems. In the Structured Query Language used to define relational databases, a data item is defined at the same time that a relation is defined.

SQL is an English-like language used to access data in relational database management systems. It is the preeminent query language for mainframe and client-server relational DBMS's. This language combines the flavors of both the algebra and the calculus and is well suited for the specification of conjunctive (joining; connective; joined together) queries.

Although there are numerous variants of SQL, it has become the standard for relational query languages and indeed for most aspects of relational database access, including data definition, data modification, and view definition. SQL was originally developed under the name Sequel at the IBM San Jose Research Laboratory.

The basic building block of SQL queries is the select-from-where clause, which has the form:

select <list of fields>            (attributes to return)
from <list of relation names>      (tables, relations, over whose tuples the query ranges)
where <condition>;

In the queries below, the relation names themselves are used to denote variables ranging over tuples (rows) occurring in the corresponding relation (table). For example, in the first query below, the identifier Movies can be viewed as ranging over tuples (rows) in the relation (table) Movies. Relation (table) name and attribute (field) name pairs, such as Location.Theater, are used to refer to tuple components (fields); the relation name can be dropped if the attribute occurs in only one of the relations in the from clause.

select Director
from Movies
where Title = 'Cries and Whispers';

select Location.Theater, Address
from Movies, Location, Periscope
where Director = 'Bergman'
and Movies.Title = Periscope.Title
and Periscope.Theater = Location.Theater;
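The join query above can be run against a toy in-memory database with Python's built-in sqlite3 module; the sample rows are invented for illustration, while the schema follows the Movies, Location, and Periscope relations used in the text:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE Movies    (Title TEXT, Director TEXT);
    CREATE TABLE Location  (Theater TEXT, Address TEXT);
    CREATE TABLE Periscope (Theater TEXT, Title TEXT);
    INSERT INTO Movies    VALUES ('Cries and Whispers', 'Bergman');
    INSERT INTO Location  VALUES ('Le Champo', '51 rue des Ecoles');
    INSERT INTO Periscope VALUES ('Le Champo', 'Cries and Whispers');
""")

# The three-way join from the text: theaters (and addresses) showing
# a film directed by Bergman.
rows = db.execute("""
    SELECT Location.Theater, Address
    FROM   Movies, Location, Periscope
    WHERE  Director = 'Bergman'
      AND  Movies.Title = Periscope.Title
      AND  Periscope.Theater = Location.Theater
""").fetchall()
print(rows)  # [('Le Champo', '51 rue des Ecoles')]
```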

Star Topology:

A layout scheme in which network devices are arranged so that all are connected to a central controlling device (i.e., main computer, or a microcomputer that is dedicated as a star controller, which is also called a hub topology). That is, it is a network layout in which cable and devices radiate from a central point. For LANs, a device known as an active hub, concentrator, or star controller replaces the main computer at the center of the network.

The central device controls each workstation. For one workstation to communicate with another, the message passes through the central device (i.e., the star controller), also called a hub. The hub or star controller controls access and transmission along all of the connecting lines. For example, the part of a telephone network that includes a local telephone exchange is a star network because communications between any two telephones must first pass through the local exchange. The exchange is, essentially, the hub of that star network.

Star networks are simple to control because all control takes place at one point. Also, problems are easy to isolate because the workstations are not directly connected to one another. However, the two disadvantages of the star network are the cost of the central computer and the vulnerability of the overall network if the hub fails. Unlike bus networks that can still operate if a device fails, star networks fail completely if the hub fails.

State Machine: (UML)

A behavior that specifies the sequences of states that an object or an interaction goes through during its life in response to events, together with its responses and actions.

Statistical Multiplexing:

A multiplexing strategy in which access is provided only to ports that need or want it. Thus, in any given cycle, one node may have nothing to send, while another node may need to get as much access as possible.



Statistical Time Division Multiplexers (STDM):

Statistical time division multiplexers are intelligent devices capable of identifying which terminals are idle and which terminals require transmission, and they allocate line time only when it is required. This allows the connection of many more devices to the host than is possible with FDMs or TDMs. See Frequency division multiplexing (FDM) and Time division multiplexing (TDM).

The STDM consists of a microprocessor based unit that contains all hardware and software required to control both the reception of low-speed data coming in and high-speed data going out. Newer STDM units provide additional capabilities such as data compression, line priorities, mixed-speed lines, host port sharing, network port control, automatic speed detection, internal diagnostics, memory expansion, and integrated modems.

The number of devices that can be multiplexed using STDMs depends on the address field used in an STDM frame. If the address field is 4 bits long, then 16 terminals (2 to the power of 4) can be connected. If 5 bits are used, then 32 terminals can be connected, and so on.
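The address-field arithmetic can be sketched in a couple of lines:

```python
# An n-bit address field distinguishes 2**n attached terminals.
def max_terminals(address_bits):
    return 2 ** address_bits

print(max_terminals(4))  # 16
print(max_terminals(5))  # 32
print(max_terminals(8))  # 256
```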

Stereotype: (UML)

A specialized version of a standard UML element.

Stored Procedure: (Relational Database):

An independent procedural function that typically executes on the database server.

Structural Independence: (Relational Database):

Because the relational database model does not use a navigational data access system, data access paths are irrelevant to relational database designers, programmers, and end users. Changes in the relational database structure do not affect the DBMS's data access in any way. Therefore, the relational database model achieves the structural independence not found in the hierarchical and network database models. (Structural independence exists when it is possible to make changes in the database structure without affecting the DBMS's ability to access the data. In contrast to the relational database, any change in the hierarchical database's tree structure or in the network database's sets will affect the data access paths, thus requiring changes in all data access programs.)

Swimlane: (UML)

A vertical delimiter on an activity diagram used to partition the activities performed by specific responsible parties.


Switch:

A switch is a device that connects material coming in with an appropriate outlet. For example, the input may be packets and the outlet might be an Ethernet bus, as in an Ethernet switch. Or the input might be an electronic mail (e-mail) message in cc:Mail format and the output might be to any of a number of other e-mail formats, as with a mail switch.

A switch needs to have a way of establishing the desired connection, and may also need to translate the input before sending it to an output.

There are two main approaches to the task of matching an input with the desired outlet:

  • In a matrix approach, each input channel has a predefined connection with each output channel. To pass something from an input to an output is merely a matter of following the connection.
  • In a shared memory approach, the input controller writes the material to a reserved area of memory and the specified output channel reads the material from this memory area.
If the connection requires translation, a switch may translate directly or use an intermediate form. For example, a mail switch may use a common format as the storage format. The specified output channel will translate the "generic" format into the format required for the output channel.

In general, switches are beginning to replace earlier, less flexible inter-network links, such as bridges and gateways. For example, a gateway may be able to connect two different architectures, but a switch may be able to connect several.

Because switches do more work than bridges or gateways, switches need more processing power. Switches may have multiple processors, or they may run on a minicomputer for better performance.

Synchronized Multimedia Integration Language (SMIL):

SMIL (pronounced "smile") is an abbreviation for the Synchronized Multimedia Integration Language. It is a W3C Recommendation for describing multimedia presentations using XML. It defines timing markup, layout markup, animations, visual transitions, and media embedding, among other things. Often used in streaming media presentations.

Systems Engineering:

Systems Engineering is an interdisciplinary approach to developing complex systems that satisfy a client mission in an operational environment. Information technology is at the heart of most systems. Software Systems Engineering examines the systems engineering process with special emphasis on computers and software systems. Systems Engineering draws on an overview of system theory and structures, elements of the systems life cycle (including systems design and development), risk and trade-off analyses, modeling and simulation, and the tools needed to analyze and support the systems process. It also draws examples from the information technology domain to illustrate many systems engineering principles.

System Monitor (Oracle):

The system monitor (SMON) process, as the name indicates, performs system monitoring tasks for the Oracle instance. The SMON process performs crash recovery upon the restarting of an instance that crashed. The SMON process determines if the database is consistent following a restart after an unexpected shutdown. This process is also responsible for coalescing [to grow together; fuse] free extents if you happen to use dictionary managed tablespaces. Coalescing free extents [range over which something extends; scope; comprehensiveness] will enable you to assign larger contiguous free areas on disk to your database objects. In addition, the SMON process cleans up unnecessary temporary segments. Like the PMON process, the SMON process sleeps most of the time, waking up to see if it is needed. Other processes will also wake up the SMON process if they detect a need for it.


SystemView:

A comprehensive network management package from IBM. The first parts of SystemView were released in 1990, and components are still being developed. Intended as a replacement for NetView, SystemView is more comprehensive, will support more networking models, and will provide greater flexibility in data presentation than NetView.


This site is brought to you by
Bob Betterton; 2001 - 2011.

This page was last updated on 09/18/2005
Copyright, RDB Prime Engineering
