
"P" Networking Definitions & Concepts...

Pacing .. to .. Proxy Server




Pacing:

In communications, the temporary use of a lower transmission speed. For example, pacing may be used to give the receiver time to catch up and process the data that has already been sent.

Packet:

One unit of information that has been formatted for transmission on the network topology in use. A packet includes user data as well as the control and addressing information needed to send the packet to the correct destination.

Packet Exchange Protocol (PEP):

A transport-level protocol in the XNS (Xerox Network Services) protocol suite from Xerox.

Packet Switching:

Packet switching is a transmission method in which packets are sent across a shared medium from source to destination. The transmission may use any available path, or circuit, and the circuit is available as soon as the packet has been sent. The next packet in the transmission may take a different path.

With packet switching, multiple packets from the same transmission can be on their way to the destination at the same time. Because of the switching, the packets may not all take the same paths, and they may not arrive in the order in which they were sent.

The X.25 telecommunications standard uses packet switching, as do many local- and wide-area networks.

COMPARE: Circuit Switching; Message Switching.

Parallel Technology: (hard drive)

A design that allows a device (hard drive) to receive multiple bits of information at the same time. Parallel interfaces use short, wide cables carrying multiple signals, which imposes inherent design limitations on data transfer speed and on the number of devices that can be connected.

Passive Star:

A network topology in which each wiring run is connected together at a common end. Each wiring run is called a branch, or leg, of the star. Unlike the active star, a passive star has no concentrator at the center. See star topology.

Password Policies:

A good password policy is the first line of defense in protecting a network from intruders. Careless password practices (choosing common passwords, such as "God" or "love" or a user's spouse's name; choosing short, all-alpha, one-case passwords; writing passwords down; or sending passwords across the network in plaintext) are like leaving your car doors unlocked with the key in the ignition. Although some intruders might be targeting a specific system, many others are just "browsing" for a network that is easy to break into. Lack of a good password policy is an open invitation.

Best practices for password creation require that you address the following:

  • Password length and complexity,
  • Who creates the password,
  • Forced changing of passwords,

A few rules of thumb for creating good password policies include:

  • Passwords should have a minimum of eight alphanumeric characters.
  • Passwords should not be "dictionary" words.
  • Passwords should consist of a mixture of alpha, numeric, and symbol characters.
  • Passwords should be created by their users.
  • Passwords should be easy for users to remember.
  • Passwords should never be written down.
  • Passwords should be changed on a regular basis.
  • Passwords should be changed anytime compromise is suspected.
  • Password change policies should prevent users from making only slight changes when creating new passwords.
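
As an illustration, here is a minimal Java sketch of how several of the rules above might be checked programmatically. The length threshold and character classes are assumptions drawn from this list, not part of any standard, and a real policy would also reject dictionary words:

import java.util.regex.Pattern;

public class PasswordPolicy {
    // Thresholds below are assumptions taken from the rules of thumb above.
    private static final int MIN_LENGTH = 8;
    private static final Pattern ALPHA  = Pattern.compile("[A-Za-z]");
    private static final Pattern DIGIT  = Pattern.compile("[0-9]");
    private static final Pattern SYMBOL = Pattern.compile("[^A-Za-z0-9]");

    // Returns true if the candidate satisfies the length and mixture rules.
    // A dictionary-word check is deliberately omitted from this sketch.
    public static boolean isAcceptable(String password) {
        if (password == null || password.length() < MIN_LENGTH) {
            return false;
        }
        return ALPHA.matcher(password).find()
            && DIGIT.matcher(password).find()
            && SYMBOL.matcher(password).find();
    }
}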

In a high security environment, you might need to go beyond the use of just passwords (something you probably already know) in authenticating users to access the network. A multifaceted authentication scheme also requires that users provide something they have (such as smart cards or tokens) and/or something they are, that is, biometric identifiers such as fingerprints or retinal scans.

PC Boot Process:

The PC has a complicated process for starting up the machine. Knowing more about the process will allow users to troubleshoot future boot problems and to develop a better understanding of how the PC works.

Here is the process:

  1. The PC is powered on and the BIOS performs a test of RAM and other motherboard components.
  2. Video cards are initialized and the default video mode is enabled.
  3. The machine looks at its CMOS settings to determine if any system drives (IDE, ESDI, ST-506) are in the computer. CMOS stands for complementary (symmetry) metal-oxide semiconductor. It is a memory chip that permits many components to be packed together in a very small area. The main characteristics of CMOS chips are low power consumption, high noise immunity, and slow speed.
  4. The BIOS initializes all cards, causing drivers to be loaded from these cards and banners to be displayed.
  5. If SCSI cards are encountered, each SCSI card's BIOS will scan its buses and allocate INT13 numbers for any usable drives.
  6. The machine checks for bootable floppies in the floppy drive.
  7. If a system drive is found, the PC will attempt to boot from it.
  8. If no system drives are in the system, the PC will attempt to boot from the first SCSI host adapter card it encounters.
  9. Using the INT13 interface, the startup drive is examined to see if it has a valid boot sector. A valid boot sector consists of two components: boot loader code and a partition information table.
  10. If the startup drive has a valid boot sector, the BIOS executes the boot loader code contained in the boot sector.
  11. The boot loader examines the partition table for an active, or bootable, partition. The first sector of that partition contains operating system-specific boot code and operating system-specific partition information.
  12. The partition boot code is executed and proceeds to load whatever operating system is present on that partition (for DOS, this would be the hidden files IO.SYS and MSDOS.SYS).

PCMCIA:

Personal Computer Memory Card International Association. A joint effort of various special interest groups aimed at setting a standard for memory cards used in PCs. PCMCIA cards add improved computer memory capacity or enhance connectivity to external networks and services.

Peer Networks:

Peer networks are defined by a lack of central control over the network. There are no servers in peer networks; users simply share disk space and resources, such as printers and faxes, as they see fit.

Peer networks are organized into workgroups. Workgroups have very little security control. There is no central login process. If you have logged in to one peer on the network, you will be able to use any resources on the network that are not controlled by a specific password.

Access to individual resources can be controlled if the user who shared the resource requires a password to access it. Because there is no central security trust, you will have to know the individual password for each secured shared resource you wish to access. This can be quite inconvenient.

Peers are also not optimized to share resources. Generally, when a number of users are accessing resources on a peer, the user of that peer will notice significantly degraded performance. Peers also generally have licensing limitations that prevent more than a small number of users from simultaneously accessing resources.

Advantages of Peer Networks:

Peer computers have many advantages, especially for small businesses that cannot afford to invest in expensive server hardware and software:

  • No extra investment in server hardware or software is required
  • Easy setup
  • No network administrator required
  • Ability of users to control resources sharing
  • No reliance on other computers for their operation
  • Lower cost for small networks

Disadvantages of Peer Networks:

Peer networks, too, have their disadvantages, including:

  • Additional load on computers because of resource sharing
  • Inability of peers to handle as many network connections as servers
  • Lack of central organization, which can make data hard to find
  • No central point of storage for file archiving
  • Requirement that users administer their own computers
  • Weak and intrusive security
  • Lack of central management, which makes large peer networks hard to work with

Persistence Models: (Java)

By George Reese

In the object-oriented world of Java development, Java objects are said to persist against a data store. In other words, the objects that make up your Java application save their data to a relational database so that data may be referenced at a later time. The approach you take to mapping your Java objects to the data store is called a persistence model.

These days, Java programmers have many persistence models from which to choose. In this section, we look at many of the most popular persistence models, including:

  • EJB, both container-managed and bean-managed
  • Java Data Objects
  • Third-party tools such as Hibernate and Castor
  • Custom persistence models

Persistence grants immortality to your business applications. Without it, you lose all of your application data every time the server shuts down. Database programming is the development of persistence mechanisms to save an application's state to a relational database. In this writeup, I will cover a variety of persistence mechanisms, but this section introduces the basics through a custom guest book JSP application.

The excellent book Design Patterns by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides (Addison-Wesley) has popularized the concept of design patterns. They are recurring forms in software development that you can capture at a low level and reuse across dissimilar applications. Within any application scope are problems you have encountered; patterns are the result of recognizing common problems and leveraging a common solution.

People have been writing database applications for nearly three decades. Over that time, many best practices have evolved into design patterns. As we explore different modes of persistence in this writeup, we will see many of these patterns over and over again.

Perhaps the most essential element of good persistence design is a clear separation of application logic into the following areas:

View logic: The view logic is responsible for displaying the user interface. It is the user's window into the control and business logic.

Control logic: The control logic handles user actions and decides what should happen based on those actions. It handles data validation and triggers the appropriate business logic on behalf of the user.

Business logic: Business logic[1] encapsulates the basic business concepts behind your application. Business objects provide the view with getter methods to access business data and provide the interface for creating, searching, modifying, and deleting the business objects they support.

[1] You do not need to be writing a business application to have business logic. Business logic is a generic term that refers to any of the basic concepts in your problem domain. If you are building a first-person shooter game, your "business objects" are monsters, weapons, hazards, and the like.

Data access logic: Data access logic maps business objects to the data storage layer. It is the heart of persistence.

Data storage logic: The database engine provides you with this type of logic, which simply ensures your data is not lost at application shutdown.

Separation of logic with dependencies based on simple interfaces is a core principle of object-oriented software engineering. When you capture the essence of a business concept in a business object without burdening it with other logic, you enable it to be reused in other environments. For example, a bank account object that does not contain display or data access logic can be reused with JSP (JavaServer Pages), Swing, and other kinds of frontends. It can also persist against different database engines.

BEST PRACTICE: Divide application functionality into different logical components to facilitate component reuse.

This same principle extends beyond the business object layer. It also makes it easier to divide the work of building software among developers with different skills. With a good tag library, the view developer needs to know only XHTML and your tag library to write the view. The more difficult work of JDBC programming can be easily handed off to an experienced JDBC programmer without having to hand the entire application to a JDBC programmer.

BEST PRACTICE: Divide application logic into multiple tiers to match the complexity of your application.

In almost any database application, you need to generate unique identifiers to serve as primary keys in your database. Most databases have some sort of proprietary mechanism to help you generate sequences. Unfortunately, you cannot port a database application that relies on these proprietary schemes to other databases without changing the code that relies on those schemes.

I always recommend the use of a database-independent approach to sequence generation. Later in this writeup, we develop a sequence generator that will work for most database applications. It stores sequence seeds in the database. When an application needs a new unique number, it requests the unique number from the sequence generator. If the sequence generator has the seed in memory, it uses the following formula to create a new unique number:


unique = (seed * 1000000) + last;
last++;

If the seed is not in memory, it is loaded from the database, incremented, and the incremented value is stored back in the database. When the seed runs out of unique values - when last reaches 1000000[2] - it loads a new seed from the database, increments that new seed, and saves the incremented seed back to the database.

[2] The value 1,000,000 depends on the system. You will want lower numbers for systems with short uptimes and larger numbers for systems with long uptimes.

This approach has several important features:

  • It generates unique values in a distributed environment. Multiple application servers can save business objects to multiple databases and still have the guarantee that the sequences being generated are unique across the entire system.
  • You do not need to go to the database every single time you generate a sequence. You go to the database only every million sequences.
  • The sequences are not tied to a specific table. You can create a sequence that is shared among several tables or even across the entire database. Similarly, you can have multiple values in the same table rely on different sequences.

BEST PRACTICE: Use database-independent sequence generation.
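
A minimal sketch of the generator described above, assuming a hypothetical Sequence(name, seed) table; the table and column names are illustrative. A production version would lock the seed row (for example, with SELECT ... FOR UPDATE inside a transaction) so that concurrent servers never read the same seed:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class Sequencer {
    private static final long MAX_KEYS = 1000000L; // tune to expected uptime

    private final String name; // sequence name, e.g. "personID"
    private long seed = -1L;
    private long last = 0L;

    public Sequencer(String name) {
        this.name = name;
    }

    // Returns the next unique value: unique = (seed * 1000000) + last.
    public synchronized long next(Connection conn) throws SQLException {
        if (seed == -1L || last >= MAX_KEYS) {
            reloadSeed(conn); // seed exhausted or not yet in memory
            last = 0L;
        }
        return (seed * MAX_KEYS) + last++;
    }

    // Loads the stored seed, keeps seed + 1 in memory, and writes the
    // incremented value back so no other process reuses this seed.
    private void reloadSeed(Connection conn) throws SQLException {
        try (PreparedStatement sel = conn.prepareStatement(
                "SELECT seed FROM Sequence WHERE name = ?")) {
            sel.setString(1, name);
            try (ResultSet rs = sel.executeQuery()) {
                if (!rs.next()) throw new SQLException("Unknown sequence: " + name);
                seed = rs.getLong(1) + 1;
            }
        }
        try (PreparedStatement upd = conn.prepareStatement(
                "UPDATE Sequence SET seed = ? WHERE name = ?")) {
            upd.setLong(1, seed);
            upd.setString(2, name);
            upd.executeUpdate();
        }
    }
}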

Mementos

In the division of labor discussed earlier, the data access object needs to know about the state of the object it is persisting. You could pass the business object to the data access object, but doing so would require the data access object to know the intimate details of how the business object is implemented. The memento design pattern from the Design Patterns book comes to the rescue here.

A memento enables one object to share its state with another without either object needing to know anything about the other. Consider a common situation in which you have one class (class A) that references the values of another (class B). If you delete an attribute in class B, class A will no longer compile if it has direct references to the deleted attribute of class B. In general, this behavior is exactly what you want.

Sometimes, especially in mapping objects to a database, you want a looser coupling between two classes. The memento pattern creates this independence. It specifically enables you to make code changes to the business objects and data access objects independently of each other. A change to the business object will not require any changes to the data access object unless you are adding new data elements or removing obsolete ones. The data access object knows that the only changes it will care about come in the form of changes in the data contained in the memento. Similarly, any change to the underlying tables in the database is hidden from the business object. It always passes its state to the data access object and lets the data access object worry about persistence issues.

BEST PRACTICE: Use mementos to pass component state between application tiers.
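
A minimal sketch of the pattern, using hypothetical Account and AccountDAO classes: the business object emits its state as name/value pairs, and the data access object consumes only those pairs, so neither class sees the other's internals:

import java.util.HashMap;
import java.util.Map;

// A memento is just a snapshot of state keyed by attribute name.
class Memento {
    private final Map<String, Object> values = new HashMap<>();

    public void put(String key, Object value) { values.put(key, value); }
    public Object get(String key)             { return values.get(key); }
}

class Account { // business object
    private long id;
    private double balance;

    Account(long id, double balance) { this.id = id; this.balance = balance; }

    // Shares state without exposing the class itself.
    public Memento toMemento() {
        Memento m = new Memento();
        m.put("id", id);
        m.put("balance", balance);
        return m;
    }
}

class AccountDAO { // data access object
    // Persists whatever attributes the memento carries; it still compiles
    // if Account later drops or renames fields it does not care about.
    public void save(Memento m) {
        System.out.println("INSERT id=" + m.get("id")
            + ", balance=" + m.get("balance"));
    }
}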

Object Caching

A database application must use the database as a persistent store, not as a memory store. In other words, you need to pull data from the database and hold it in memory in business objects. If you go to the database every time you want to display some data about a business object, your database application will perform terribly and fail to scale at all.

On the other hand, you don't want to load the entire database in memory and keep it there. If you have a large amount of data, you will quickly run out of memory. It is therefore important to develop an object caching mechanism that strikes a solid balance between memory usage and database access.

In architectures like the EJB architecture, the application server automatically manages caching for you. The Guest Book later in this writeup, however, does not use EJBs. It therefore needs something else to manage caching. It leverages a Cache class that uses a SoftReference to cache objects loaded from the database.

BEST PRACTICE: If you are building your own persistence system, implement an efficient caching scheme to prevent exhausting system resources.

A SoftReference is a special kind of object in java.lang.ref that creates a soft reference to the object it stores. In Java, references between objects are generally strong references. For example:

StringBuffer buffer = new StringBuffer();

The reference to buffer is a strong reference. The strong reference is in force as long as the reference is in scope. If the reference falls out of scope, the object is said to be no longer strongly reachable. It is thus potentially available for garbage collection.

A soft reference is a reference via a SoftReference object. By storing an object indirectly through a SoftReference instead of directly, you make the object available for potential garbage collection while still maintaining the ability to access the object until it is garbage collected.

The Cache class implements the Java Collection interface. It even uses a HashMap internally to store data. When an application loads an object from the database, it can put the object in the cache using the cache( ) method:


public void cache(Object key, Object val) {
    // Wrap the business object in a SoftReference so the garbage
    // collector may reclaim it under memory pressure.
    cache.put(key, new SoftReference(val));
}

This method creates a soft reference around the business object and then stores the soft reference in the internal HashMap. As time goes by and the business object is no longer in use, the soft reference will expire and the memory the business object occupies will be freed. The code that checks for the existence of a specific business object in the cache thus needs to verify that the soft reference has not expired:


public boolean contains(Object ob) {
    Iterator it = cache.values( ).iterator( );

    while( it.hasNext( ) ) {
        SoftReference ref = (SoftReference)it.next( );
        Object item = ref.get( ); // null if the reference has expired

        if( item != null && ob.equals(item) ) {
            return true;
        }
    }
    return false;
}

The get( ) method has to perform similar checks:


public Object get(Object key) {
    SoftReference ref = (SoftReference)cache.get(key);
    Object ob;

    if( ref == null ) {
        return null;
    }
    ob = ref.get( );
    if( ob == null ) {
        // The reference expired; remove the stale entry from the map.
        release(key);
    }
    return ob;
}

A Guest Book Application:

To illustrate these most fundamental persistence concepts, we will use a simple Guest Book JSP application from my web site. You can see this example in action at http://george.reese.name/guestbook.jsp. The Guest Book enables visitors to a web site to leave comments and view the comments left by others. To prevent abuse, the application also includes an administrative approval mechanism. The full code for the Guest Book can be found on O'Reilly's FTP site.

Physical Network Components:

LAN communication functions are typically performed by hardware and firmware that is specifically designed to implement them. The physical components used in a network that supports personal computers include the following:

  • Adapter Card. A network adapter circuit card, purchased from the computer vendor or a LAN vendor, is typically installed in each personal computer that is to be a station on the network. The adapter card contains the hardware and firmware programming that implements the logical link control and media access control functions.
  • Cabling System. The cabling system includes the cable, or wire, used to interconnect the network devices. The cabling system also typically includes attachment units that allow the devices to attach to the cable.
  • Concentrator. Some LAN implementations use concentrators, or access units, that allow network devices to be interconnected through a central point. Attaching devices through a central concentrator typically simplifies the maintenance of the LAN.

The basic wiring alternatives used for most LAN implementations are twisted-wire-pair cable, various types of coaxial cable, and fiber-optic cable. In some instances, existing telephone wiring of adequate quality is already installed in appropriate locations to support the network. Thick, inflexible Ethernet coaxial cable has a reputation for being more difficult to install than twisted-wire-pair cable, although it supports higher speeds than unshielded, lower-quality telephone wire pairs. However, newer forms of coaxial cable used with some network implementations are thinner and easier to handle for new installations than some types of twisted-wire-pair cable.

PING:

Packet InterNet Groper -- The name of a program used with TCP/IP internets to test the reachability of destinations by sending them an ICMP echo request and waiting for a reply. The term is now used as a verb, as in, "please ping host A to see if it is alive."

Ping is a command that operates at a very deep level, in that it sends out ICMP echo request calls. As long as the IP protocol module in the target computer is still active, a reply is sent ('ICMP echo response') showing that the computer/device is reachable and active.

Ping sends a 64-byte packet to the given network device every second and measures the time until the reply is received. Ping is terminated by entry of an abort signal from the keyboard (^C or DEL), whereupon statistics about received and lost packets and the mean measured response time are displayed.
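
In Java, a rough equivalent can be sketched with java.net.InetAddress.isReachable(), which sends an ICMP echo request when the JVM has sufficient privileges and otherwise falls back to a TCP probe. The host name below is a placeholder:

import java.net.InetAddress;

public class PingCheck {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "hostA"; // placeholder host
        InetAddress addr = InetAddress.getByName(host);

        long start = System.nanoTime();
        boolean alive = addr.isReachable(1000); // 1,000 ms timeout
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        if (alive) {
            System.out.println(host + " is alive (" + elapsedMs + " ms)");
        } else {
            System.out.println(host + " did not respond");
        }
    }
}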

Pixelization:

Pixelization is defined as the process of somehow combining multiple data points into a single pixel to plot when there are more data points in a set than pixels across the screen. It is also known as the process of breaking a continuous image into a grid of pixels, sometimes called sampling, scanning, or spatial quantization.
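
A toy sketch of one combining rule (averaging): each screen pixel receives the mean of the data points that fall into its slice of the set. Min/max or first/last are equally common choices:

public class Pixelize {
    // Collapse data.length points into widthInPixels values by averaging
    // the contiguous slice of the data set that each pixel covers.
    public static double[] downsample(double[] data, int widthInPixels) {
        double[] pixels = new double[widthInPixels];
        for (int p = 0; p < widthInPixels; p++) {
            int from = (int) ((long) p * data.length / widthInPixels);
            int to   = (int) ((long) (p + 1) * data.length / widthInPixels);
            double sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            pixels[p] = sum / Math.max(1, to - from);
        }
        return pixels;
    }
}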

Plesiochronous Digital Hierarchy (PDH):

The Plesiochronous Digital Hierarchy (PDH) is a technology used in telecommunications networks to transport large quantities of data over digital transport equipment such as fibre optic and microwave radio systems. The term plesiochronous is derived from Greek plesio, meaning near, and chronos, time, and refers to the fact that PDH networks run in a state where different parts of the network are almost, but not quite perfectly, synchronised.

PDH is now being replaced by SDH equipment in most telecommunications networks.

PDH allows transmission of data streams that are nominally running at the same rate, while allowing some variation in speed around the nominal rate. By analogy, your watch and mine are nominally running at the same rate, clocking up 60 seconds every minute. However, there is no link between our watches to guarantee that they run at exactly the same rate, and it is highly likely that one is running a bit faster than the other.

The European and American versions of the PDH system differ slightly in the details of their working, but the principles are the same. The European system is described here.

The basic data transfer rate is a data stream of 2.048 Megabits/second (usually abbreviated to "2 megs"). For speech transmission, this is broken down into 30 x 64 kilobit/second (abbreviated to "64K") channels plus 2 x 64K channels used for signalling and synchronisation. Alternatively, the whole 2 megs may be used for non speech purposes, for example, data transmission.

The exact data rate of the 2 meg data stream is controlled by a clock in the equipment generating the data. The exact rate is allowed to vary slightly (+/- 50 ppm) either side of an exact 2.048 Megabits/second. This means that different 2 meg data streams can be (and probably are) running at slightly different rates to one another.

In order to move multiple 2 meg data streams from one place to another, they are combined together, or "multiplexed" in groups of four. This is done by taking 1 bit from stream #1, followed by 1 bit from stream #2, then #3, then #4. The transmitting multiplexer also adds additional bits in order to allow the far end receiving multiplexer to decode which bits belong to which 2 meg data stream and so correctly reconstitute the original data streams.
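
A toy sketch of that round-robin interleaving step, ignoring the framing and justification bits a real multiplexer adds:

public class Interleaver {
    // Take 1 bit from stream #1, then #2, #3, #4, and repeat.
    // Assumes equal-length tributaries; real PDH compensates for rate
    // differences with justification ("missing bit") signalling.
    public static boolean[] interleave(boolean[][] streams) {
        int n = streams.length;      // 4 tributaries in PDH
        int len = streams[0].length;
        boolean[] out = new boolean[n * len];
        for (int i = 0; i < len; i++) {
            for (int s = 0; s < n; s++) {
                out[i * n + s] = streams[s][i];
            }
        }
        return out;
    }
}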

Because each of the four 2 Meg data streams is not necessarily running at the same rate, some compensation has to be made. The transmitting multiplexer combines the four data streams assuming that they are running at their maximum allowed rate. This means that occasionally, (unless the 2 meg really is running at the maximum rate) the multiplexer will look for the next bit but it will not have arrived. In this case, the multiplexer signals to the receiving multiplexer that a bit is "missing". This allows the receiving multiplexer to correctly reconstruct the original data for each of the four 2 meg data streams, and at the correct, different, rates.

The resulting data stream from the above process runs at 8.448 Megabits/second (8 Meg). Similar techniques are used to combine four x 8 Meg together, giving 34 Megs. Four x 34 Megs gives 140. Four x 140 gives 565.

565 Megabits/second is the rate typically used to transmit data over a fibre optic system for long distance transport. Recently, telecommunications companies have been replacing their PDH equipment with SDH equipment capable of much higher transmission rates.

Point-to-Point: (hard drive)

Direct connection between the backplane and the storage device, allowing for high-performance, full utilization of bandwidth.

Point-to-Point Protocol (PPP):

In the Internet protocol environment, a protocol for direct communication between two nodes over serial point-to-point links, such as between routers in an internetwork or between a node and a router. PPP is used as a medium-speed access protocol for the Internet. The protocol replaces the older SLIP (Serial Line Internet Protocol).

PPP is a multiprotocol transport mechanism. While SLIP is designed to handle one type of traffic (TCP/IP traffic) at a time, PPP can transport TCP/IP traffic as well as IPX, AppleTalk, and other types of traffic simultaneously on the same connection. However, since SLIP and PPP are typically used to make single TCP/IP connections (to the Internet), the multiprotocol feature of PPP is not usually of benefit.

PPP can negotiate header compression. Streams of packets in a single TCP connection have few changed fields in the IP and TCP headers, so simple compression algorithms can just send the changed parts of the headers instead of the complete headers. This can significantly improve packet throughput. SLIP, on the other hand, does not offer this compression.
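
The following toy sketch illustrates the idea (it is not the actual PPP compression algorithm): the sender transmits only the header fields whose values changed since the previous packet on the connection. In a TCP stream, that is often little more than the sequence and acknowledgment numbers:

import java.util.HashMap;
import java.util.Map;

public class HeaderDelta {
    private Map<String, Integer> previous = new HashMap<>();

    // Returns just the fields that differ from the last header sent.
    public Map<String, Integer> compress(Map<String, Integer> header) {
        Map<String, Integer> delta = new HashMap<>();
        for (Map.Entry<String, Integer> e : header.entrySet()) {
            if (!e.getValue().equals(previous.get(e.getKey()))) {
                delta.put(e.getKey(), e.getValue());
            }
        }
        previous = new HashMap<>(header); // remember state for the next packet
        return delta;
    }
}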

PPP also offers enhanced IP security.

Portal:

A Web portal is a "supersite" on the Internet that provides a comprehensive entry point for a huge array of resources and services. Portals typically contain news, free e-mail services, search engines, online shopping, chat rooms, discussion boards and links to other sites. Excite, Lycos and Yahoo started as simple search engines, but today, these portals let you do just about anything imaginable. Maybe you haven't heard who won the Tigers game last night. Perhaps you're looking for a forum to debate Middle Eastern politics. Or you're traveling to Madrid, Spain, and want to find the best tapas bars, and someone to bring along with you. Whatever you're looking for, you can probably find it in a general Web portal.

A vortal is a jargony way of saying "vertical industry portal." A vortal provides information and resources for a particular industry. You'll find news, research and statistics, online tools, discussions and newsletters pertaining to a particular industry or area of discipline. For example, visit a vortal like FindLaw.com, and you can figure out whether you can fire a lazy underling without getting sued, research how much less you're making than a 25-year-old first-year associate in New York City or find a lawyer to handle your divorce. Or if you're interested in bleeding-edge, hog-butchering technology, check out Meat and Poultry Online (www.meatandpoultryonline.com). While you're there, you can also find great deals on collagen sausage-casings or look into lucrative careers at chicken-packaging plants. A quick trip to VerticalNet.com will give you links to vortals for almost any industry you can imagine, from solid-waste disposal to textiles to fiber optics to human resources.

Corporate portals are internal websites that provide proprietary information to employees, suppliers, partners, clients and stockholders. When they serve only the needs of employees, they're called intranets. When outside parties, like customers or suppliers, are allowed access, they become extranets. Corporate portals let users locate and share knowledge and data, participate in a business process and collaborate. They generally feature search engines for internal materials, and they can be customized for different user groups.

As an example, and simply put, a portal is a virtual desktop or workbench, delivered via the World Wide Web, where the electronic information and services available to an individual member of the community are presented in an accessible, secure, personalized, customizable, and integrated fashion. Since that's quite a mouthful, I'll define each of those characteristics:

Accessible: Portals and their underlying information and services are available from anywhere at any time via the Web through a browser. This includes on campus, at home, or on the road.

Secure: An enterprise portal, such as myNEU, is based on a unique user id (not social security number) and password identifying the individual, their role(s) in our community, and the services or information they are authorized to use.

Personalized: Given that the portal experience is initially defined by an individual's role(s) in our community, the basic capabilities available within the portal for each individual are appropriate to their needs. Examples of roles include faculty, staff, student, parent, alumni, etc. Within each role there can be sub-roles, such as faculty member in a particular college, or a department or staff member in a particular office. The personalization feature of a portal makes it possible for each member of the community to have the tools and information necessary to do their job or to perform their role.

Customizable: Based on the capabilities defined by roles, each individual has the option to customize their portal to meet their individual needs or preferences. This could include turning on or off various services or information channels (e.g., subscribing to NU sports but not wellness bulletins). It could also include adding remote services, such as CNN newswire or the Weather Channel.

Integrated: Because all these capabilities are presented on a single screen and available under a single user id and password, the individual's experience is integrated, providing benefits in the areas of ease of use and security.

An additional characteristic of a portal is that it continually grows in terms of the information available and the services offered. Today, the student version of myNEU includes access to administrative information (Bursar balance, etc.), e-mail, calendaring, and various information channels. In November, we will be adding course drop/add to the student myNEU. Throughout the year, we will continue working with students to define a series of enhancements that will rollout over the balance of the academic year. Some of these enhancements will include online voting for student elections and referendums.

In addition to continuing to enhance the student myNEU portal, we are working with faculty and staff to define the initial characteristics of the faculty & staff portal we plan to launch after the first of the year. We also plan to launch a portal for prospective students toward the end of the academic year. With the introduction of portals to these and additional segments of our community, we will be able to deliver unprecedented levels of service to meet each individual's technology needs.

Other thoughts concerning portals - they are concerned with Varied Audiences, and Varied Views.

Aggregation: Content is drawn from diverse sources, both inside and outside the organization, and function is aggregated from horizontal services such as email, calendaring, PIM, etc.

Integration: Enterprise applications are integrated so that high-value functionality is extracted and presented in a single context, and the IT infrastructure is integrated for secure access and reliable operation.

Presentation: Provide a single consistent interface across diverse content and function. Provide a common user interaction model and API (Application Programming Interface) on which new applications can be built. Deliver a common user experience across different device form factors.

Access: Provide a common access mechanism for users to a wide range of applications (Single Sign-On). Allow different classes of users to have different levels of privileges, mutable and manageable. Provide access in a continuously available, responsive environment.

Personalization: Customize the user experience to fit each user's specified preferences. Allow portal management to tailor the user experience for different classes of users, based on both implicit and explicit preferences.

Administration: Allow multiple organizational units to create and contribute content, and to administer sections of the portal. Allow a central management entity to manage multiple portals across the entire organization.

Best practices in Portal Deployment:

  • Understand what a portal is and what it can do for your enterprise, then sell it.
  • Communicate well-understood business goals.
  • "Tornado" vs. "waterfall" methodology.
  • Undertake iterative development and deployment.
  • Separate vendor hype from reality!
  • Seek professional assistance.
  • Don't underestimate costs!!
  • Make the portal "sticky".

You've heard them referred to as corporate portals, enterprise information portals, and business intelligence portals. Steven Telleen has described them as "brokers." And you may even have debated the merits of "vortals" at a recent dinner party (although I hope not).

In a nutshell, portals provide a single point of access to aggregated information. The portal concept has been applied to general audiences on the Web (so-called "Internet portals"), to organization-private Web sites ("intranet portals"), and to specialized online communities of practice ("vertical portals" or vortals). While all of this terminology may seem daunting at first, the principles behind portals are relatively simple.

The primary goal of most portals is ease-of-use. Besides having a single point of access -- a virtual front door -- portals generally try to provide a rich navigation structure. Portals using Web pages for their user interface will, for instance, often include numerous hyperlinks on the front page.

One example of an Internet portal is msnbc.com. MSNBC contains many elements one expects from a general-purpose portal including featured content, numerous hyperlinks, search capability, stock quotes, and customization based on user locale.

Contrast MSNBC with a typical intranet portal. Like their counterparts, intranet portals typically contain many navigation options condensed into a small space. They tend to include customizable news, access to stock quotes, and a search facility.

Looking past the surface, however, a number of key differences between intranet and Internet (Web) portals also emerge.

Focus: Intranet portals offer news, event calendars, and email just as Web portals do, yet intranet content tends to be restricted to the information most relevant to the organization. Ostensibly this allows employees to better focus on their job responsibilities by (hopefully) finding information more quickly, and it might also reduce the site's support burden.

Security: On the sample intranet portal you can find several references to "groups": Group Members, Group Documents, Group Links, and so on. These correspond to functional groups within the organization. Access to certain intranet documents, for example, may be restricted to certain individuals or project teams with a "need to know." This concept is essentially foreign to Web portals where individual visitors tend not to collaborate with each other and Web portal administrators want all content accessible to everyone. Incidentally, it is a non-trivial implementation burden for intranet portals to support groups and group administration (adding and removing groups and members, maintaining access rights, auditing, and so on).

Authoring: Web portals tend to be produced by third parties. In intranets, on the other hand, the user community usually generates a substantial portion of their own content. The Group Documents area of the sample portal illustrates just one way that intranet content can be published.

Eye candy: There is an element of salesmanship in Web portals that exists to a much lesser degree on the intranet. Eye-catching true color clickable graphics with rollover effects such as that on the MSNBC start page don't contribute much to the bottom line of an intranet portal.

A vortal is essentially a hybrid -- a cross between traditional Web portals and intranet portals. Vortals focus on specialized topics in much the same way as intranet portals do, and they also tend to support collaboration among users. Conversely, vortals also share features in common with Web portals such as open access policies and user interfaces produced by third parties. This site, About Computer Networking, is a vortal -- as is Test and Measurement.com.

Portals have become a very powerful concept on intranets in particular. Judith Rosall, an analyst with the Aberdeen Group, asserts that "the corporate portal is an extension of the intranet." [1] Others feel that "the information portal is the next generation intranet." [2] There are several good reasons for this upbeat line of thinking. On intranets, portals can offer:

  • integration of the loose federation of department- and division-level Web sites that exist on some intranets.
  • unification of content: intranets often include a wealth of legacy data outside of Web pages, such as documents, database query engines, and front-ends to other specialized software applications.
  • scaling: By categorizing and grouping similar content, portals reduce information overload as intranets grow.

Portals seem like the next logical step in the evolution of the Web in general, and intranets in particular. On intranets a person's time is especially valuable, and portals can help to reduce "time-on-task." This next-higher level of integration also creates the potential for more powerful intranet services, although these services can be challenging to build. In the future, watch for portals to continue to evolve and grow in capability.

POST:

Power On Self-Test. The sequence of diagnostic checks that a device runs on itself as it starts up.

PostScript Document Files:

A conforming PostScript document file includes the following structural features:

  • Prologue
  • Script
  • Pages

The prologue is a set of procedure definitions that define operations required by a document-composition system. The PostScript document begins with a prologue, which typically contains application-dependent definitions and which is stored in a place accessible to an application's executable code.

The script is usually generated automatically by an application program. The script, which contains the data that represents a particular document, should not contain any definitions. The script of a multipage document is organized as a sequence of independent single-page descriptions.

The pages in a PostScript script are functionally independent of each other, but they are dependent on the definitions in the prologue. The pages can be executed in any order and rearranged without affecting the printed document. This means that the pages can be printed in parallel as long as the prologue definitions are available to each page.

A document file can contain another document file description as part of its script. An illustration embedded in a document is an example of this structure. One benefit of PostScript document descriptions is that they allow documents from different sources to be merged for final printing.

PostScript Print Jobs:

To understand PostScript document files, you must understand the difference between a document file and a print job. A document file is a data representation that may be transmitted, edited, stored, spooled, or otherwise processed. A document is transmitted to a printer in a series of print jobs, each of which contains a certain type of code. There are three types of PostScript print jobs with which you should be familiar:

  • standard print jobs
  • queries
  • exit server jobs

Standard print jobs are those jobs destined for the printer. The print spooler passes these jobs to the PostScript printer. They contain the code for the printed document.

Queries are print jobs that check printer status. Queries require a response from the printer. A print spooler must be able to respond to these queries by interpreting query comments.

Exit server jobs bypass the normal server-loop save/restore context. They contain a block of text with resources for the printer (such as fonts that are being downloaded to the printer), rather than an actual printing job. The print spooler generally stores the resources that are contained in an exit server job on its hard disk so that they are permanently available to the printer.

The job type is specified by the Job Identification comment, which is the first line of every print job. Comments consist of a percent sign (%) followed by text and terminated by a newline character. The PostScript interpreter completely ignores comments. However, comments conforming to the file-structuring conventions can query or convey structural information to document managers. Comments that contain structural information start with %! or %%. Query comments begin with %%?. Comments that do not start with one of these three notations are ignored by document managers (as well as by the PostScript interpreters).

Power Management:

An energy-saving feature (i.e., Energy Star). It is the ability of a system to recognize when it has been inactive for a (usually) user-specified amount of time, and consequently to spin down.

Printer Access Protocol (PAP):

As its name implies, the Printer Access Protocol, or PAP, takes responsibility for maintaining communications between a workstation (host) and a printer (or print service). PAP functions, therefore, include setting up and maintaining a connection, as well as transferring the data and tearing down the connection on completion of a job.

When a connection is established, either socket client can send or receive data. This two-way communication is necessary because printers often must control the amount of data sent (by asking for the next page) or reply with the printer's status.

As with other protocols in the session layer, PAP relies on NBP (Name Binding Protocol) to find the addresses of named entities. PAP also depends on ATP (AppleTalk Transaction Protocol) for sending data. On a workstation, the application uses the Print Manager software to communicate with PAP. The client, or workstation, side of PAP then maintains a session with the networked printer (a PAP server) to print the required pages.

The Printer Access Protocol covers five basic processes:

  • opening a connection,
  • transferring data,
  • closing a connection,
  • determining a print service's status,
  • and filtering duplicate requests.

(Because PAP uses ATP, duplicate packets can be received from a node.)

One of PAP's capabilities is to handle half-open connections, which occur when one side of the connection goes down or terminates without informing its partner. To cope with half-open connections, PAP maintains a connection timer at both ends. If the connection timer expires before any packets are received, the connection is terminated.

To assist in maintaining a connection, PAP also sends tickler packets periodically. As you might expect, ticklers are used to keep the other end informed that the device is actually still on-line, even if it otherwise appears that nothing's happening. Many printers spend most of their time processing the data, while ignoring nearly everything else; sending a tickler tells the user that something is happening and that the printer hasn't gone down.

When a device such as a Tektronix Phaser 500 series is busy processing a job, it's doing very little on the network. Under such circumstances, the client of the Phaser 500 series periodically sends out tickler packets to the Phaser 500 to make sure that it's still working.

Print Spooler:

An application program or combination of hardware and software that allows users to print at the same time they are working on some other task.

The print spooler acts as a buffer for the files to be printed. The files are stored in the spooler until a printer is free for printing. The spooler then sends the file to the printer. This set of actions is performed primarily in the background, allowing the user to go on to some other task.
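
A minimal sketch of that buffering behavior, assuming a queue-backed spooler with a background worker thread: submit() returns immediately, while the worker feeds jobs to the (slow) printer one at a time:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class Spooler {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public Spooler() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String job = queue.take(); // wait until a job is spooled
                    print(job);                // ties up the printer, not the user
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called by the user's workstation; returns at once.
    public void submit(String job) {
        queue.add(job);
    }

    private void print(String job) {
        System.out.println("Printing: " + job); // stands in for the printer dialog
    }
}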

Benefits of printing with a spooler:

Since a print spooler stores a printer-ready file on disk and interacts with the printer until the file is printed, introducing a print spooler between the document-composition system and the printer reduces the length of time that a workstation is tied up for printing. As soon as the print job is ready to be printed, the workstation sends the job to the spooler to store on disk, which releases the workstation for other uses. The spooler then establishes and maintains the required dialog with the printer until the print job is finished.

A print spooler can also provide a mechanism for controlling access to a printer. The spooler can include a user authentication system that would force potential users to enter user identification information (such as user names and passwords) before allowing the users to gain access to a specific printer. The authentication function can be extended to include a wide variety of access options. For example, classes of user authorization could be established, and certain classes of print jobs could be given priority over other jobs.

A print spooler also provides a mechanism for gathering statistical information about printer usage. An accounting department can use information about the printing activity of the users for billing purposes. In addition, management can use statistics about printer access to evaluate a site's design and to plan potential modifications.

Project Life Cycle: (Project Management)

A collection of project phases whose name and number are determined by the control needs of the organization or organizations involved in the project. For example, the project life cycle of a motion picture project would include such phases as:

  • casting,
  • scripting,
  • shooting,
  • editing,
  • retakes,
  • and so on.

In contrast, the project life cycle for a home building project might include such phases as:

  • creating the blueprint,
  • building the foundation,
  • laying down the wood floor,
  • framing the walls,
  • and so on.

In each case, the project phases are unique to the industry and designed to achieve specific project deliverables. Also in each case, the project phases allow the project deliverables to evolve gradually and systematically. In this way, the project manager and the professionals involved on the team can inspect the deliverables as they are emerging in order to control the quality, timing, and cost. By using an industry-standard project life cycle, project managers can help assure that deliverables will conform to recognized quality standards.

Project Management (PM): (Project Management)

The application of knowledge, skills, tools, and techniques to project activities in order to meet or exceed stakeholder needs and expectations.

Project Management Professional (PMP): (Project Management)

An individual certified as such by the Project Management Institute (PMI).

Protocol:

Protocols are rules which co-ordinate the exchange of messages between partners and thus make this exchange efficient, much as certain formalism is required for understanding between people. An example of a protocol in the human world is the use of 'Roger' and 'Over' in radio traffic: both communication partners acknowledge that they have understood the message with 'Roger' and signal a change in the direction of speech with 'Over'.

Data exchange between data-processing systems naturally involves similar but also more extensive requirements. Thus protocols are the most basic level in the software hierarchy, forming the basis for all network communications. Protocols define the rules for accessing the network, communicating between devices, and communicating between applications. Protocols control all LAN behavior. Different network types use different sets of communication rules that direct and control communications on the network; these communication rules are called protocols.

Protocol Address:

The unique address that a node assigns to identify the protocol client that is to receive a packet for a particular protocol stack.

Protocol Data Units:

In the OSI Reference Model, a packet. Specifically, a PDU is a packet created at a particular layer in an open system. The PDU is used to communicate with the same layer on another machine. Basically, information is passed between layers in the form of packets, known as PDUs. The packet size and definition depend on the protocol suite involved in the horizontal communications of the OSI Model.

The basic strategy for passing PDUs is as follows:

  1. Packets are padded as they make their way down the layers on the sending machine, and are stripped as they make their way up the layers on the receiving machine.
  2. Once passed to the lower layer (layer Y), a data packet from the layer above (layer X), known as an XPDU or X-PDU (after the layer), is padded by adding Y-specific header and trailer material. Once padded, the XPDU is passed as layer Y's data -- as a YPDU -- down to layer Z, where the padding process is repeated with different information. For example, in going from the presentation to the network layer, a packet is padded at the session and transport layers before being passed to the network layer.
  3. The header material in a PDU provides handling and delivery information for the process that receives the packet. Trailer material typically provides error-checking information.
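
A toy sketch of the padding process described above; in real stacks only some layers (notably the data link layer) add trailers, but the wrapping principle is the same:

public class Pdu {
    // Wrap the PDU from the layer above with this layer's header and trailer.
    public static String wrap(String upperPdu, String layerName) {
        return "[" + layerName + "-hdr]" + upperPdu + "[" + layerName + "-trl]";
    }

    public static void main(String[] args) {
        String pdu = "user-data"; // the application payload
        for (String layer : new String[] { "transport", "network", "data-link" }) {
            pdu = wrap(pdu, layer); // padded on the way down the stack
        }
        System.out.println(pdu);
        // [data-link-hdr][network-hdr][transport-hdr]user-data
        //     [transport-trl][network-trl][data-link-trl]
    }
}
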
Protocol Layers:

The functional divisions of processing to send and receive data on a network. The layers refer to the main tasks the networking software has to accomplish to transmit and receive data.

Protocol Stack:

The implementation of a specific protocol family in a computer or other node on the network. The protocol stack refers to the visual analogy of all of the layers of a set of protocols -- a stack of protocols being implemented on a node.

Proxy Server:

A server that sits between a client application, such as a Web browser, and a real server. It intercepts all requests to the real server to see if it can fulfill the requests itself. If not, it forwards the request to the real server.

Proxy servers have two main purposes:

  • Improve Performance: Proxy servers can dramatically improve performance for groups of users, because the proxy saves the results of all requests for a certain amount of time. Consider the case where both user X and user Y access the World Wide Web through a proxy server. First user X requests a certain Web page, which we'll call Page 1. Sometime later, user Y requests the same page. Instead of forwarding the request to the Web server where Page 1 resides, which can be a time-consuming operation, the proxy server simply returns the Page 1 that it already fetched for user X. Since the proxy server is often on the same network as the user, this is a much faster operation. Real proxy servers support hundreds or thousands of users. The major online services such as CompuServe and America Online, for example, employ an array of proxy servers.
  • Filter Requests: Proxy servers can also be used to filter requests. For example, a company might use a proxy server to prevent its employees from accessing a specific set of Web sites.
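
A minimal sketch of the caching half of that behavior; a real proxy would also honor expiration headers, and could consult a blocklist before fetching to cover the filtering purpose as well:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CachingProxy {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Serve from cache when possible; otherwise fetch and remember.
    public String request(String url) {
        String cached = cache.get(url);
        if (cached != null) {
            return cached;             // user Y gets user X's copy of Page 1
        }
        String fresh = fetchFromRealServer(url);
        cache.put(url, fresh);         // a real proxy would also expire entries
        return fresh;
    }

    private String fetchFromRealServer(String url) {
        return "<html>contents of " + url + "</html>"; // stands in for the slow fetch
    }
}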


