3-D ICs

Introduction
There is a saying in real estate: when land gets expensive, multi-storied buildings are the alternative solution. The chip industry faces a similar situation. For the past thirty years, chip designers have considered whether building integrated circuits in multiple layers might create cheaper, more powerful chips.

The performance of deep-submicrometer very large scale integrated (VLSI) circuits is increasingly dominated by the interconnects, due to decreasing wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies on a single chip is becoming increasingly desirable, and planar (2-D) ICs may not be suitable for it.

The three-dimensional (3-D) chip design strategy exploits the vertical dimension to alleviate interconnect-related problems and to facilitate heterogeneous integration of technologies to realize system-on-a-chip (SoC) designs. By simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short, vertical interlayer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved. In the 3-D design architecture, an entire chip is divided into a number of blocks, and each block is placed on a separate layer of Si; these layers are stacked on top of each other.

Motivation For 3-D ICs

The unprecedented growth of the computer and information technology industry is demanding Very Large Scale Integrated (VLSI) circuits with increasing functionality and performance at minimum cost and power dissipation. Continuous scaling of VLSI circuits is reducing gate delays but rapidly increasing interconnect delays. A significant fraction of the total power consumption can be due to the wiring network used for clock distribution, which is usually realized using long global wires.

Furthermore, the increasing drive for the integration of disparate signals (digital, analog, RF) and technologies (SOI, SiGe, GaAs, and so on) is introducing various SoC design concepts, for which existing planar (2-D) IC design may not be suitable.

3D Architecture

Three-dimensional integration to create multilayer Si ICs is a concept that can significantly improve interconnect performance, increase transistor packing density, and reduce chip area and power dissipation. Additionally, 3-D ICs can be very effective for large-scale on-chip integration of different systems.

In the 3-D design architecture, an entire (2-D) chip is divided into a number of blocks, and each block is placed on a separate layer of Si; these layers are stacked on top of each other. Each Si layer in the 3-D structure can have multiple layers of interconnect, and the layers are connected by vertical interlayer interconnects (VILICs) and common global interconnects.
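The wire-length benefit of stacking can be illustrated with a back-of-the-envelope estimate (a sketch with hypothetical numbers, assuming a square die whose longest global wire spans roughly twice the per-layer footprint edge):

```python
import math

def longest_global_wire(area_mm2, layers=1):
    """Rough upper bound on global wire length: twice the edge of the
    per-layer footprint, assuming a square die split evenly across layers."""
    edge = math.sqrt(area_mm2 / layers)
    return 2 * edge

# Hypothetical 20 mm x 20 mm (400 mm^2) die: splitting it across 4 stacked
# layers halves the footprint edge, and with it the longest global wire.
for k in (1, 2, 4):
    print(k, "layer(s):", round(longest_global_wire(400.0, k), 1), "mm")
```

Under this crude model, going from one layer to k layers shortens the longest global wire by a factor of sqrt(k), which is the intuition behind the interconnect-delay argument above.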

Sensors on 3D Digitization

Introduction

Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields, like visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.

Colour 3D Imaging Technology

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of active vision techniques. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
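Once matching points are identified, the geometry computation reduces, in the two-camera case, to the classic triangulation relation Z = f·B/d. A minimal sketch (the focal length, baseline and disparity values are hypothetical):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo triangulation: Z = f * B / d.
    focal_px: focal length expressed in pixels,
    baseline_m: separation between the two viewpoints in metres,
    disparity_px: horizontal shift of the matched point between images."""
    if disparity_px <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# A point shifted 16 px between views, with f = 800 px and a 0.1 m
# baseline, lies 5 m from the cameras.
print(depth_from_disparity(800, 0.1, 16))
```

Nearby points produce large disparities and distant points small ones, which is why depth resolution degrades with range in both stereoscopic and triangulation-based systems.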

Sensors For 3D Imaging

The sensors used in the autosynchronized scanner include

1. Synchronization Circuit Based Upon Dual Photocells

This sensor ensures the stability and repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuits have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems. [1]

2. Laser Spot Position Measurement Sensors

High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated.[1]

Fuzzy Logic

Introduction

Fuzzy Logic (FL) is a problem-solving control-system methodology that lends itself to implementation in systems ranging from simple, small, embedded micro-controllers to large, networked, multi-channel PC or workstation-based data acquisition and control systems. It can be implemented in hardware, software, or a combination of both. FL provides a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information.

FL's approach to control problems mimics how a person would make decisions, only much faster.
As the complexity of a system increases, it becomes more difficult, and eventually impossible, to make precise statements about its behavior; at that point of complexity, the fuzzy reasoning that comes naturally to humans may be the only practical way to approach the problem.

History

The concept of Fuzzy Logic (FL) was conceived by Lotfi Zadeh, a professor at the University of California at Berkeley, and presented not as a control methodology, but as a way of processing data by allowing partial set membership rather than crisp set membership or non-membership. This approach to set theory was not applied to control systems until the 70's, due to insufficient small-computer capability prior to that time. Professor Zadeh reasoned that people do not require precise, numerical information input, and yet they are capable of highly adaptive control. If feedback controllers could be programmed to accept noisy, imprecise input, they would be much more effective and perhaps easier to implement. Unfortunately, U.S. manufacturers have not been so quick to embrace this technology, while the Europeans and Japanese have been aggressively building real products around it.

How is FL different from conventional control methods?

FL incorporates a simple, rule-based IF X AND Y THEN Z approach to solving a control problem rather than attempting to model the system mathematically. The FL model is empirically based, relying on an operator's experience rather than on a technical understanding of the system. For example, rather than dealing with temperature control in terms such as "SP = 500F" or "T < 1000F", linguistic rules such as "IF (process is too cool) AND (process is getting colder) THEN (add heat to the process)" are used.
How does FL work?

FL requires some numerical parameters in order to operate, such as what is considered a significant error and a significant rate-of-change-of-error, but exact values of these numbers are usually not critical unless very responsive performance is required, in which case empirical tuning would determine them. For example, a simple temperature control system could use a single temperature feedback sensor whose data is subtracted from the command signal to compute "error" and then time-differentiated to yield the error slope or rate-of-change-of-error, hereafter called "error-dot". Error might have units of deg F, with a small error considered to be 2F and a large error 5F. The "error-dot" might then have units of deg F/min, with a small error-dot being 5F/min and a large one being 15F/min. These values don't have to be symmetrical and can be "tweaked" once the system is operating in order to optimize performance. Generally, FL is so forgiving that the system will probably work the first time without any tweaking.
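The error / error-dot example above can be turned into a runnable sketch. The membership breakpoints (2F, 5F, 5F/min, 15F/min) come from the text; the rule outputs and rule set are invented for illustration, not a real controller:

```python
def trap_left(x, a, b):
    """Membership 1.0 below a, falling linearly to 0.0 at b."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def trap_right(x, a, b):
    """Membership 0.0 below a, rising linearly to 1.0 at b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def fuzzy_heater(error_f, error_dot_f_min):
    """error_f: command minus measured temperature (deg F); positive
    means too cool. error_dot_f_min: rate of change of error (F/min);
    positive means the process is getting colder."""
    # Fuzzify using the text's breakpoints: small error ~2F, large ~5F;
    # small error-dot ~5 F/min, large ~15 F/min.
    err_small = trap_left(error_f, 2.0, 5.0)
    err_large = trap_right(error_f, 2.0, 5.0)
    cooling_fast = trap_right(error_dot_f_min, 5.0, 15.0)
    # Rules of the form IF X AND Y THEN Z (AND = min); Z is a heater
    # command in [0, 1]. The Z values are illustrative only.
    rules = [
        (min(err_large, cooling_fast), 1.0),  # large error, cooling fast
        (err_large, 0.7),                     # large error
        (err_small, 0.2),                     # small error
    ]
    # Defuzzify with a weighted average of the rule outputs.
    total = sum(w for w, _ in rules)
    return sum(w * z for w, z in rules) / total if total else 0.0
```

A large, rapidly growing error (e.g. 6F and 20 F/min) drives the output near full heat, while a small, steady error yields a gentle correction; no exact tuning of the breakpoints is needed for this qualitative behaviour, which is the forgiveness the text describes.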

Simputer

Introduction

The Simputer is a multilingual, mass-access, low-cost handheld device currently being developed. The Information Markup Language (IML) is the primary format of the content accessed by the Simputer. IML has been created to provide a uniform experience to users and to allow rapid development of solutions on any platform.
The Simputer proves that illiteracy is no longer a barrier to handling a computer. Through its smart card feature, the Simputer allows for personal information management at the individual level for an unlimited number of users. Applications in diverse sectors can be made possible at an affordable price.

A rapid growth of knowledge can only happen in an environment which admits the free exchange of thought and information. Indeed, nothing else can explain the astounding progress of science in the last three hundred years. Technology has unfortunately not seen this freedom too often. Several rounds of intense discussions among the trustees convinced them that the only way to break out of the current absurdities is to foster a spirit of co-operation in inventing new technologies. The common mistake of treating co-operation as a synonym for charity poses its own challenges. The Simputer Licensing Framework is the Trust's response to these challenges.

What is Simputer?
A Simputer is a multilingual, mass-access, low-cost, portable alternative to PCs, by which the benefits of IT can reach the common man. It has a special role in the third world because it ensures that illiteracy is no longer a barrier to handling a computer. The key to bridging the digital divide is to have shared devices that permit truly simple and natural user interfaces based on sight, touch and audio. The Simputer meets these demands through a browser for the Information Markup Language (IML). IML has been created to provide a uniform experience to users and to allow rapid development of solutions on any platform.

Features
Simputer is a hand held device with the following features:
- It is portable
- A (320 X 240) LCD Panel which is touch enabled
- A speaker, microphone and a few keys
- A soft keyboard
- A stylus as a pointing device
- A smart card reader
- The use of extensive audio in the form of text to speech and audio snippets
The display resolution is much smaller than that of the usual desktop monitor but much higher than that of the usual wireless devices (cell phones, pagers, etc). The operating system for the Simputer is Linux. It is designed so that Linux is started up infrequently; the Simputer stays in a low-power mode during the times it is not in use. When the Simputer is powered on, the user is presented with a screen having several icons.

What Makes Simputer Different From Regular PCs?
The Simputer is not a personal computer. It could, however, be a pocket computer. It is much more powerful than a Palm, with a 320 x 240 screen and 32MB of RAM. The Wintel (Windows + Intel) architecture of the de facto standard PC is quite unsuitable for deployment in the low-cost mass market. The entry barrier due to software licensing is just too high. While the Wintel PC provides a de facto level of standardization, it is not an open architecture. The Simputer, meanwhile, is centered around Linux, which is freely available, open and modular.

Wavelet Video Processing Technology

Introduction

Uncompressed multimedia data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive, multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology.

For still image compression, the Joint Photographic Experts Group (JPEG) standard has been established. The performance of these coders generally degrades at low bit rates, mainly because of the underlying block-based Discrete Cosine Transform (DCT) scheme. More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. Over the past few years, a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented. Because of these many advantages, the top contenders in the JPEG-2000 standard are all wavelet-based compression algorithms.
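The energy-compaction property that makes wavelet coding attractive can be seen in a single level of the Haar transform, the simplest wavelet (a minimal illustrative sketch, not the filter bank any particular codec uses):

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise scaled averages
    (low-pass approximation) and differences (high-pass detail)."""
    assert len(signal) % 2 == 0
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

# A smooth signal concentrates its energy in the approximation
# coefficients; the detail coefficients come out (near) zero and
# quantize away cheaply, which is what a wavelet coder exploits.
a, d = haar_step([10, 10, 10, 10, 12, 12, 12, 12])
```

Real codecs recurse this split on the approximation band and use longer filters than Haar, but the principle, few large coefficients carrying most of the picture, is the same.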

Image Compression

Image compression is a technique for processing images: the compression of graphics for storage or transmission. Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some finer details in the image can be sacrificed to save storage space.

Compression is basically of two types.
1. Lossy Compression
2. Lossless Compression.

Lossy compression of data concedes a certain loss of accuracy in exchange for greatly increased compression. An image reconstructed following lossy compression contains degradation relative to the original. Often this is because the compression scheme completely discards redundant information. Under normal viewing conditions, no visible loss is perceived. It proves effective when applied to graphics images and digitized voice.
Lossless compression consists of those techniques guaranteed to generate an exact duplicate of the input data stream after a compress/expand cycle. Here the reconstructed image after compression is numerically identical to the original image. Lossless compression can only achieve a modest amount of compression. This is the type of compression used when storing database records, spreadsheets or word processing files.
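The distinction can be demonstrated in a few lines (a sketch using Python's zlib for the lossless stage, with a crude quantization step standing in for a real lossy coder):

```python
import zlib

data = bytes(range(10)) * 100  # repetitive, "image-like" sample data

# Lossless: the round trip is bit-exact; compression is modest to good.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy (illustrative): quantize to fewer levels before packing.
# The reconstruction is close to, but not identical with, the original.
step = 4
quantized = bytes((b // step) * step for b in data)
lossy_packed = zlib.compress(quantized)
restored = zlib.decompress(lossy_packed)
assert restored != data                                         # accuracy conceded...
assert max(abs(a - b) for a, b in zip(restored, data)) < step   # ...but bounded
```

The quantized stream has fewer distinct symbols, so the entropy coder can typically pack it tighter; that trade of bounded error for bitrate is the essence of lossy coding.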

IP Telephony

Introduction

If you've never heard of Internet Telephony, get ready to change the way you think about long-distance phone calls. Internet Telephony, or Voice over Internet Protocol, is a method for taking analog audio signals, like the kind you hear when you talk on the phone, and turning them into digital data that can be transmitted over the Internet.
How is this useful? Internet Telephony can turn a standard Internet connection into a way to place free phone calls. The practical upshot of this is that by using some of the free Internet Telephony software that is available to make Internet phone calls, you are bypassing the phone company (and its charges) entirely.

Internet Telephony is a revolutionary technology that has the potential to completely rework the world's phone systems. Internet Telephony providers like Vonage have already been around for a little while and are growing steadily. Major carriers like AT&T are already setting up Internet Telephony calling plans in several markets around the United States, and the FCC is looking seriously at the potential ramifications of Internet Telephony service.
Above all else, Internet Telephony is basically a clever "reinvention of the wheel." In this article, we'll explore the principles behind Internet Telephony, its applications and the potential of this emerging technology, which will more than likely one day replace the traditional phone system entirely.

The interesting thing about Internet Telephony is that there is not just one way to place a call.

There are three different "flavors" of Internet Telephony service in common use today:
ATA - The simplest and most common way is through the use of a device called an ATA (analog telephone adaptor). The ATA allows you to connect a standard phone to your computer or your Internet connection for use with Internet Telephony.

The ATA is an analog-to-digital converter. It takes the analog signal from your traditional phone and converts it into digital data for transmission over the Internet. Providers like Vonage and AT&T CallVantage are bundling ATAs free with their service. You simply crack the ATA out of the box, plug the cable from your phone that would normally go in the wall socket into the ATA, and you're ready to make Internet Telephony calls. Some ATAs may ship with additional software that is loaded onto the host computer to configure it; but in any case, it is a very straightforward setup.

IP Phones - These specialized phones look just like normal phones with a handset, cradle and buttons. But instead of having the standard RJ-11 phone connectors, IP phones have an RJ-45 Ethernet connector. IP phones connect directly to your router and have all the hardware and software necessary right onboard to handle the IP call. Wi-Fi phones allow subscribing callers to make Internet Telephony calls from any Wi-Fi hot spot.

Computer-to-computer - This is certainly the easiest way to use Internet Telephony. You don't even have to pay for long-distance calls. There are several companies offering free or very low-cost software that you can use for this type of Internet Telephony. All you need is the software, a microphone, speakers, a sound card and an Internet connection, preferably a fast one like you would get through a cable or DSL modem. Except for your normal monthly ISP fee, there is usually no charge for computer-to-computer calls, no matter the distance.

If you're interested in trying Internet Telephony, then you should check out some of the free Internet Telephony software available on the Internet. You should be able to download and set it up in about three to five minutes. Get a friend to download the software, too, and you can start tinkering with Internet Telephony to get a feel for how it works.

RPR

Introduction

The nature of the public network has changed. Demand for Internet Protocol (IP) data is growing at a compound annual rate of between 100% and 800%, while voice demand remains stable. What was once a predominantly circuit-switched network handling mainly circuit-switched voice traffic has become a circuit-switched network handling mainly IP data. Because the nature of the traffic is not well matched to the underlying technology, this network is proving very costly to scale. User spending has not increased proportionally to the rate of bandwidth increase, and carrier revenue growth is stuck at the lower end of 10% to 20% per year. The result is that carriers are building themselves out of business.

Over the last 10 years, as data traffic has grown both in importance and volume, technologies such as frame relay, ATM, and Point-to-Point Protocol (PPP) have been developed to force-fit data onto the circuit network. While these protocols provided virtual connections, a useful approach for many services, they have proven too inefficient, costly and complex to scale to the levels necessary to satisfy the insatiable demand for data services. More recently, Gigabit Ethernet (GigE) has been adopted by many network service providers as a way to network user data without the burden of SONET/SDH and ATM. However, GigE's shortcomings when applied in carrier networks were recognized, and to address these problems a technology called Resilient Packet Ring (RPR) was developed.

RPR retains the best attributes of SONET/SDH, ATM, and Gigabit Ethernet. RPR is optimized for differentiated IP and other packet data services, while providing uncompromised quality for circuit voice and private line services. It works in point-to-point, linear, ring, or mesh networks, providing ring survivability in less than 50 milliseconds. RPR dynamically and statistically multiplexes all services into the entire available bandwidth in both directions on the ring while preserving bandwidth and service quality guarantees on a per-customer, per-service basis. And it does all this at a fraction of the cost of legacy SONET/SDH and ATM solutions.

Data, rather than voice circuits, dominates today's bandwidth requirements. New services such as IP VPN, voice over IP (VoIP), and digital video are no longer confined within the corporate local-area network (LAN). These applications are placing new requirements on metropolitan-area network (MAN) and wide-area network (WAN) transport. RPR is uniquely positioned to fulfill these bandwidth and feature requirements as networks transition from circuit-dominated to packet-optimized infrastructures.

RPR technology uses a dual counter rotating fiber ring topology. Both rings (inner and outer) are used to transport working traffic between nodes. By utilizing both fibers, instead of keeping a spare fiber for protection, RPR utilizes the total available ring bandwidth. These fibers or ringlets are also used to carry control (topology updates, protection, and bandwidth control) messages. Control messages flow in the opposite direction of the traffic that they represent. For instance, outer-ring traffic-control information is carried on the inner ring to upstream nodes.
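The opposite-direction rule for control traffic can be modelled in a few lines (an illustrative sketch; the node count and ringlet names are arbitrary):

```python
N = 6  # nodes 0..5 on a dual counter-rotating ring

def downstream(node, ringlet):
    """Next node in the direction traffic flows: the outer ringlet is
    taken as clockwise (increasing index), the inner as counter-clockwise."""
    return (node + 1) % N if ringlet == "outer" else (node - 1) % N

def control_next_hop(node, traffic_ringlet):
    """Control messages about one ringlet's traffic travel on the *other*
    ringlet, so they reach the nodes upstream of the traffic flow."""
    other = "inner" if traffic_ringlet == "outer" else "outer"
    return downstream(node, other)

# Node 2's control message about outer-ring traffic is sent via the inner
# ring, and so arrives first at node 1, its upstream neighbour on the
# outer ring.
```

Sending control upstream of the traffic it describes lets a node throttle or reroute flows before they reach a congested or failed span, which is part of how RPR achieves sub-50 ms ring protection.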

pH Control Technique Using Fuzzy Logic

Introduction
Fuzzy control is a practical alternative for a variety of challenging control applications, since it provides a convenient method for constructing non-linear controllers via the use of heuristic information. Such heuristic information may come from an operator who has acted as a "human in the loop" controller for a process. In the fuzzy control design methodology, a set of rules on how to control the process is written down and then incorporated into a fuzzy controller that emulates the decision-making process of the human.

In other cases, the heuristic information may come from a control engineer who has performed extensive mathematical modelling, analysis and development of control algorithms for the particular process. The rest of the process is the same as in the earlier case. The ultimate objective of using fuzzy control is to provide a user-friendly formalism for representing and implementing the ideas we have about how to achieve high-performance control. Apart from being a heavily used technology these days, fuzzy logic control is simple, effective and efficient. In this paper, the structure, working and design of a fuzzy controller are discussed in detail through an in-depth analysis of the development and functioning of a fuzzy logic pH controller.

pH Control

To illustrate the application of fuzzy logic, the remaining section of the paper is directed towards the design and working of a pH control system using fuzzy logic.

pH is an important variable in the field of production, especially in chemical plants, sugar industries, etc. The pH of a solution is defined as the negative of the logarithm, to the base 10, of the hydrogen ion concentration, i.e., pH = -log10[H+].
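As a quick numeric check of the definition:

```python
import math

def ph(h_ion_molar):
    """pH = -log10 of the hydrogen ion concentration (mol/L)."""
    return -math.log10(h_ion_molar)

# Neutral water has [H+] = 1e-7 mol/L, giving pH 7; the raw cane juice
# discussed below, at pH 5.1-5.5, is therefore mildly acidic.
print(ph(1e-7))
```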
Let us consider the stages of operation of a sugar industry where pH control is required. The main area of concern is the clarification of the raw juice of sugarcane. The raw juice will have a pH of 5.1 to 5.5. The clarified juice should ideally be neutral, i.e., the set point should be a pH of 7. The process involves the addition of lime and SO2 gas to clarify the raw juice; these additions are called liming and sulphitation respectively. The process involves continuous addition of lime and SO2, and lime has the property of increasing the pH of the clarified juice. This is the principle used for pH control in sugar industries. The pH of the raw juice is measured, this value is compared to the set point, and the result is used to change the diameter of the lime flow pipe as per the requirement.

The whole process can be summarised as follows. The pH sensor measures the pH. This reading is amplified and recorded. The output of the amplifier is also fed to the pH indicator and interface. The output of this block is fed to the fuzzy controller, whose output is given to the stepper motor drive. This in turn adjusts the diameter of the lime flow pipe as per the requirement. Thus, the input to the fuzzy controller is the pH reading of the raw juice.

The output of the fuzzy controller is the diameter of the lime flow pipe valve, or a quantity that controls that diameter, such as a DC current or voltage. The output obtained from the fuzzy controller is used to drive a stepper motor, which in turn controls the diameter of the valve opening of the lime flow pipe. This output tends to maintain the pH value of the sugar juice at the target value. A detailed description of the design and functioning of the fuzzy controller is given in the following section.
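Putting the pieces together, the controller's input-output mapping might be sketched as follows. All membership breakpoints and step sizes here are invented for illustration; only the raw-juice pH range (5.1 to 5.5) and the set point of 7 come from the process description:

```python
SET_POINT = 7.0

def tri(x, a, b, c):
    """Triangular membership: rises a->b, falls b->c, zero outside."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def lime_valve_step(measured_ph):
    """Map the pH error to a change in lime-valve opening, expressed in
    (hypothetical) stepper-motor steps. A positive error, juice too
    acidic, opens the valve to add more lime; a negative error closes it."""
    error = SET_POINT - measured_ph
    small_acid = tri(error, 0.0, 0.75, 1.5)
    large_acid = tri(error, 0.75, 1.9, 3.0)   # raw juice -> error 1.5 to 1.9
    alkaline = tri(-error, 0.0, 0.75, 1.5)
    # IF-THEN rules with illustrative step outputs, defuzzified by a
    # weighted average.
    rules = [(small_acid, +2), (large_acid, +8), (alkaline, -2)]
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total else 0
```

Fed raw juice at pH 5.3, the sketch commands a large valve opening; over-limed juice at pH 7.5 produces a closing command; at the set point no correction is issued, which is the qualitative behaviour the plant description calls for.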

Multisensor Fusion and Integration

Introduction
A sensor is a device that detects or senses the value, or changes in value, of the variable being measured. The term sensor is sometimes used instead of the terms detector, primary element or transducer.

The fusion of information from sensors with different physical characteristics, such as light, sound, etc, enhances the understanding of our surroundings and provides the basis for planning, decision making, and control of autonomous and intelligent machines.

Sensors Evolution

A sensor is a device that responds to some external stimuli and then provides some useful output. With the concept of input and output, one can begin to understand how sensors play a critical role in both closed and open loops.

One problem is that sensors are not specific: they tend to respond to a variety of stimuli without being able to differentiate one from another. Nevertheless, sensors and sensor technology are necessary ingredients in any control-type application. Without the feedback from the environment that sensors provide, the system has no data or reference points, and thus no way of understanding what is right or wrong with its various elements.

Sensors are especially important in automated manufacturing, particularly in robotics. Automated manufacturing is essentially the procedure of removing the human element as far as possible from the manufacturing process. Sensors in the condition-measurement category sense various types of inputs, conditions, or properties to help monitor and predict the performance of a machine or system.

Multisensor Fusion And Integration

Multisensor integration is the synergistic use of the information provided by multiple sensory devices to assist in the accomplishment of a task by a system.

Multisensor fusion refers to any stage in the integration process where there is an actual combination of different sources of sensory information into one representational format.

Multisensor Integration

The diagram represents multisensor integration as being a composite of basic functions. A group of n sensors provides input to the integration process. In order for the data from each sensor to be used for integration, it must first be effectively modelled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions.
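As one concrete, and much simplified, example of the fusion stage, redundant readings can be combined in inverse proportion to the variance their sensor models report (a sketch of one common technique, not a full integration framework):

```python
def fuse(readings):
    """Variance-weighted fusion of redundant sensor readings.
    Each reading is a (value, variance) pair; the sensor model supplies
    the variance as its quality measure. Lower variance -> more weight."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    fused_var = 1.0 / total  # fused estimate is tighter than any input
    return value, fused_var

# Two range sensors observing the same target: a precise one reading
# 2.02 m (variance 0.01) and a noisy one reading 2.30 m (variance 0.09).
val, var = fuse([(2.02, 0.01), (2.30, 0.09)])
```

The fused value lands between the readings but close to the trustworthy sensor, and its variance is smaller than either input's, which is the basic payoff of combining sources into one representational format.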

Integrated Power Electronics Module

Introduction

In power electronics, solid-state electronics is used for the control and conversion of electric power. The goal of power electronics is to realize power conversion from an electrical source to an electrical load in a highly efficient, highly reliable and cost-effective way. Power electronics modules are key units in a power electronics system. These modules integrate power switches with the associated electronic circuitry for drive, control and protection, along with other passive components.

During the past decades, power devices underwent generation-by-generation improvements and can now handle significant power density. On the other hand, power electronics packaging has not kept pace with the development of semiconductor devices. This is due to the limitations of power electronics circuits: their integration is quite different from that of other electronic circuits. The objective of power electronics circuits is electronic energy processing, and they hence require high power-handling capability and proper thermal management.

Most of the currently used power electronic modules are made using wire-bonding technology [1,2]. In these packages, power semiconductor dies are mounted on a common substrate and interconnected with wire bonds. Other associated electronic circuitry is mounted on a multilayer PCB and connected to the power devices by vertical pins. These wire bonds are prone to parasitic resistance and inductance and to fatigue failure. Due to its two-dimensional structure, the package has a large size. Another disadvantage is the ringing produced by the parasitics associated with the wire bonds.

To improve the performance and reliability of power electronics packages, wire bonds must be replaced. Research in power electronic packaging has resulted in the development of an advanced packaging technique that can replace wire bonds. This new generation package is termed the 'Integrated Power Electronics Module' (IPEM) [1]. In this approach, planar metalization is used instead of conventional wire bonds. It uses a three-dimensional integration technique that can provide low-profile, high-density systems. It offers high-frequency operation and improved performance. It also reduces the size, weight and cost of the power modules.

Features Of IPEMS

The basic structure of an IPEM contains power semiconductor devices, control/drive/protection electronics and passive components. The power devices together with their drive and protection circuits are called the active IPEM, and the remaining part is called the passive IPEM. The drive and protection circuits are realized in the form of a hybrid integrated circuit and packaged together with the power devices. Passive components include inductors, capacitors, transformers, etc.

The commonly used power switching devices are MOSFETs and IGBTs [3]. This is mainly due to their high-frequency operation and low on-state losses. Another advantage is their inherent vertical structure, in which the metalization electrode pads are on two sides. Usually the gate and source pads are on the top surface, with a non-solderable thin-film Al contact. The drain metalization, using Ag or Au, is deposited on the bottom of the chip and is solderable. This vertical structure of power chips offers an advantage in building sandwich-type 3-D integrated constructions.

H.323

Introduction

The H.323 standard provides a foundation for audio, video, and data communications across IP-based networks, including the Internet. By complying with H.323, multimedia products and applications from multiple vendors can interoperate, allowing users to communicate without concern for compatibility. H.323 will be the keystone for LAN-based products for consumer, business, entertainment, and professional applications.

H.323 is an umbrella recommendation from the International Telecommunications Union (ITU) that sets standards for multimedia communications over Local Area Networks (LANs) that do not provide a guaranteed Quality of Service (QoS). These networks dominate today's corporate desktops and include packet-switched TCP/IP and IPX over Ethernet, Fast Ethernet and Token Ring network technologies. Therefore, the H.323 standards are important building blocks for a broad new range of collaborative, LAN-based applications for multimedia communications.

The H.323 specification was approved in 1996 by the ITU's Study Group 16. Version 2 was approved in January 1998. The standard is broad in scope and includes both stand-alone devices and embedded personal computer technology as well as point-to-point and multipoint conferences. H.323 also addresses call control, multimedia management, and bandwidth management as well as interfaces between LANs and other networks.

H.323 is part of a larger series of communications standards that enable videoconferencing across a range of networks. Known as H.32X, this series includes H.320 and H.324, which address ISDN and PSTN communications, respectively.
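The family relationships above can be captured in a small lookup table. The sketch below is purely illustrative and covers only the H.32X members named in this section; the dictionary and function names are assumptions, not part of any standard API.

```python
# Illustrative mapping of H.32x umbrella recommendations to the networks
# they target, limited to the members discussed in the text.
H32X_FAMILY = {
    "H.320": "ISDN",
    "H.323": "packet networks without guaranteed QoS (e.g. IP LANs)",
    "H.324": "PSTN",
}

def target_network(recommendation: str) -> str:
    # Look up which kind of network a given H.32x recommendation addresses.
    return H32X_FAMILY[recommendation]
```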

Importance Of H.323

The H.323 Recommendation is comprehensive, yet flexible, and can be applied to voice-only handsets and full multimedia video-conferencing stations, among others. H.323 applications are set to grow into the mainstream market for several reasons.

" H.323 sets multimedia standards for the existing infrastructure (i.e. IP-based networks). Designed to compensate for the effect of highly variable LAN latency, H.323 allows customers to use multimedia applications without changing their network infrastructure.
" IP LANs are becoming more powerful. Ethernet bandwidth is migrating from 10 Mbps to 100 Mbps, and Gigabit Ethernet is making headway into the market.
" By providing device-to-device, application-to-application, and vendor-to-vendor interoperability, H.323 allows customer products to interoperate with other H.323-compliant products.
" PCs are becoming more powerful multimedia platforms due to faster processors, enhanced instruction sets, and powerful multimedia accelerator chips.
" H.323 provides standards for interoperability between LANs and other networks.
" Network loading can be managed. With H.323, the network manager can restrict the amount of network bandwidth available for conferencing. Multicast support also reduces bandwidth requirements.
" H.323 has the support of many computing and communications companies and organizations, including Intel, Microsoft, Cisco, and IBM. The efforts of these companies will generate a higher level of awareness in the market.

GMPLS

Introduction

The emergence of optical transport systems has dramatically increased the raw capacity of optical networks and has enabled sophisticated new applications such as network-based storage, bandwidth leasing, and data mirroring. Add/drop multiplexers (ADMs), dense wavelength division multiplexing (DWDM) systems, optical cross-connects (OXCs), photonic cross-connects (PXCs), and multiservice switching platforms are some of the devices that may make up an optical network, which is expected to be the main carrier for the growth in data traffic.

Multiple Types of Switching and Forwarding Hierarchies

Generalized MPLS (GMPLS) differs from traditional MPLS in that it supports multiple types of switching, i.e. the addition of support for TDM, lambda, and fiber (port) switching. The support for the additional types of switching has driven GMPLS to extend certain base functions of traditional MPLS and, in some cases, to add functionality. These changes and additions impact basic LSP properties, how labels are requested and communicated, the unidirectional nature of LSPs, how errors are propagated, and information provided for synchronizing the ingress and egress LSRs.

1. Packet Switch Capable (PSC) interfaces:
Interfaces that recognize packet boundaries and can forward data based on the content of the packet header. Examples include interfaces on routers that forward data based on the content of the IP header and interfaces on routers that forward data based on the content of the MPLS "shim" header.

2. Time-Division Multiplex Capable (TDM) interfaces:
Interfaces that forward data based on the data's time slot in a repeating cycle. An example of such an interface is that of a SDH/SONET Cross-Connect (XC), Terminal Multiplexer (TM), or Add-Drop Multiplexer (ADM).

3. Lambda Switch Capable (LSC) interfaces:
Interfaces that forward data based on the wavelength on which the data is received. An example of such an interface is that of a Photonic Cross-Connect (PXC) or Optical Cross-Connect (OXC) that can operate at the level of an individual wavelength. Additional examples include PXC interfaces that can operate at the level of a group of wavelengths, i.e. a waveband.

4. Fiber-Switch Capable (FSC) interfaces:
Interfaces that forward data based on the position of the data in real-world physical space. An example of such an interface is that of a PXC or OXC that can operate at the level of one or more fibers.
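The four interface classes above form the GMPLS forwarding hierarchy, in which a finer-grained LSP can be nested inside a coarser-grained one (for example, a packet LSP inside a TDM LSP inside a lambda LSP inside a fiber LSP). A minimal sketch of that ordering follows; the numeric values and the `can_nest` helper are illustrative choices, not anything defined by the GMPLS specifications.

```python
from enum import IntEnum


class SwitchCap(IntEnum):
    """GMPLS interface switching capabilities, finest to coarsest.

    The numeric ordering is an illustrative convention: a smaller value
    means a finer switching granularity.
    """
    PSC = 1  # Packet Switch Capable
    TDM = 2  # Time-Division Multiplex Capable
    LSC = 3  # Lambda Switch Capable
    FSC = 4  # Fiber-Switch Capable


def can_nest(inner: SwitchCap, outer: SwitchCap) -> bool:
    # A finer-grained LSP may be nested inside a coarser-grained one.
    return inner < outer
```

Under this convention, `can_nest(SwitchCap.PSC, SwitchCap.LSC)` holds, while nesting a fiber LSP inside a TDM LSP does not.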

The diversity and complexity of managing these devices have been the main driving factors in the evolution and enhancement of the MPLS suite of protocols, extending control beyond packet-based domains to the time, wavelength, and space domains as well. GMPLS thus extends the suite of IP-based protocols that manage and control the establishment and release of label switched paths (LSPs) traversing any combination of packet, TDM, and optical networks, while retaining the core technology of MPLS.