IGCT

Introduction


Thyristor technology is inherently superior to transistor technology for blocking voltages above 2.5 kV: its plasma distribution, equal to that of a diode, offers the best trade-off between on-state and blocking voltages. Until the introduction of newer power switches, the only serious contenders for high-power transportation systems and other applications were the GTO (thyristor), with its cumbersome snubbers, and the IGBT (transistor), with its inherently high losses. Until now, adding the gate turn-off feature has resulted in the GTO being constrained by a variety of unsatisfactory compromises. The widely used standard GTO drive technology results in inhomogeneous turn-on and turn-off that call for costly dv/dt and di/dt snubber circuits combined with bulky gate drive units.

Rooted in the GTO is one of the newest power switches, the Gate-Commutated Thyristor (GCT). It successfully combines the best of the thyristor and transistor characteristics while fulfilling the additional requirements of manufacturability and high reliability. The GCT is a semiconductor based on the GTO structure whose cathode emitter can be shut off "instantaneously", thereby converting the device from a low conduction-drop thyristor to a low switching-loss, high dv/dt bipolar transistor at turn-off.

The IGCT (Integrated GCT) is the combination of the GCT device and a low-inductance gate unit. This technology extends transistor switching performance to well above the MW range, with 4.5 kV devices capable of turning off 4 kA, and 6 kV devices capable of turning off 3 kA without snubbers. The IGCT represents the optimum combination of low-loss thyristor technology and snubberless, transistor-like turn-off for demanding medium- and high-voltage power electronics applications.

(Figure: IGCT turn-off waveforms. The thick line shows the variation of the anode voltage during turn-off; the lighter line shows the variation of the anode current.)

GTOs and thyristors are four-layer (npnp) devices. As such, they have only two stable points on their characteristics: 'on' and 'off'. Every state in between is unstable and results in current filamentation. The inherent instability is worsened by processing imperfections. This has led to the widely accepted myth that a GTO cannot be operated without a snubber. Essentially, the GTO has to be reduced to a stable pnp device, i.e. a transistor, for the few critical microseconds during turn-off.

To stop the cathode (n) from taking part in the process, the bias of the cathode n-p junction has to be reversed before voltage starts to build up at the main junction. This calls for commutation of the full load current from the cathode (n) to the gate (p) within one microsecond. Thanks to a new housing design, 4000 A/us can be achieved with a low-cost 20 V gate unit. Current filamentation is totally suppressed, and the turn-off waveforms and safe operating area are identical to those of a transistor.
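
As a rough sanity check of those numbers, the required gate-loop inductance follows from the simple relation di/dt = V/L; the short sketch below is an illustrative calculation added here (not taken from the source) showing why the integrated low-inductance housing is essential.

    # Rough estimate of the maximum permissible gate-loop inductance for hard
    # commutation, assuming an ideal L = V / (di/dt) relationship.
    gate_voltage = 20.0      # V, gate-unit supply mentioned in the text
    di_dt = 4000e6           # A/s, i.e. 4000 A/us commutation rate from the text

    max_loop_inductance = gate_voltage / di_dt
    print(f"Maximum gate-loop inductance: {max_loop_inductance * 1e9:.1f} nH")  # ~5 nH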

IGCT technology brings together the power handling device (GCT) and the device control circuitry (freewheeling diode and gate drive) in an integrated package. By offering four levels of component packaging and integration, it permits simultaneous improvement in four interrelated areas: low switching and conduction losses at medium voltage, simplified circuitry for operating the power semiconductor, reduced power system cost, and enhanced reliability and availability. Also, by providing pre-engineered switch modules, IGCT enables medium-voltage equipment designers to develop their products faster.

Iris Scanning

Introduction


In today's information age it is not difficult to collect data about an individual and use that information to exercise control over the individual. Individuals generally do not want others to have personal information about them unless they decide to reveal it. With the rapid development of technology, it is more difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an inevitable feature. Conventional methods of identification based on possession of ID cards or exclusive knowledge like a social security number or a password are not altogether reliable. ID cards can be lost, forged or misplaced; passwords can be forgotten.

As a result, an unauthorized user may be able to break into an account with little effort, so there is a need to ensure denial of access to classified data by unauthorized persons. Biometric technology has now become a viable alternative to traditional identification systems because of its tremendous accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioral characteristics.

Since the persons to be identified must be physically present at the point of identification, biometric techniques give high security for the sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores the concept of iris recognition, which is one of the most popular biometric techniques. This technology finds applications in diverse fields.

Biometrics - Future Of Identity
Biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components.
1. Automated mechanism that scans and captures a digital or analog image of a living personal characteristic
2. Compression, processing, storage and comparison of the image with stored data.
3. Interfaces with application systems.


A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During the enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image, these include the various visible characteristics of the iris such as contraction furrows, pits, rings, etc. The template for each user is stored in a biometric system database.

The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts them into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match.

The identification can be in the form of verification (authenticating a claimed identity) or recognition (determining the identity of a person from a database of known persons). In a verification system, when the captured characteristic and the stored template of the claimed identity are the same, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic and one of the stored templates are the same, the system identifies the person with the matching template.
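
The comparison step is often done by measuring the distance between binary templates. The sketch below is a minimal illustration of verification and recognition using a normalized Hamming distance, with an assumed acceptance threshold; it is not the matcher of any particular commercial system.

    # Minimal sketch of template matching by normalized Hamming distance, a
    # common measure for binary iris codes. Real systems also handle rotation
    # and mask out eyelid/noise bits; the 0.32 threshold is an assumption.
    import numpy as np

    def hamming_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Fraction of disagreeing bits between two equal-length binary templates."""
        return np.count_nonzero(a != b) / a.size

    def verify(captured: np.ndarray, enrolled: np.ndarray, threshold: float = 0.32) -> bool:
        """Verification: accept the claimed identity if the templates are close enough."""
        return hamming_distance(captured, enrolled) < threshold

    def identify(captured: np.ndarray, database: dict, threshold: float = 0.32):
        """Recognition: return the name of the closest enrolled template, if it matches."""
        name, template = min(database.items(),
                             key=lambda item: hamming_distance(captured, item[1]))
        return name if hamming_distance(captured, template) < threshold else None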

Loop magnetic couplers

Introduction


Couplers, also known as "isolators" because they electrically isolate as well as transmit data, are widely used in industrial and factory networks, instruments, and telecommunications. The problems with optocouplers are well known: they take up a lot of space, are slow, age over time, and have a quite limited temperature range. For years, optical couplers were the only option. Over the years, most of the components used to build instrumentation circuits have become ever smaller. Optocoupler technology, however, hasn't kept up. Existing coupler technologies look like dinosaurs on modern circuit boards.


Magnetic couplers are analogous to optocouplers in a number of ways. Design engineers, especially in instrumentation technology, will welcome a galvanically isolated data coupler with integrated signal conversion in a single IC. This report gives a detailed study of IsoLoop magnetic couplers.

Ground Loops
When equipment using different power supplies is tied together (with a common ground connection) there is a potential for ground loop currents to exist. This is an induced current in the common ground line as a result of a difference in ground potentials at each piece of equipment.

Normally, not all grounds are at the same potential. Widespread electrical and communications networks often have nodes with different ground domains. The potential difference between these grounds can be AC or DC, and can contain various noise components. Grounds connected by cable shielding or a logic line ground can create a ground loop: unwanted current flow in the cable. Ground-loop currents can degrade data signals, produce excessive EMI, damage components, and, if the current is large enough, present a shock hazard.


Galvanic isolation between circuits or nodes in different ground domains eliminates these problems, seamlessly passing signal information while isolating ground potential differences and common-mode transients. Adding isolation components to a circuit or network is considered good design practice and is often mandated by industry standards. Isolation is frequently used in modems, LAN and industrial network interfaces (e.g., network hubs, routers, and switches), telephones, printers, fax machines, and switched-mode power supplies.


Giant Magnetoresistive (GMR):
Large magnetic field dependent changes in resistance are possible in thin film ferromagnet/nonmagnetic metallic multilayers. The phenomenon was first observed in France in 1988, when changes in resistance with magnetic field of up to 70% were seen. Compared to the small percent change in resistance observed in anisotropic magnetoresistance, this phenomenon was truly 'giant' magnetoresistance.


The spin of electrons in a magnet is aligned to produce a magnetic moment. Magnetic layers with opposing spins (magnetic moments) impede the progress of the electrons (higher scattering) through a sandwiched conductive layer. This arrangement causes the conductor to have a higher resistance to current flow.


An external magnetic field can realign all of the layers into a single magnetic moment. When this happens, electron flow is less affected (lower scattering) by the uniform spins of the adjacent ferromagnetic layers. This causes the conduction layer to have a lower resistance to current flow. Note that this phenomenon takes place only when the conduction layer is thin enough (less than 5 nm) for the ferromagnetic layers' electron spins to affect the paths of the conductive layer's electrons.

LWIP

Introduction


Over the last few years, interest in connecting computers and computer-supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN, are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and the processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

The Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low speed networks such as the ARPANET, the Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow since a large amount of applications using the Internet technology have been developed. Also, the large connectivity of the global Internet is a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to deal with having limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.

Overview

As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the implementation of lwIP. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header and can therefore extract this information by itself.
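
As a rough illustration of that layer violation (a sketch in Python rather than lwIP's C, with a hypothetical helper name), the TCP code can read the addresses directly from the raw IPv4 header instead of receiving them through a call interface:

    # Illustrative sketch (not lwIP source code): pulling the source and
    # destination addresses straight out of a raw IPv4 header, whose address
    # fields sit at byte offsets 12-15 and 16-19.
    import socket
    import struct

    def ip_addresses_from_header(ip_header: bytes):
        """Return (src, dst) dotted-quad strings from a raw IPv4 header."""
        src, dst = struct.unpack_from("!4s4s", ip_header, 12)
        return socket.inet_ntoa(src), socket.inet_ntoa(dst)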

lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP) a number of support modules are implemented.
The support modules consist of:

" The operating system emulation layer (described in Chapter3)

" The buffer and memory management subsystems
(described in Chapter 4)

" Network interface functions (described in Chapter 5)

" Functions for computing Internet checksum (Chapter 6)

" An abstract API (described in Chapter 8 )

Image Authentication Techniques

Introduction


This paper explores the various techniques used to authenticate the visual data recorded by an automatic video surveillance (VS) system. Automatic video surveillance systems are used for continuous and effective monitoring and reliable control of remote and dangerous sites. Some practical issues must be taken into account in order to take full advantage of the potential of a VS system. The validity of the visual data acquired, processed and possibly stored by the VS system as proof in front of a court of law is one such issue. But visual data can be modified using sophisticated processing tools without leaving any visible trace of the modification.

So digital image or video data on their own have no value as legal proof, since doubt would always exist that they had been intentionally tampered with to incriminate or exculpate the defendant. Besides, video data can be created artificially by computerized techniques such as morphing. Therefore the true origin of the data must be indicated to use them as legal proof. By data authentication we mean here a procedure capable of ensuring that data have not been tampered with and of indicating their true origin.

Automatic Visual Surveillance System

An automatic visual surveillance system is a self-monitoring system which consists of a video camera unit, a central unit and transmission networks. A pool of digital cameras is in charge of framing the scene of interest and sending the corresponding video sequences to the central unit. The central unit is in charge of analyzing the sequences and generating an alarm whenever a suspicious situation is detected.

The central unit also transmits the video sequences to an intervention centre such as a security service provider, the police department or a security guard unit. Somewhere in the system the video sequence, or some part of it, may be stored, and when needed the stored sequence can be used as proof in front of a court of law. If the stored digital video sequences have to be legally credible, some means must be envisaged to detect content tampering and reliably trace back to the data origin.

Authentication Techniques

Authentication techniques are performed on visual data to indicate that the data is not a forgery; they should not damage the visual quality of the video data. At the same time, these techniques must reveal malicious modifications, including removal or insertion of certain frames and changes to the faces of individuals, time stamps, background, etc. Only properly authenticated video data has value as legal proof. There are two major techniques for authenticating video data.


They are as follows


1. Cryptographic Data Authentication

This is a straightforward way to provide video authentication, namely through the joint use of asymmetric-key encryption and a digital hash function.

Cameras calculate a digital summary (digest) of the video by means of a hash function. They then encrypt the digest with their private key, obtaining a signed digest which is transmitted to the central unit together with the acquired sequences. This digest is used to prove data integrity or to trace back to the data's origin. The signed digest can only be read using the public key of the camera.
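
The flow can be sketched as follows; this is a hedged illustration using SHA-256 and RSA via Python's cryptography package, since the text does not specify the camera's actual algorithms or key sizes.

    # Hedged sketch of the sign-and-verify flow described above (SHA-256 and
    # RSA are assumptions; the text does not name the camera's algorithms).
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Camera side: hash the acquired video data and sign the digest.
    camera_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    video_data = b"raw video sequence bytes"
    signature = camera_private_key.sign(
        video_data,
        padding.PKCS1v15(),
        hashes.SHA256(),          # the library computes the digest internally
    )

    # Central unit / court side: verify the signed digest with the camera's public key.
    camera_public_key = camera_private_key.public_key()
    camera_public_key.verify(signature, video_data, padding.PKCS1v15(), hashes.SHA256())
    # verify() raises InvalidSignature if the sequence was tampered with.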

2. Watermarking-based authentication

Watermarking-based data authentication is the modern approach to authenticating visual data by imperceptibly embedding a digital watermark signal into the data.

Digital watermarking is the art and science of embedding copyright information in the original files. The embedded information is called a 'watermark'. Digital watermarks are difficult to remove without noticeably degrading the content and are a covert means of protection in situations where copyright notices alone fail to provide robustness.

Seasonal Influence on Safety of Substation Grounding

Introduction


As modern power systems develop in the direction of extra-high voltage, large capacity and long-distance transmission, and as advanced technologies are applied, the demands on the safety, stability and economic operation of the power system become higher. A good grounding system is the fundamental insurance for keeping the power system operating safely. A good grounding system should ensure the following:


" To provide safety to personnel during normal and fault conditions by limiting step and touch potential.
" To assure correct operation of electrical devices.
" To prevent damage to electrical apparatus.
" To dissipate lightning strokes.
" To stabilize voltage during transient conditions and therefore to minimize the probability of flashover during the transients


As stated in ANSI/IEEE Standard 80-1986, "IEEE Guide for Safety in AC Substation Grounding," a safe grounding design has two objectives:


" To provide means to carry electric currents into the earth under normal and fault condition without exceeding any operational and equipment limit or adversely affecting continuity of service.
" To assure that a person in the vicinity of grounded facilities is not exposed to the danger of critical electrical shock.


A practical approach to safe grounding considers the interaction of two grounding systems: The intentional ground, consisting of ground electrodes buried at some depth below the earth surface, and the accidental ground, temporarily established by a person exposed to a potential gradient at a grounded facility.


An ideal ground should provide a near zero resistance to remote earth. In practice, the ground potential rise at the facility site increases proportionally to the fault current; the higher the current, the lower the value of total system resistance which must be obtained. For most large substations the ground resistance should be less than 1 Ohm. For smaller distribution substations the usually acceptable range is 1-5 Ohms, depending on the local conditions.
When a grounding system is designed, the fundamental method of ensuring the safety of human beings and power apparatus is to keep the step and touch voltages within their respective safe regions. Step and touch voltage can be defined as follows.

Step Voltage
It is defined as the voltage between the feet of a person standing near an energized object. It is equal to the difference in voltage, given by the voltage distribution curve, between two points at different distances from the electrode.


Touch Voltage
It is defined as the voltage between the energized object and the feet of the person in contact with the object. It is equal to the difference in voltage between the object and a point some distance away from it.
In different seasons, the resistivity of the surface soil layer changes. This affects the safety of grounding systems. Whether the step and touch voltages move towards the safe region or towards the hazard side is the main question of concern.
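
To make the seasonal effect concrete, the sketch below evaluates the tolerable step and touch voltage limits commonly given in IEEE Std 80 for a 50 kg person while varying the surface resistivity. The resistivity values and the surface derating factor Cs = 1 are illustrative assumptions, not measured data.

    # Hedged sketch: IEEE Std 80 style tolerable step and touch voltages for a
    # 50 kg person, showing how the surface-layer resistivity rho_s (which
    # varies with season: wet, dry, frozen soil) shifts the safe region.
    import math

    def tolerable_touch_voltage(rho_s, t_fault, cs=1.0):
        return (1000 + 1.5 * cs * rho_s) * 0.116 / math.sqrt(t_fault)

    def tolerable_step_voltage(rho_s, t_fault, cs=1.0):
        return (1000 + 6.0 * cs * rho_s) * 0.116 / math.sqrt(t_fault)

    # Illustrative seasonal resistivities (ohm-m) and a 0.5 s fault duration.
    for season, rho_s in [("wet", 100), ("dry", 300), ("frozen", 3000)]:
        e_touch = tolerable_touch_voltage(rho_s, t_fault=0.5)
        e_step = tolerable_step_voltage(rho_s, t_fault=0.5)
        print(f"{season:7s} rho_s={rho_s:5d} ohm-m  "
              f"E_touch={e_touch:7.0f} V  E_step={e_step:7.0f} V")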

Wavelet transforms

Introduction


Wavelet transforms have been one of the important signal processing developments of the last decade, especially for applications such as time-frequency analysis, data compression, segmentation and vision. During the past decade, several efficient implementations of wavelet transforms have been derived. The theory of wavelets has roots in quantum mechanics and the theory of functions, though a unifying framework is a recent occurrence. Wavelet analysis is performed using a prototype function called a wavelet.

Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent any arbitrary function f(t) as a superposition of a set of such wavelets or basis functions. These basis functions or baby wavelets are obtained from a single prototype wavelet called the mother wavelet, by dilations or contractions (scaling) and translations (shifts). Efficient implementations of the wavelet transform have been derived based on the Fast Fourier Transform and short-length 'fast-running FIR algorithms' in order to reduce the computational complexity per computed coefficient.
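
For reference, the dilation and translation idea can be written in the standard form of the continuous wavelet transform (a standard formulation added here for clarity, not a formula taken from this text):

    \psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right), \qquad
    W_f(a,b) = \int_{-\infty}^{\infty} f(t)\,\psi^{*}_{a,b}(t)\,dt

where a is the scale (dilation or contraction), b is the translation (shift) and \psi is the mother wavelet.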

First of all, why do we need a transform, or what is a transform anyway?

Mathematical transformations are applied to signals to obtain further information that is not readily available in the raw signal. Here, a time-domain signal is considered the raw signal, and a signal that has been transformed by any of the available transformations is considered a processed signal.

There are a number of transformations that can be applied, such as the Hilbert transform, short-time Fourier transform, Wigner transform and the Radon transform, among which the Fourier transform is probably the most popular. These transforms constitute only a small portion of a huge list of transforms that are available at engineers' and mathematicians' disposal. Each transformation technique has its own area of application, with advantages and disadvantages.

Importance Of The Frequency Information

Often, information that cannot be readily seen in the time domain can be seen in the frequency domain. Most of the signals in practice are time-domain signals in their raw format. That is, whatever that signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude.

When we plot time-domain signals, we obtain a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most signal processing related applications. In many cases, the most distinguished information is hidden in the frequency content of the signal. The frequency spectrum of a signal is basically the frequency components (spectral components) of that signal. The frequency spectrum of a signal shows what frequencies exist in the signal.
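
As a small, self-contained illustration (an assumed example, not taken from the text), the spectrum of a sampled sine wave shows a single peak at the sine's frequency, information that is not obvious from the time-amplitude plot alone:

    # The FFT of a 50 Hz sine sampled at 1 kHz shows a single spectral peak
    # at 50 Hz, the kind of information hidden in the time-domain plot.
    import numpy as np

    fs = 1000                                   # sampling frequency, Hz
    t = np.arange(0, 1.0, 1.0 / fs)             # 1 second of samples
    signal = np.sin(2 * np.pi * 50 * t)         # 50 Hz time-domain signal

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    print(f"Dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")   # -> 50.0 Hz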

Cyberterrorism

Definition

Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more a part of our way of life, it is becoming easier for its users to become targets of cyberterrorists. The number of areas in which cyberterrorists could strike is frightening, to say the least.

The difference between the conventional approaches to terrorism and the new methods is primarily that it is possible to affect a large multitude of people with minimum resources on the terrorist's side, with no danger to him at all. We also glimpse the reasons that have caused terrorists to look towards the Web, and why the Internet is such an attractive alternative to them.

The growth of Information Technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum havoc within a small time span. Various situations that can be viewed as acts of cyberterrorism have also been covered. Banks are the most likely places to receive threats, but it cannot be said that any establishment is beyond attack. Tips by which we can protect ourselves from cyberterrorism have also been covered which can reduce problems created by the cyberterrorist.


We, as the information technology people of tomorrow, need to study and understand the weaknesses of existing systems, and figure out ways of ensuring the world's safety from cyberterrorists. A number of issues here are ethical, in the sense that computing technology is now available to the whole world, but if this gift is used wrongly, the consequences could be disastrous. It is important that we understand and mitigate cyberterrorism for the benefit of society, and try to curtail its growth, so that we can heal the present and live the future…

Ipv6 - The Next Generation Protocol

Definition

The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village' utopia a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet would not have foreseen the tremendous growth rate of the network being witnessed today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress.

It cannot adequately support many services being envisaged, such as real time video conferencing, interconnection of gigabit networks with lower bandwidths, high security applications such as electronic commerce, and interactive virtual reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of four billion systems only, which is a small number as compared to the projected systems on the Internet in the twenty-first century.

Each machine on the net is given a 32-bit address. With 32 bits, a maximum of about four billion addresses is possible. Though this is a large number, soon the Internet will have TV sets and even pizza machines connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of refinements, several other features were also added to make it suitable for the next-generation Internet.
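
The address-space arithmetic behind this can be checked directly (a simple illustration added here, not from the text):

    # 32-bit addressing gives roughly 4.3 billion addresses, while the 128-bit
    # addressing introduced below removes the shortage for any foreseeable growth.
    print(f"IPv4: 2**32  = {2**32:,} addresses")      # 4,294,967,296
    print(f"IPv6: 2**128 = {2**128:.3e} addresses")   # about 3.4e38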

This version was initially named IPng (IP next generation) and is now officially known as IPv6. IPv6 supports 128-bit addresses, the source address and the destination address each being 128 bits long. IPv5, a minor variation of IPv4, is presently running on some routers. Presently, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task, and the transition is likely to take a very long time.

However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.

Driving Optical Network Evolution

Definition

Over the years, advancements in technology have eased transmission limitations, increased the number of wavelengths we can send down a piece of fiber, and improved performance, amplification techniques, and the protection and redundancy of the network. When people have described and spoken at length about optical networks, they have typically limited the discussion of optical network technology to providing physical-layer connectivity.

When actual network services are discussed, optical transport is augmented through the addition of several protocol layers, each with its own sets of unique requirements, to make up a service-enabling network. Until recently, transport was provided through specific companies that concentrated on the core of the network and provided only point-to-point transport services.

A strong shift in revenue opportunities from a service provider and vendor perspective, changing traffic patterns from the enterprise customer, and the capability to drive optical fiber into metropolitan (metro) areas have opened up the next emerging frontier of networking. Providers are now considering emerging lucrative opportunities in the metro space. Whereas traditional or incumbent vendors have been installing optical equipment in the space for some time, little attention has been paid to the opportunity available through the introduction of new technology advancements and the economic implications these technologies will have.

Specifically, the new technologies in the metro space provide better and more profitable economics, scale, and new services and business models. The current metro infrastructure comprises equipment that emphasizes voice traffic, is limited in scalability, and was not designed to take advantage of new technologies, topologies, and changing traffic conditions.

Next-generation equipment such as next-generation Synchronous Optical Network (SONET), metro core dense wavelength division multiplexing (DWDM), metro-edge DWDM, and advancements in the optical core address these limitations: they are scalable and data optimized; they include integrated DWDM functionality and new amplification techniques; and they have made improvements in the operational and provisioning cycles. This tutorial provides technical information that can help engineers address numerous Cisco innovations and technologies for Cisco Complete Optical Multiservice Edge and Transport (Cisco COMET). They can be broken down into five key areas: photonics, protection, protocols, packets, and provisioning.

Radio Network Controller

Definition

A Radio Network Controller (RNC) provides the interface between the wireless devices communicating through Node B transceivers and the network edge. This includes controlling and managing the radio transceivers in the Node B equipment, as well as management tasks like soft handoff.

The RNC performs tasks in a 3G wireless network analogous to those of the Base Station Controller (BSC) in a 2G or 2.5G network. It interfaces with Serving GPRS Support Nodes (SGSNs) and Gateway GPRS Support Nodes (GGSNs) to mediate with the network service providers.

A radio network controller manages hundreds of Node B transceiver stations while switching and provisioning services off the Mobile Switching Center and 3G data network interfaces. The connection from the RNC to a Node B is called the User Plane Interface Layer and it uses T1/E1 transport to the RNC.

Due to the large number of Node B transceivers, a T1/E1 aggregator is used to deliver the Node B data over channelized OC-3 optical transport to the RNC. The OC-3 pipe can be a direct connection to the RNC or through traditional SONET/SDH transmission networks.

A typical Radio Network Controller may be built on a PICMG or Advanced TCA chassis. It contains several different kinds of cards specialized for performing the functions and interacting with the various interfaces of the RNC.

Wireless Networked Digital Devices

Definition

The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs), and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations.

This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization. We recognize the need for PDAs to be as manageable as mobile phones, and also the restricted screen and input area of the mobile phone; hence the need for a new breed of computing devices to fit the bill for a PAN. These devices become especially relevant for mobile users such as surgeons and jet plane mechanics who need both hands free and thus would need to have "wearable" computers.

This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper. Designing a network infrastructure is much more complex.

The second half of the paper describes several research projects that try to address components of the networking infrastructure. Finally there are the questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile Communications) cellular phones in Europe?

Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.

Technologies explored:
1. Electric field - uses the human body as a current conduit.
2. Magnetic field - uses base station technology for picocells of space.
3. Infrared - basic issues include opaque body obstruction.
4. Wireless radio frequency - the best technology option, but it has to deal with the finite resource of the electromagnetic spectrum.

It must also meet international standards through a compatible protocol.
a. UHF Radio.
b. Super-regenerative receiver.
c. SAW/ASH Receiver.

3- D IC's

Introduction
There is a saying in real estate: when land gets expensive, multi-storied buildings are the alternative solution. We have a similar situation in the chip industry. For the past thirty years, chip designers have considered whether building integrated circuits in multiple layers might create cheaper, more powerful chips.

Performance of deep-submicrometer very large scale integrated (VLSI) circuits is being increasingly dominated by the interconnects, due to decreasing wire pitch and increasing die size. Additionally, heterogeneous integration of different technologies on one single chip is becoming increasingly desirable, for which planar (2-D) ICs may not be suitable.

The three-dimensional (3-D) chip design strategy exploits the vertical dimension to alleviate the interconnect-related problems and to facilitate heterogeneous integration of technologies to realize a system-on-a-chip (SoC) design. By simply dividing a planar chip into separate blocks, each occupying a separate physical level interconnected by short, vertical interlayer interconnects (VILICs), significant improvement in performance and reduction in wire-limited chip area can be achieved. In the 3-D design architecture, an entire chip is divided into a number of blocks, and each block is placed on a separate layer of Si; these layers are stacked on top of each other.

Motivation For 3-D ICs

The unprecedented growth of the computer and the information technology industry is demanding Very Large Scale Integrated ( VLSI ) circuits with increasing functionality and performance at minimum cost and power dissipation. Continuous scaling of VLSI circuits is reducing gate delays but rapidly increasing interconnect delays. A significant fraction of the total power consumption can be due to the wiring network used for clock distribution, which is usually realized using long global wires.

Furthermore, the increasing drive for the integration of disparate signals (digital, analog, RF) and technologies (SOI, SiGe, GaAs, and so on) is introducing various SoC design concepts, for which existing planar (2-D) IC design may not be suitable.

3D Architecture

Three-dimensional integration to create multilayer Si ICs is a concept that can significantly improve interconnect performance, increase transistor packing density, and reduce chip area and power dissipation. Additionally, 3D ICs can be very effective for large-scale on-chip integration of different systems.

In the 3D design architecture, an entire (2D) chip is divided into a number of blocks, and each block is placed on a separate layer of Si; these layers are stacked on top of each other. Each Si layer in the 3D structure can have multiple layers of interconnects, vertical interlayer interconnects (VILICs) and common global interconnects.

Sensors on 3D Digitization

Introduction

Digital 3D imaging can benefit from advances in VLSI technology in order to accelerate its deployment in many fields like visual communication and industrial automation. High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated. Intelligent digitizers will be capable of measuring colour and 3D shape accurately and simultaneously.

Colour 3D Imaging Technology

Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available [1].

Passive vision attempts to analyze the structure of the scene under ambient light [1]. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
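
For a rectified stereo pair, the geometry step reduces to a simple relation between disparity and depth. The sketch below is a minimal illustration with purely assumed camera parameters, not a description of any particular system.

    # Depth from disparity for a rectified stereo pair: Z = f * b / d,
    # where f is the focal length (pixels), b the baseline (metres) and
    # d the disparity (pixels) of a matched point. Values are illustrative.
    def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
        """Return the depth of a matched scene point in metres."""
        return focal_length_px * baseline_m / disparity_px

    print(stereo_depth(focal_length_px=800.0, baseline_m=0.12, disparity_px=16.0))  # -> 6.0 m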

Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.

Sensors For 3D Imaging

The sensors used in the autosynchronized scanner include

1. Synchronization Circuit Based Upon Dual Photocells

This sensor ensures the stability and the repeatability of range measurements in environments with varying temperature. Discrete implementations of the so-called synchronization circuits have posed many problems in the past. A monolithic version of an improved circuit has been built to alleviate those problems. [1]

2. Laser Spot Position Measurement Sensors

High-resolution 3D images can be acquired using laser-based vision systems. With this approach, the 3D information becomes relatively insensitive to background illumination and surface texture. Complete images of visible surfaces that are rather featureless to the human eye or a video camera can be generated.[1]

Fuzzy Logic

Introduction

Fuzzy Logic (FL) is a problem-solving control system methodology that lends itself to implementation in systems ranging from simple, small, embedded micro-controllers to large, networked, multi-channel PC or workstation-based data acquisition and control systems. It can be implemented in hardware, software, or a combination of both. FL provides a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information.

FL's approach to control problems mimics how a person would make decisions, only much faster.
As the complexity of a system increases, it becomes more difficult and eventually impossible to make a precise statement about its behavior, eventually reaching a point of complexity where the fuzzy reasoning natural to humans is the only way to get at the problem.

History

The concept of Fuzzy Logic (FL) was conceived by Lotfi Zadeh, a professor at the University of California at Berkeley, and presented not as a control methodology, but as a way of processing data by allowing partial set membership rather than crisp set membership or non-membership. This approach to set theory was not applied to control systems until the 1970s due to insufficient small-computer capability prior to that time. Professor Zadeh reasoned that people do not require precise, numerical information input, and yet they are capable of highly adaptive control. If feedback controllers could be programmed to accept noisy, imprecise input, they would be much more effective and perhaps easier to implement. Unfortunately, U.S. manufacturers have not been so quick to embrace this technology, while the Europeans and Japanese have been aggressively building real products around it.

How is FL different from conventional control methods?

FL incorporates a simple, rule-based IF X AND Y THEN Z approach to solving a control problem rather than attempting to model a system mathematically. The FL model is empirically based, relying on an operator's experience rather than their technical understanding of the system. For example, rather than dealing with temperature control in numerical terms such as "SP = 500F" or "T < 1000F", linguistic rules such as "IF (process is too cool) AND (process is getting colder) THEN (add heat to the process)" are used.

How does FL work?

FL requires some numerical parameters in order to operate such as what is considered significant error and significant rate-of-change-of-error, but exact values of these numbers are usually not critical unless very responsive performance is required in which case empirical tuning would determine them. For example, a simple temperature control system could use a single temperature feedback sensor whose data is subtracted from the command signal to compute "error" and then time-differentiated to yield the error slope or rate-of-change-of-error, hereafter called "error-dot". Error might have units of degs F and a small error considered to be 2F while a large error is 5F. The "error-dot" might then have units of degs/min with a small error-dot being 5F/min and a large one being 15F/min. These values don't have to be symmetrical and can be "tweaked" once the system is operating in order to optimize performance. Generally, FL is so forgiving that the system will probably work the first time without any tweaking.
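
A minimal sketch of this scheme is shown below, using the magnitudes quoted above (a small error of about 2F versus a large error of about 5F, and 5F/min versus 15F/min for error-dot); the membership shapes, rule outputs and defuzzification method are illustrative assumptions rather than a prescribed design.

    # Minimal rule-based fuzzy controller sketch: IF error AND error-dot THEN
    # output, with AND = min and weighted-average defuzzification.
    def ramp_down(x, lo, hi):
        """1 below lo, 0 above hi, linear in between ("small" membership)."""
        if x <= lo:
            return 1.0
        if x >= hi:
            return 0.0
        return (hi - x) / (hi - lo)

    def ramp_up(x, lo, hi):
        """0 below lo, 1 above hi, linear in between ("large" membership)."""
        return 1.0 - ramp_down(x, lo, hi)

    def fuzzy_heater_command(error, error_dot):
        """Map error (F) and error-dot (F/min) to a heater power command (%)."""
        e, ed = abs(error), abs(error_dot)
        e_small, e_large = ramp_down(e, 2.0, 5.0), ramp_up(e, 2.0, 5.0)
        ed_small, ed_large = ramp_down(ed, 5.0, 15.0), ramp_up(ed, 5.0, 15.0)

        # Rule strengths (AND = min) and their crisp output levels (% power).
        rules = [
            (min(e_small, ed_small), 10.0),   # IF error small AND error-dot small THEN low power
            (min(e_small, ed_large), 30.0),   # IF error small AND error-dot large THEN medium-low
            (min(e_large, ed_small), 60.0),   # IF error large AND error-dot small THEN medium-high
            (min(e_large, ed_large), 90.0),   # IF error large AND error-dot large THEN high power
        ]
        total = sum(strength for strength, _ in rules)
        return sum(strength * level for strength, level in rules) / total if total else 0.0

    print(f"{fuzzy_heater_command(error=4.0, error_dot=10.0):.1f} % power")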

Simputer

Introduction

The Simputer is a multilingual, mass-access, low-cost handheld device currently being developed. The Information Markup Language (IML) is the primary format of the content accessed by the Simputer. IML has been created to provide a uniform experience to users and to allow rapid development of solutions on any platform.
The Simputer proves that illiteracy is no longer a barrier to handling a computer. The Simputer, through its smart card feature, allows for personal information management at the individual level for an unlimited number of users. Applications in diverse sectors can be made possible at an affordable price.

A rapid growth of knowledge can only happen in an environment which admits free exchange of thought and information. Indeed, nothing else can explain the astounding progress of science in the last three hundred years. Technology has unfortunately not seen this freedom too often. Several rounds of intense discussions among the trustees convinced them that the only way to break out of the current absurdities is to foster a spirit of co-operation in inventing new technologies. The common mistake of treating co-operation as a synonym for charity poses its own challenges. The Simputer Licensing Framework is the Trust's response to these challenges.

What is Simputer?
A Simputer is a multilingual, mass-access, low-cost, portable alternative to PCs, by which the benefits of IT can reach the common man. It has a special role in the third world because it ensures that illiteracy is no longer a barrier to handling a computer. The key to bridging the digital divide is to have shared devices that permit truly simple and natural user interfaces based on sight, touch and audio. The Simputer meets these demands through a browser for the Information Markup Language (IML). IML has been created to provide a uniform experience to users and to allow rapid development of solutions on any platform.

Features
Simputer is a hand held device with the following features:
- It is portable
- A 320 x 240 LCD panel which is touch enabled
- A speaker, microphone and a few keys
- A soft keyboard
- A stylus as a pointing device
- A smart card reader
- Extensive use of audio in the form of text-to-speech and audio snippets
The display resolution is much smaller than the usual desktop monitor but much higher than usual wireless devices (cell phones, pagers etc). The operating system for the Simputer is Linux. It is designed so that Linux is started up infrequently; the Simputer stays in a low-power mode during the times it is not in use. When the Simputer is 'powered on', the user is presented with a screen having several icons.

What Makes Simputer Different From Regular PCs?
The Simputer is not a personal computer. It could however be a pocket computer. It is much more powerful than a Palm, with a screen size of 320 x 240 and memory capability (32 MB RAM). The Wintel (Windows + Intel) architecture of the de facto standard PC is quite unsuitable for deployment in the low-cost mass market. The entry barrier due to software licensing is just too high. While the Wintel PC provides a de facto level of standardization, it is not an open architecture. The Simputer, meanwhile, is centered around Linux, which is freely available, open and modular.

Wavelet Video Processing Technology

Introduction

Uncompressed multimedia data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass storage density, processor speeds and digital communication system performance, demand for data storage capacity and data transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology.

For still image compression, the Joint Photographic Experts Group (JPEG) standard has been established. The performance of these coders generally degrades at low bit rates, mainly because of the underlying block-based Discrete Cosine Transform (DCT) scheme. More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. Over the past few years, a variety of powerful and sophisticated wavelet-based schemes for image compression have been developed and implemented. Because of the many advantages, the top contenders in the JPEG-2000 standard are all wavelet-based compression algorithms.

Image Compression

Image compression is a technique for processing images. It is the compression of graphics for storage or transmission. Compressing an image is significantly different from compressing raw binary data. Some general-purpose compression programs can be used to compress images, but the result is less than optimal. This is because images have certain statistical properties which can be exploited by encoders specifically designed for them. Also, some finer details in the image can be sacrificed to save storage space.

Compression is basically of two types.
1. Lossy Compression
2. Lossless Compression.

Lossy compression of data concedes a certain loss of accuracy in exchange for greatly increased compression. An image reconstructed following lossy compression contains degradation relative to the original. Often this is because the compression scheme completely discards redundant information. Under normal viewing conditions, however, no visible loss is perceived. It proves effective when applied to graphics images and digitized voice.
Lossless compression consists of those techniques guaranteed to generate an exact duplicate of the input data stream after a compress/expand cycle. Here the reconstructed image after compression is numerically identical to the original image. Lossless compression can only achieve a modest amount of compression. This is the type of compression used when storing database records, spreadsheets or word processing files.
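
The lossless guarantee is easy to demonstrate with a general-purpose compressor (a quick illustration added here, assuming nothing about any particular image codec):

    # A zlib compress/expand cycle reproduces the input byte-for-byte; a lossy
    # scheme would instead trade this exactness for a much smaller output.
    import zlib

    original = b"database records, spreadsheets and text compress losslessly " * 100
    compressed = zlib.compress(original)
    restored = zlib.decompress(compressed)

    assert restored == original                       # exact duplicate of the input
    print(len(original), "->", len(compressed), "bytes")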

IP Telephony

Introduction

If you've never heard of Internet Telephony, get ready to change the way you think about long-distance phone calls. Internet Telephony, or Voice over Internet Protocol, is a method for taking analog audio signals, like the kind you hear when you talk on the phone, and turning them into digital data that can be transmitted over the Internet.
How is this useful? Internet Telephony can turn a standard Internet connection into a way to place free phone calls. The practical upshot of this is that by using some of the free Internet Telephony software that is available to make Internet phone calls, you are bypassing the phone company (and its charges) entirely.

Internet Telephony is a revolutionary technology that has the potential to completely rework the world's phone systems. Internet Telephony providers like Vonage have already been around for a little while and are growing steadily. Major carriers like AT&T are already setting up Internet Telephony calling plans in several markets around the United States, and the FCC is looking seriously at the potential ramifications of Internet Telephony service.
Above all else, Internet Telephony is basically a clever "reinvention of the wheel." In this article, we'll explore the principles behind Internet Telephony, its applications and the potential of this emerging technology, which will more than likely one day replace the traditional phone system entirely.

The interesting thing about Internet Telephony is that there is not just one way to place a call.

There are three different "flavors" of Internet Telephony service in common use today:
ATA - The simplest and most common way is through the use of a device called an ATA (analog telephone adaptor). The ATA allows you to connect a standard phone to your computer or your Internet connection for use with Internet Telephony.

The ATA is an analog-to-digital converter. It takes the analog signal from your traditional phone and converts it into digital data for transmission over the Internet. Providers like Vonage and AT&T CallVantage are bundling ATAs free with their service. You simply crack the ATA out of the box, plug the cable from your phone that would normally go in the wall socket into the ATA, and you're ready to make Internet Telephony calls. Some ATAs may ship with additional software that is loaded onto the host computer to configure it; but in any case, it is a very straightforward setup.

IP Phones - These specialized phones look just like normal phones with a handset, cradle and buttons. But instead of having the standard RJ-11 phone connectors, IP phones have an RJ-45 Ethernet connector. IP phones connect directly to your router and have all the hardware and software necessary right onboard to handle the IP call. Wi-Fi phones allow subscribing callers to make Internet Telephony calls from any Wi-Fi hot spot.

Computer-to-computer - This is certainly the easiest way to use Internet Telephony. You don't even have to pay for long-distance calls. There are several companies offering free or very low-cost software that you can use for this type of Internet Telephony. All you need is the software, a microphone, speakers, a sound card and an Internet connection, preferably a fast one like you would get through a cable or DSL modem. Except for your normal monthly ISP fee, there is usually no charge for computer-to-computer calls, no matter the distance.

If you're interested in trying Internet Telephony, then you should check out some of the free Internet Telephony software available on the Internet. You should be able to download and set it up in about three to five minutes. Get a friend to download the software, too, and you can start tinkering with Internet Telephony to get a feel for how it works.

RPR

Introduction

The nature of the public network has changed. Demand for Internet Protocol (IP) data is growing at a compound annual rate of between 100% and 800%, while voice demand remains stable. What was once a predominantly circuit-switched network handling mainly circuit-switched voice traffic has become a circuit-switched network handling mainly IP data. Because the nature of the traffic is not well matched to the underlying technology, this network is proving very costly to scale. User spending has not increased proportionally to the rate of bandwidth increase, and carrier revenue growth is stuck at the lower end of 10% to 20% per year. The result is that carriers are building themselves out of business.

Over the last 10 years, as data traffic has grown both in importance and volume, technologies such as frame relay, ATM, and Point-to-Point Protocol (PPP) have been developed to force-fit data onto the circuit network. While these protocols provided virtual connections, a useful approach for many services, they have proven too inefficient, costly and complex to scale to the levels necessary to satisfy the insatiable demand for data services. More recently, Gigabit Ethernet (GigE) has been adopted by many network service providers as a way to network user data without the burden of SONET/SDH and ATM. GigE's shortcomings when applied in carrier networks were recognized, and to address these problems a technology called Resilient Packet Ring (RPR) was developed.

RPR retains the best attributes of SONET/SDH, ATM, and Gigabit Ethernet. RPR is optimized for differentiated IP and other packet data services, while providing uncompromised quality for circuit voice and private line services. It works in point-to-point, linear, ring, or mesh networks, providing ring survivability in less than 50 milliseconds. RPR dynamically and statistically multiplexes all services into the entire available bandwidth in both directions on the ring while preserving bandwidth and service quality guarantees on a per-customer, per-service basis. And it does all this at a fraction of the cost of legacy SONET/SDH and ATM solutions.

Data, rather than voice circuits, dominates today's bandwidth requirements. New services such as IP VPN, voice over IP (VoIP), and digital video are no longer confined within the corporate local-area network (LAN). These applications are placing new requirements on metropolitan-area network (MAN) and wide-area network (WAN) transport. RPR is uniquely positioned to fulfill these bandwidth and feature requirements as networks transition from circuit-dominated to packet-optimized infrastructures.

RPR technology uses a dual counter rotating fiber ring topology. Both rings (inner and outer) are used to transport working traffic between nodes. By utilizing both fibers, instead of keeping a spare fiber for protection, RPR utilizes the total available ring bandwidth. These fibers or ringlets are also used to carry control (topology updates, protection, and bandwidth control) messages. Control messages flow in the opposite direction of the traffic that they represent. For instance, outer-ring traffic-control information is carried on the inner ring to upstream nodes.

pH Control Technique using Fuzzy Logic

Introduction
Fuzzy control is a practical alternative for a variety of challenging control applications, since it provides a convenient method for constructing non-linear controllers via the use of heuristic information. Such heuristic information may come from an operator who has acted as a "human-in-the-loop" controller for a process. In the fuzzy control design methodology, a set of rules on how to control the process is written down and then incorporated into a fuzzy controller that emulates the decision-making process of the human.

In other cases, the heuristic information may come from a control engineer who has performed extensive mathematical modelling, analysis and development of control algorithms for a particular process. The rest of the process is the same as the earlier case. The ultimate objective of using fuzzy control is to provide a user-friendly formalism for representing and implementing the ideas we have about how to achieve high performance control. Apart from being a heavily used technology these days, fuzzy logic control is simple, effective and efficient. In this paper, the structure, working and design of a fuzzy controller is discussed in detail through an in-depth analysis of the development and functioning of a fuzzy logic pH controller.

pH Control

To illustrate the application of fuzzy logic, the remainder of the paper is devoted to the design and working of a pH control system using fuzzy logic.

pH is an important variable in the field of production, especially in chemical plants, sugar industries, etc. The pH of a solution is defined as the negative of the logarithm, to the base 10, of the hydrogen ion concentration, i.e., pH = -log10[H+].
Let us consider the stages of operation of a sugar industry where pH control is required. The main area of concern is the clarification of the raw juice of sugarcane. The raw juice has a pH of 5.1 to 5.5, whereas the clarified juice should ideally be neutral, i.e., the set point should be a pH of 7. The process involves the continuous addition of lime and SO2 gas for clarifying the raw juice; the addition of these two is called liming and sulphitation respectively. Lime has the property of increasing the pH of the juice, and this is the principle used for pH control in sugar industries. The pH of the raw juice is measured, this value is compared with the set point, and the result is used to change the diameter of the lime flow pipe as required.

The whole process can be summarised as follows. The pH sensor measures the pH. This reading is amplified and recorded. The output of the amplifier is also fed to the pH indicator and interface. The output of this block is fed to the fuzzy controller, and the output of the fuzzy controller is given to the stepper motor drive. This in turn adjusts the diameter of the lime flow pipe as required. Thus, the input to the fuzzy controller is the pH reading of the raw juice.

The output of the fuzzy controller is the diameter of the lime flow pipe valve, or a quantity that controls the diameter of the lime flow pipe valve, such as a DC current or voltage. The output obtained from the fuzzy controller is used to drive a stepper motor, which in turn controls the diameter of the valve opening of the lime flow pipe. This output tends to maintain the pH of the sugar juice at the target value. A detailed description of the design and functioning of the fuzzy controller is given in the following section.
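A minimal sketch of such a controller is given below (Python). It assumes triangular membership functions over the pH error (set point minus measured pH), a three-rule base and centre-of-gravity defuzzification producing a valve-opening adjustment in percent; the membership ranges, rules and scaling are illustrative assumptions rather than the tuning of an actual sugar-plant controller.

    # Illustrative fuzzy pH controller sketch (assumed membership functions and rules).
    SET_POINT = 7.0

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def fuzzify_error(e):
        """Degree of membership of the pH error in three assumed fuzzy sets."""
        return {
            "negative": tri(e, -4.0, -2.0, 0.0),   # juice too alkaline
            "zero":     tri(e, -1.0,  0.0, 1.0),   # near the set point
            "positive": tri(e,  0.0,  2.0, 4.0),   # juice too acidic
        }

    # Assumed rule base: crisp valve-adjustment value (in %) for each error set.
    RULES = {"negative": -20.0, "zero": 0.0, "positive": +20.0}

    def valve_adjustment(measured_ph):
        """Centre-of-gravity defuzzification over the singleton rule outputs."""
        mu = fuzzify_error(SET_POINT - measured_ph)
        num = sum(mu[s] * RULES[s] for s in RULES)
        den = sum(mu.values())
        return num / den if den else 0.0

    print(valve_adjustment(5.3))   # acidic raw juice -> positive output, open the lime valve further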

Multisensor Fusion and Integration

Introduction
A sensor is a device that detects or senses the value, or changes in the value, of the variable being measured. The term sensor is sometimes used in place of the terms detector, primary element or transducer.

The fusion of information from sensors with different physical characteristics, such as light, sound, etc., enhances the understanding of our surroundings and provides the basis for planning, decision making, and control of autonomous and intelligent machines.

Sensors Evolution

A sensor is a device that responds to some external stimulus and then provides some useful output. With the concept of input and output, one can begin to understand how sensors play a critical role in both closed- and open-loop systems.

One problem is that sensors are not specific: they tend to respond to a variety of stimuli applied to them without being able to differentiate one from another. Nevertheless, sensors and sensor technology are necessary ingredients in any control application. Without the feedback from the environment that sensors provide, the system has no data or reference points, and thus no way of knowing what is right or wrong with its various elements.

Sensors are particularly important in automated manufacturing, especially in robotics. Automated manufacturing is essentially the process of removing the human element as far as possible from the manufacturing process. Sensors in the condition-measurement category sense various types of inputs, conditions, or properties to help monitor and predict the performance of a machine or system.

Multisensor Fusion And Integration

Multisensor integration is the synergistic use of the information provided by multiple sensory devices to assist in the accomplishment of a task by a system.

Multisensor fusion refers to any stage in the integration process where there is an actual combination of different sources of sensory information into one representational format.

Multisensor Integration

The diagram represents multisensor integration as a composite of basic functions. A group of n sensors provides input to the integration process. In order for the data from each sensor to be used for integration, it must first be effectively modelled. A sensor model represents the uncertainty and error in the data from each sensor and provides a measure of its quality that can be used by the subsequent integration functions.
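As a simplified, concrete illustration of how modelled uncertainty feeds the integration step, the sketch below (Python) fuses two readings of the same quantity by inverse-variance weighting, a standard textbook fusion rule; the sensor types and variance values are assumed for the example.

    # Minimal sensor-fusion sketch: inverse-variance weighted average of two readings.
    # The variances play the role of the per-sensor uncertainty model described above.
    def fuse(readings_and_variances):
        """Fuse (value, variance) pairs into one estimate and its variance."""
        weights = [1.0 / var for _, var in readings_and_variances]
        total = sum(weights)
        fused_value = sum(w * val for w, (val, _) in zip(weights, readings_and_variances)) / total
        fused_variance = 1.0 / total      # the fused estimate is more certain than either sensor alone
        return fused_value, fused_variance

    # Assumed example: a range measured by a noisier sonar and a more precise laser sensor.
    print(fuse([(10.4, 0.25), (10.1, 0.04)]))   # result lies closer to the more reliable sensor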

Integrated Power Electronics Module

Introduction

In power electronics, solid-state electronics is used for the control and conversion of electric power. The goal of power electronics is to realize power conversion from an electrical source to an electrical load in a highly efficient, highly reliable and cost-effective way. Power electronics modules are key units in a power electronics system. These modules integrate power switches with the associated electronic circuitry for drive, control and protection, together with other passive components.

During the past decades, power devices have undergone generation-by-generation improvements and can now handle significant power density. Power electronics packaging, on the other hand, has not kept pace with the development of semiconductor devices. This is due to the particular demands of power electronics circuits: their integration is quite different from that of other electronic circuits, because their objective is electrical energy processing, which requires high power-handling capability and proper thermal management.

Most of the currently used power electronics modules are made using wire-bonding technology [1,2]. In these packages, power semiconductor dies are mounted on a common substrate and interconnected with wire bonds. Other associated electronic circuitry is mounted on a multilayer PCB and connected to the power devices by vertical pins. These wire bonds introduce resistance and parasitics and are prone to fatigue failure. Because of its two-dimensional structure, the package is large. Another disadvantage is the ringing produced by the parasitics associated with the wire bonds.

To improve the performance and reliability of power electronics packages, wire bonds must be replaced. Research in power electronics packaging has resulted in the development of an advanced packaging technique that can replace wire bonds. This new-generation package is termed the 'Integrated Power Electronics Module' (IPEM) [1]. In it, planar metallization is used instead of conventional wire bonds. It uses a three-dimensional integration technique that can provide low-profile, high-density systems. It offers high-frequency operation and improved performance, and it also reduces the size, weight and cost of the power modules.

Features Of IPEMS

The basic structure of an IPEM contains power semiconductor devices, control/drive/protection electronics and passive components. The power devices together with their drive and protection circuits are called the active IPEM, and the remaining part is called the passive IPEM. The drive and protection circuits are realized in the form of a hybrid integrated circuit and packaged together with the power devices. Passive components include inductors, capacitors, transformers, etc.

The commonly used power switching devices are MOSFETs and IGBTs [3], mainly because of their high-frequency operation and low on-state losses. Another advantage is their inherently vertical structure, in which the metallization electrode pads are on the two faces of the chip. Usually the gate and source pads are on the top surface with a non-solderable thin-film aluminium contact, while the drain metallization, using Ag or Au, is deposited on the bottom of the chip and is solderable. This vertical structure of power chips makes it convenient to build sandwich-type 3-D integration structures.

H.323

Introduction

The H.323 standard provides a foundation for audio, video, and data communications across IP-based networks, including the Internet. By complying with H.323, multimedia products and applications from multiple vendors can interoperate, allowing users to communicate without concern for compatibility. H.323 will be the keystone for LAN-based products for consumer, business, entertainment, and professional applications.

H.323 is an umbrella recommendation from the International Telecommunications Union (ITU) that sets standards for multimedia communications over Local Area Networks (LANs) that do not provide a guaranteed Quality of Service (QoS). These networks dominate today's corporate desktops and include packet-switched TCP/IP and IPX over Ethernet, Fast Ethernet and Token Ring network technologies. Therefore, the H.323 standards are important building blocks for a broad new range of collaborative, LAN-based applications for multimedia communications.

The H.323 specification was approved in 1996 by the ITU's Study Group 16. Version 2 was approved in January 1998. The standard is broad in scope and includes both stand-alone devices and embedded personal computer technology as well as point-to-point and multipoint conferences. H.323 also addresses call control, multimedia management, and bandwidth management as well as interfaces between LANs and other networks.

H.323 is part of a larger series of communications standards that enable videoconferencing across a range of networks. Known as H.32X, this series includes H.320 and H.324, which address ISDN and PSTN communications, respectively.

IMPORTANCE OF H.323

The H.323 Recommendation is comprehensive, yet flexible, and can be applied to voice-only handsets and full multimedia video-conferencing stations, among others. H.323 applications are set to grow into the mainstream market for several reasons.

- H.323 sets multimedia standards for the existing infrastructure (i.e. IP-based networks). Designed to compensate for the effect of highly variable LAN latency, H.323 allows customers to use multimedia applications without changing their network infrastructure.
- IP LANs are becoming more powerful. Ethernet bandwidth is migrating from 10 Mbps to 100 Mbps, and Gigabit Ethernet is making headway into the market.
- By providing device-to-device, application-to-application, and vendor-to-vendor interoperability, H.323 allows customer products to interoperate with other H.323-compliant products.
- PCs are becoming more powerful multimedia platforms due to faster processors, enhanced instruction sets, and powerful multimedia accelerator chips.
- H.323 provides standards for interoperability between LANs and other networks.
- Network loading can be managed. With H.323, the network manager can restrict the amount of network bandwidth available for conferencing. Multicast support also reduces bandwidth requirements.
- H.323 has the support of many computing and communications companies and organizations, including Intel, Microsoft, Cisco, and IBM. The efforts of these companies will generate a higher level of awareness in the market.

GMPLS

Introduction

The emergence of optical transport systems has dramatically increased the raw capacity of optical networks and has enabled new sophisticated applications such as network-based storage, bandwidth leasing and data mirroring. Add/drop multiplexers (ADM), dense wavelength division multiplexing (DWDM) systems, optical cross-connects (OXC), photonic cross-connects (PXC), and multiservice switching platforms are some of the devices that may make up an optical network, which is expected to be the main carrier for the growth in data traffic.

Multiple Types of Switching and Forwarding Hierarchies

Generalized MPLS (GMPLS) differs from traditional MPLS in that it supports multiple types of switching, i.e. the addition of support for TDM, lambda, and fiber (port) switching. The support for the additional types of switching has driven GMPLS to extend certain base functions of traditional MPLS and, in some cases, to add functionality. These changes and additions impact basic LSP properties, how labels are requested and communicated, the unidirectional nature of LSPs, how errors are propagated, and information provided for synchronizing the ingress and egress LSRs.

1. Packet Switch Capable (PSC) interfaces:
Interfaces that recognize packet boundaries and can forward data based on the content of the packet header. Examples include interfaces on routers that forward data based on the content of the IP header and interfaces on routers that forward data based on the content of the MPLS "shim" header.

2. Time-Division Multiplex Capable (TDM) interfaces:
Interfaces that forward data based on the data's time slot in a repeating cycle. An example of such an interface is that of a SDH/SONET Cross-Connect (XC), Terminal Multiplexer (TM), or Add-Drop Multiplexer (ADM).

3. Lambda Switch Capable (LSC) interfaces:
Interfaces that forward data based on the wavelength on which the data is received. An example of such an interface is that of a Photonic Cross-Connect (PXC) or Optical Cross-Connect (OXC) that can operate at the level of an individual wavelength. Additional examples include PXC interfaces that can operate at the level of a group of wavelengths, i.e. a waveband.

4. Fiber-Switch Capable (FSC) interfaces:
Interfaces that forward data based on the position of the data in real-world physical space. An example of such an interface is that of a PXC or OXC that can operate at the level of a single fiber or of multiple fibers.

The diversity and complexity of managing these devices have been the main driving factors in the evolution and enhancement of the MPLS suite of protocols to provide control not only for packet-based domains, but also for time, wavelength, and space domains. GMPLS further extends the suite of IP-based protocols that manage and control the establishment and release of label switched paths (LSPs) that traverse any combination of packet, TDM, and optical networks. GMPLS adopts all of the technology of MPLS.
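To make the four interface classes concrete, the sketch below (Python) encodes them as an enumeration and attaches one to a toy label request; the class names follow the list above, but the request structure is an illustrative assumption and not the actual GMPLS signalling format.

    # Illustrative encoding of the GMPLS interface switching classes listed above.
    from enum import Enum
    from dataclasses import dataclass

    class SwitchingType(Enum):
        PSC = "packet"        # Packet Switch Capable
        TDM = "time-slot"     # Time-Division Multiplex Capable
        LSC = "lambda"        # Lambda Switch Capable
        FSC = "fiber"         # Fiber-Switch Capable

    @dataclass
    class LabelRequest:
        """Toy stand-in for a generalized label request (not the real message format)."""
        switching_type: SwitchingType
        ingress: str
        egress: str

    req = LabelRequest(SwitchingType.LSC, ingress="OXC-A", egress="OXC-B")
    print(f"Request an LSP of type {req.switching_type.value} from {req.ingress} to {req.egress}")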

Electrical & Electronics Engineering Seminar Topics

---Electrical Engineering---

Aspects of electrical engineering are found in almost every consumer device or appliance. For example, consumer technologies such as cell phones, personal digital assistants, MP3 and DVD players, personal computers, and high-definition TV each involve several areas of specialization within electrical engineering. Other technologies, such as the Internet, wireless network connections, medical instrumentation, manufacturing, and power distribution -- technologies most people take for granted -- also involve some area of electrical engineering. In fact, it is difficult to find an industry where electrical engineers do not play a role.


Adaptive optics

INTRODUCTION

Adaptive optics is a new technology which is being used nowadays in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of the stars seen through them. Without this system, the images obtained through telescopes on Earth appear blurred, because the turbulent mixing of air at different temperatures causes the speed and direction of starlight to vary as it continually passes through the atmosphere.
Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in the telescope to counteract, in real time, the atmospheric distortion.

The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant-type planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as their gravitational effects on their parent stars, and none has actually been detected directly.


WHAT IS ADAPTIVE OPTICS ?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory, a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.

Under ideal circumstances, the resolution of an optical system is limited by the diffraction of light waves. This so-called "diffraction limit" is generally described by the following angle (in radians), calculated from the light's wavelength and the optical system's pupil diameter:

θ = 1.22 λ / D

where θ is given in radians, λ is the wavelength of the light and D is the pupil diameter. Thus, the fully dilated human eye should be able to separate objects as close as 0.3 arcmin in visible light, and the Keck Telescope (10 m) should be able to resolve objects as close as 0.013 arcsec.
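Both figures can be checked with the short calculation below (Python); the wavelengths (550 nm for the eye, 500 nm for the Keck example) and the 7 mm dilated pupil are typical assumed values rather than quantities stated in the text.

    import math

    def diffraction_limit_rad(wavelength_m, aperture_m):
        """Rayleigh criterion: theta = 1.22 * lambda / D, in radians."""
        return 1.22 * wavelength_m / aperture_m

    RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

    # Fully dilated human eye: ~7 mm pupil at ~550 nm (assumed values).
    eye = diffraction_limit_rad(550e-9, 7e-3) * RAD_TO_ARCSEC / 60.0
    print(f"Eye:  ~{eye:.2f} arcmin")          # ~0.3 arcmin

    # Keck Telescope: 10 m aperture at ~500 nm (assumed wavelength).
    keck = diffraction_limit_rad(500e-9, 10.0) * RAD_TO_ARCSEC
    print(f"Keck: ~{keck:.3f} arcsec")         # ~0.013 arcsec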


In practice, these limits are never achieved. Due to imperfections in the cornea and lens of the eye, the practical limit to resolution is only about 1 arcmin. To turn the problem around, scientists wishing to study the retina of the eye can only see details about 5 microns in size. In astronomy, the turbulent atmosphere blurs images to a size of 0.5 to 1 arcsec even at the best sites.

Adaptive optics provides a means of compensating for these effects, leading to appreciably sharper images sometimes approaching the theoretical diffraction limit. With sharper images comes an additional gain in contrast - for astronomy, where light levels are often very low, this means fainter objects can be detected and studied.

Circuit Breaker Maintenance by Mobile Agent Software Technology

INTRODUCTION

Circuit breakers are crucial components for power system operations. They play an important role in switching for the routine network operation and protection of other devices in power systems. To ensure circuit breakers are in healthy condition, periodical inspection and preventive maintenance are typically performed. The maintenance schedules and routines usually follow the recommendation of circuit breaker vendors, although the recommended schedules may be conservative.

New maintenance techniques and methodologies are emerging, while circuit breakers keep improving in their designs and functions. As an example, some new circuit breakers have embedded monitoring instruments available to measure the coil current profiles and the operation timing. The recorded information can be used to monitor the condition of the breakers during each operation. In this case, it may be more appropriate to replace time-directed maintenance with condition-directed maintenance practice. When applied properly, both the size of the maintenance crew and the maintenance cost may be reduced greatly with this approach. Since the number of circuit breakers in a power system is usually very large, a small maintenance cost saving per circuit breaker can accumulate into a considerable benefit for the whole system. A more systematic solution is Reliability Centered Maintenance (RCM), which can be used to select the most appropriate maintenance strategy.

During the maintenance or repair work, the maintenance crew will need to access information distributed across the utility and stored using different data formats. By equipping the crew with new information access methods to replace the old paper-based information exchange and logging method, the efficiency may be improved since less time will be spent on preparation, reporting and logging. An information access method that is capable of handling heterogeneous information sources will be helpful to achieve the above goal. Also, the new information access method should be secure and able to work on unreliable public networks.

The mobile agent software provides a flexible framework for mobile agent applications. An agent application program can travel through the internet/intranet to the computers where the mobile agent server, or transporter, is running. The mobile agent software also supports Distributed Events, Agent Collaboration and Service Bridge. Compared with client-server systems, an agent can process the data locally and thus reduce the network traffic. Besides, the Java platform encapsulates the network layer away from the agent, which makes the programming easier. The mobile agent software may fit very well in the circuit breaker maintenance scenario. In this paper, we consider how mobile agent software might be applied to circuit breaker maintenance and monitoring from the viewpoint of the maintenance crew.


CIRCUIT BREAKER MAINTENANCE TASKS

The maintenance of circuit breakers deserves special consideration because of their importance for routine switching and for protection of other equipment. Electric transmission system breakups and equipment destruction can occur if a circuit breaker fails to operate because of a lack of preventive maintenance. The need for maintenance of circuit breakers is often not obvious, as circuit breakers may remain idle, either open or closed, for long periods of time. Breakers that remain idle for six months or more should be made to open and close several times in succession to verify proper operation and remove any accumulation of dust or foreign material on moving parts and contacts.

Circuit breakers mainly consist of the interrupter assembly (contacts, arc interrupters and arc chutes), operating mechanism, operating rod, control panel, sealing system, and breaking medium (SF6, oil, vacuum or air). To ensure the performance of a circuit breaker, all of these components should be kept in good condition; therefore time-directed preventive maintenance has been widely adopted. The preventive maintenance tasks include periodic inspection, testing, replacement of worn or defective components, and lubrication of the mechanical parts. The maintenance intervals are usually determined from experience or by following the recommended schedules provided by the vendor or by a standard.

The maintenance practices can be divided into three categories: corrective maintenance, preventive maintenance, and predictive maintenance.

Digital Testing of High Voltage Circuit Breaker

INTRODUCTION

With the advancement of power system, the lines and other equipment operate at very high voltages and carry large currents. High-voltage circuit breakers play an important role in transmission and distribution systems. A circuit breaker can make or break a circuit, either manually or automatically under all conditions viz. no-load, full-load and short-circuit conditions. The American National Standard Institute (ANSI) defines circuit breaker as: "A mechanical switching device capable of making, carrying and breaking currents under normal circuit conditions and also making, carrying for a specified time, and breaking currents under specified abnormal circuit conditions such as those of short circuit". A circuit breaker is usually intended to operate infrequently, although some types are suitable for frequent operation.

ESSENTIAL QUALITIES OF HV CIRCUIT BREAKER

High-voltage circuit breakers play an important role in transmission and distribution systems. They must clear faults and isolate faulted sections rapidly and reliably. In short, they must possess the following qualities.

- In closed position they are good conductors.
- In open position they are excellent insulators.
- They can close a shorted circuit quickly and safely without unacceptable contact erosion.
- They can interrupt a rated short-circuit current or lower current quickly without generating an abnormal voltage.

The only physical mechanism that can change in a short period of time from a conducting to insulating state at a certain voltage is the arc.

HISTORY

The first circuit breaker was developed by J.N. Kelman in 1901. It was the predecessor of the oil circuit breaker and was capable of interrupting a short-circuit current of 200 to 300 amperes in a 40 kV system. The circuit breaker was made up of two wooden barrels containing a mixture of oil and water in which the contacts were immersed. Since then, circuit breaker design has undergone a remarkable development. Nowadays one pole of a circuit breaker is capable of interrupting 63 kA in a 550 kV network with SF6 gas as the arc-quenching medium.

THE NEED FOR TESTING

Almost all people have experienced the effects of protective devices operating properly. When an overload or a short circuit occurs in the home, the usual result is a blown fuse or a tripped circuit breaker. Fortunately few have the misfortune to see the results of a defective device, which may include burned wiring, fires, explosions, and electrical shock.

It is often assumed that the fuses and circuit breakers in the home or industry are infallible, and will operate safely when called upon to do so ten, twenty, or more years after their installation. In the case of fuses, this may be a safe assumption, because a defective fuse usually blows too quickly, causing premature opening of the circuit, and forcing replacement of the faulty component. Circuit breakers, however, are mechanical devices, which are subject to deterioration due to wear, corrosion and environmental contamination, any of which could cause the device to remain closed during a fault condition. At the very least, the specified time delay may have shifted so much that proper protection is no longer afforded to devices on the circuit, or improper coordination causes a main circuit breaker or fuse to open in an inconvenient location.

Eddy current brakes

INTRODUCTION

Many of the ordinary brakes in use nowadays stop the vehicle by means of mechanical blocking. This causes skidding and wear and tear of the vehicle, and if the speed of the vehicle is very high, the brakes cannot provide sufficient braking force, which causes problems. These drawbacks of ordinary brakes can be overcome by a simple and effective braking mechanism, the eddy current brake. It is an abrasion-free method for braking of vehicles, including trains, and it makes use of the opposing tendency of eddy currents.
An eddy current is the swirling current produced in a conductor that is subjected to a change in magnetic field. Because of their tendency to oppose the change that creates them, eddy currents cause energy to be lost. More accurately, eddy currents transform more useful forms of energy, such as kinetic energy, into heat, which is much less useful. In many applications this loss of useful energy is not desirable, but there are some practical applications, one of which is the eddy current brake.

PRINCIPLE OF OPERATIONS

Eddy current brake works according to Faraday's law of electromagnetic induction. According to this law, whenever a conductor cuts magnetic lines of forces, an emf is induced in the conductor, the magnitude of which is proportional to the strength of magnetic field and the speed of the conductor. If the conductor is a disc, there will be circulatory currents i.e. eddy currents in the disc. According to Lenz's law, the direction of the current is in such a way as to oppose the cause, i.e. movement of the disc.
Essentially the eddy current brake consists of two parts, a stationary magnetic field system and a solid rotating part, which includes a metal disc. During braking, the metal disc is exposed to a magnetic field from an electromagnet, generating eddy currents in the disc. The magnetic interaction between the applied field and the eddy currents slows down the rotating disc, and thus the wheels of the vehicle also slow down, since they are directly coupled to the disc of the eddy current brake, producing a smooth stopping motion.
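Because the induced emf, and hence the braking torque, grows with the disc speed, the retardation is strongest at high speed and fades as the vehicle slows. The sketch below (Python) illustrates this with a toy model in which the braking torque is proportional to angular speed; the inertia and braking-coefficient values are assumed purely for illustration.

    # Toy model of eddy-current braking: retarding torque proportional to angular speed.
    # J (inertia) and k (braking coefficient) are assumed illustrative values.
    J = 2.0      # kg*m^2, inertia of the disc plus coupled wheel
    k = 0.5      # N*m*s/rad, grows with field strength and disc conductivity
    dt = 0.01    # s, integration step

    omega = 100.0            # initial angular speed, rad/s
    t = 0.0
    while omega > 1.0:
        omega += -(k / J) * omega * dt     # d(omega)/dt = -(k/J) * omega
        t += dt
    print(f"Speed falls below 1 rad/s after about {t:.1f} s")   # exponential decay, ~18.4 s here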


EDDY CURRENT INDUCED IN A CONDUCTOR

Essentially an eddy current brake consists of two members, a stationary magnetic field system and a solid rotating member, generally of mild steel, which is sometimes referred to as the secondary because the eddy currents are induced in it. The two members are separated by a short air gap, there being no contact between the two for the purpose of torque transmission. Consequently there is no wear as in a friction brake.

The stator consists of the pole core, pole shoe, and field winding. The field winding is wound on the pole core. The pole core and pole shoes are made of cast steel laminations and fixed to the stator frame by means of screws or bolts. Copper or aluminium is used as the winding material; the arrangement is shown in Fig. 1. This system consists of two parts:
1. Stator
2. Rotor

Electric Powerline Networking For A Smart Home

INTRODUCTION

Electric power line networking is a method of home networking which is capable of interconnecting the computers in your home. It uses existing AC wiring and power outlets to transmit data around a home or small office. It is based on the concept of 'no new wires'. It would let you share:
- Web access
- Printers
- PC hard drives
From any plug in your home or office.
Electric power line networking avoids the need to put PCs near a phone outlet. Instead, computers and other devices are connected through electrical outlets; you are networked by just plugging in. Power line networking is based on the HomePlug Powerline Alliance's HomePlug 1.0 standard. With power line networking, a home owner can create an entire home network linking his/her
- Personal computers
- Printers
- Music equipment
- Internet access
Without running any new wires.
Wires and sockets are used simultaneously for electricity and data, without disrupting each other. Thus power line networking provides a cost-effective solution for home networking. With power line networking, you will be able to put your desktop PCs anywhere you like in your home. It will also be easier to buy and network other devices.

FEATURES

A power line network has the following features:

- No extra wiring or wire maintenance is required, since it uses the existing electrical lines themselves for networking.
- It uses standard outlets/plugs, so it is possible to access the network anywhere in the home.
- It uses the power outlets to let users more easily relocate PCs, switches, routers and print servers.
- It is easy to connect to an Ethernet switch or router to access the internet or an existing home network.
- It provides 56-bit DES encryption for high security in data transfer.
- It co-exists with current technology to protect previous investment, so customers will not have to discard their existing network solutions.
- It provides a maximum 14 Mbps bandwidth over standard home wiring for sharing information, multimedia applications and gaming.
- It uses the 4.3 to 20.9 MHz frequency band for low interference from other electrical appliances.

THE TECHNOLOGY

Like home PNA, power-line networking is based on the concept of "no new wires". The convenience is even more obvious in this case because, while not every room has a phone jack, there is almost always an electrical outlet near a computer. In power line networking, you connect your computers to one another through these same outlets. Because it requires no new wiring, and the network adds no cost to your electric bill, power line networking is the cheapest method of connecting computers in different rooms.
There are two competing technologies for power line networking: an older technology called Passport, by a company named Intelogis, and a newer technology called PowerPacket, developed by Intellon. Now let's find out how each of these technologies works.

Electrical Impedance Tomography

INTRODUCTION

To begin with, the word tomography can be explained with reference to 'tomo' and 'graphy': 'tomo' originates from the Greek word 'tomos', which means section or slice, and 'graphy' refers to representation. Hence tomography refers to any method which involves mathematical reconstruction of the internal structural information within an object from a series of projections. The projection here is the visual information probed using an emanation, that is, a physical process such as radiation, wave motion, a static field or an electric current, which is used to study the object from outside.

Medical tomography primarily uses X-ray absorption, magnetic resonance, positron emission, and sound waves (ultrasound) as the emanation. Nonmedical areas of application and research use ultrasound and many different frequencies of the electromagnetic spectrum, such as microwaves and gamma rays, for probing the visual information. Besides photons, tomography is regularly performed using electrons and neutrons. In addition to absorption of the particles or radiation, tomography can be based on the scattering or emission of radiation, or even on electric current.

When electric current is fed consecutively through different available electrode pairs and the corresponding voltage is measured consecutively across all remaining electrode pairs, it is possible to create an image of the impedance of different regions of the volume conductor by using certain reconstruction algorithms. This imaging method is called impedance imaging. Because the image is usually constructed in two dimensions from a slice of the volume conductor, the method is also called impedance tomography or ECCT (electric current computed tomography), or simply electrical impedance tomography (EIT).
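For a concrete picture of such a measurement cycle, the sketch below (Python) enumerates the drive and measurement pairs of the commonly used adjacent-electrode protocol on an assumed 16-electrode ring and counts the voltage readings in one frame; the electrode count and the rule of skipping current-carrying electrodes are conventional choices, and the code is only an illustration.

    # Adjacent-pair drive/measure pattern for EIT on an assumed 16-electrode ring.
    N = 16
    measurements = []
    for d in range(N):                       # drive current through adjacent electrodes (d, d+1)
        drive = (d, (d + 1) % N)
        for m in range(N):                   # measure voltage across every other adjacent pair
            meas = (m, (m + 1) % N)
            if set(drive) & set(meas):       # skip pairs that include a current-carrying electrode
                continue
            measurements.append((drive, meas))

    print(len(measurements))                 # 16 drive pairs x 13 voltage pairs = 208 readings per frame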

Electrical Impedance Tomography (EIT) is an imaging technology that applies time-varying currents to the surface of a body and records the resulting voltages in order to reconstruct and display the electrical conductivity and permittivity in the interior of the body. This technique exploits the electrical properties of tissues such as resistance and capacitance. It aims at exploiting the differences in the passive electrical properties of tissues in order to generate a tomographic image.

Human tissue is not simply conductive. There is evidence that many tissues also demonstrate a capacitive component of current flow, and therefore it is appropriate to speak of the specific admittance (admittivity) or specific impedance (impedivity) of tissue rather than the conductivity; hence, electrical impedance tomography. Thus, EIT is an imaging method which may be used to complement X-ray tomography (computed tomography, CT), ultrasound imaging, positron emission tomography (PET), and others.

Electro Dynamic Tether

INTRODUCTION

Tether is a word that is not heard often. The word meaning of tether is 'a rope or chain used to fasten an animal so that it can graze within a certain limited area'. We can see animals like cows and goats 'tethered' to trees and posts.

In space, tethers have an application similar to their word meaning, but instead of animals there are spacecraft and satellites. If a tether is connected between two spacecraft (one at a lower orbital altitude and the other at a higher orbital altitude), momentum exchange can take place between them; the tether is then called a momentum-exchange space tether. A tether is deployed by pushing one object up or down from the other. The gravitational and centrifugal forces balance each other at the center of mass. What happens then is that the lower satellite, which orbits faster, tows its companion along like an orbital water skier. The outer satellite thereby gains momentum at the expense of the lower one, causing its orbit to expand and that of the lower one to contract. This was the original use of tethers.

But now tethers are being made of electrically conducting materials like aluminium or copper, and they provide additional advantages. Electrodynamic tethers, as they are called, can convert orbital energy into electrical energy, working on the principle of electromagnetic induction. This can be used for power generation. Also, when the conductor moves through a magnetic field, charged particles in it experience an electromagnetic force perpendicular to both the direction of motion and the field. This can be used for orbit raising and lowering and for debris removal. Another application of tethers discussed here is artificial gravity inside spacecraft.
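The size of this effect can be estimated from the motional emf relation V = B L v for a straight conductor moving across a magnetic field. The short calculation below (Python) evaluates it for assumed but typical low-Earth-orbit values; the figures are illustrative and not taken from any particular mission.

    # Motional emf of a straight conducting tether: V = B * L * v, with the field,
    # tether and velocity assumed mutually perpendicular. Illustrative LEO values.
    B = 3.0e-5      # T, approximate geomagnetic field strength in low Earth orbit
    L = 5_000.0     # m, assumed tether length
    v = 7_500.0     # m/s, approximate orbital speed

    emf = B * L * v
    print(f"Induced emf across the tether: about {emf:.0f} V")   # ~1100 V for these values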

NEED AND ORIGIN OF TETHERS

Although space tethers have been studied theoretically since early in the 20th century, it wasn't until 1974 that Giuseppe Colombo came up with the idea of using a long tether to support a satellite from an orbiting platform. But that was a simple momentum-exchange space tether. Now let's see what made scientists think of electrodynamic tethers.

Every spacecraft on every mission has to carry all the energy sources required to get its job done, typically in the form of chemical propellants, photovoltaic arrays or nuclear reactors. The sole alternative, delivery service, can be very expensive. For example, the International Space Station (ISS) will need an estimated 77 metric tons of booster propellant over its anticipated 10-year life span just to keep itself from gradually falling out of orbit. Assuming a minimal price of $7000 a pound (dirt cheap by current standards) to get fuel up to the station's 360 km altitude, that comes to about $1.2 billion simply to maintain the orbital status quo.
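That estimate follows directly from the quoted figures, as the short check below (Python) shows; the kilogram-to-pound conversion factor is the only value added.

    # Rough check of the propellant cost quoted above.
    propellant_kg = 77_000          # 77 metric tons over the 10-year life span
    price_per_lb = 7_000            # USD, quoted minimal launch price per pound
    KG_TO_LB = 2.20462

    cost = propellant_kg * KG_TO_LB * price_per_lb
    print(f"Approximate cost: ${cost / 1e9:.2f} billion")   # ~ $1.2 billion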

So scientists are taking a new look at space tethers, making them electrically conductive. In 1996, NASA launched a shuttle to deploy a satellite on a tether to study the electrodynamic effects of a conducting tether as it passes through the earth's magnetic field. As predicted by the laws of electromagnetism, a current was produced in the tether as it passed through the earth's magnetic field, the tether acting as an electrical generator. This was the origin of electrodynamic tethers.


PHYSICALLY WHAT IS A TETHER

A tether in space is a long, flexible cable connecting two masses. When the cable is electrically conductive, the ensemble becomes an electrodynamic tether.

Electrodynamic tether systems can be employed in three main ways, providing different advantages:

1. Electrodynamic tether systems, in which two masses are separated by a long, flexible, electrically conductive cable, can perform many of the same functions as conventional spacecraft but without the use of chemical or nuclear fuel sources.
2. In low earth orbit (LEO), tether systems could provide electrical power and positioning capability for satellites and manned spacecraft, as well as help rid the region of dangerous debris.
3. On long term missions, such as exploration of Jupiter and its moons, tethers could drastically reduce the amount of fuel needed to maneuver while also providing a dependable source of electricity.