IGCT

Introduction


Thyristor technology is inherently superior to transistor technology for blocking voltages above 2.5 kV, because its plasma distribution, comparable to that of a diode, offers the best trade-off between on-state and blocking voltage. Until the introduction of newer power switches, the only serious contenders for high-power transportation systems and other applications were the GTO (a thyristor), with its cumbersome snubbers, and the IGBT (a transistor), with its inherently high losses. Until now, adding the gate turn-off feature has forced the GTO into a variety of unsatisfactory compromises. The widely used standard GTO drive technology results in inhomogeneous turn-on and turn-off, which calls for costly dv/dt and di/dt snubber circuits combined with bulky gate drive units.

Rooted in the GTO, the Gate-Commutated Thyristor (GCT) is one of the newest power switches. It successfully combines the best of the thyristor and transistor characteristics while fulfilling the additional requirements of manufacturability and high reliability. The GCT is a semiconductor device based on the GTO structure whose cathode emitter can be shut off "instantaneously", converting the device from a low conduction-drop thyristor into a low switching-loss, high-dv/dt bipolar transistor at turn-off.

The IGCT (Integrated GCT) is the combination of the GCT device and a low-inductance gate unit. This technology extends transistor-like switching performance to well above the MW range, with 4.5 kV devices capable of turning off 4 kA and 6 kV devices capable of turning off 3 kA without snubbers. The IGCT represents the optimum combination of low-loss thyristor technology and snubberless gate turn-off for demanding medium- and high-voltage power electronics applications.

[Figure: IGCT turn-off waveforms. The thick line shows the anode voltage and the lighter line the anode current during the turn-off process.]

The GTO and the thyristor are four-layer (npnp) devices. As such, they have only two stable points in their characteristics: 'on' and 'off'. Every state in between is unstable and results in current filamentation. This inherent instability is worsened by processing imperfections, which has led to the widely accepted myth that a GTO cannot be operated without a snubber. Essentially, the GTO has to be reduced to a stable pnp device, i.e. a transistor, for the few critical microseconds during turn-off.

To stop the cathode (n) from taking part in the process, the bias of the cathode n-p junction has to be reversed before voltage starts to build up at the main junction. This calls for commutation of the full load current from the cathode (n) to the gate (p) within one microsecond. Thanks to a new housing design, 4000 A/µs can be achieved with a low-cost 20 V gate unit. Current filamentation is totally suppressed, and the turn-off waveforms and safe operating area become identical to those of a transistor.
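As a rough illustration of what this commutation speed implies, the gate circuit must satisfy V = L·di/dt, so a 20 V gate unit commutating current at 4000 A/µs can tolerate only a few nanohenries of loop inductance. The short Python sketch below works this bound out; the figures are illustrative, taken from the text above, not from a datasheet.

```python
# Rough estimate of the gate-loop inductance allowed for hard gate commutation.
# Illustrative figures from the text: ~20 V gate unit, ~4000 A/us commutation rate.

gate_voltage = 20.0          # V, turn-off gate supply
di_dt = 4000e6               # A/s (4000 A/us)

# V = L * di/dt  =>  L = V / (di/dt)
max_loop_inductance = gate_voltage / di_dt
print(f"Maximum gate-loop inductance: {max_loop_inductance * 1e9:.1f} nH")
# -> about 5 nH, which is why the GCT housing and gate unit must be integrated
```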

IGCT technology brings together the power handling device (GCT) and the device control circuitry (freewheeling diode and gate drive) in an integrated package. By offering four levels of component packaging and integration, it permits simultaneous improvement in four interrelated areas: low switching and conduction losses at medium voltage, simplified circuitry for operating the power semiconductor, reduced power system cost, and enhanced reliability and availability. Also, by providing pre-engineered switch modules, IGCT enables medium-voltage equipment designers to develop their products faster.

Iris Scanning

Introduction


In today's information age it is not difficult to collect data about an individual and use that information to exercise control over him or her. Individuals generally do not want others to have personal information about them unless they decide to reveal it. With the rapid development of technology, it is increasingly difficult to maintain the levels of privacy citizens knew in the past. In this context, data security has become an inevitable feature. Conventional methods of identification based on possession of ID cards or on exclusive knowledge, such as a social security number or a password, are not altogether reliable: ID cards can be lost, forged or misplaced, and passwords can be forgotten.

As a result, an unauthorized user may be able to break into an account with little effort, so denial of access to classified data by unauthorized persons must be ensured. Biometric technology has now become a viable alternative to traditional identification systems because of its tremendous accuracy and speed. A biometric system automatically verifies or recognizes the identity of a living person based on physiological or behavioral characteristics.

Since the persons to be identified must be physically present at the point of identification, biometric techniques give high security for sensitive information stored in mainframes and help avoid fraudulent use of ATMs. This paper explores the concept of iris recognition, which is one of the most popular biometric techniques. The technology finds applications in diverse fields.

Biometrics - Future Of Identity
Biometrics dates back to the ancient Egyptians, who measured people to identify them. Biometric devices have three primary components:
1. An automated mechanism that scans and captures a digital or analog image of a living person's characteristic.
2. Compression, processing, storage and comparison of the image with stored data.
3. Interfaces with application systems.


A biometric system can be divided into two stages: the enrolment module and the identification module. The enrolment module is responsible for training the system to identify a given person. During the enrolment stage, a biometric sensor scans the person's physiognomy to create a digital representation. A feature extractor processes the representation to generate a more compact and expressive representation called a template. For an iris image this includes the various visible characteristics of the iris such as contraction furrows, pits and rings. The template for each user is stored in a biometric system database.

The identification module is responsible for recognizing the person. During the identification stage, the biometric sensor captures the characteristics of the person to be identified and converts them into the same digital format as the template. The resulting template is fed to the feature matcher, which compares it against the stored template to determine whether the two templates match.

The identification can take the form of verification (authenticating a claimed identity) or recognition (determining the identity of a person from a database of known persons). In a verification system, when the captured characteristic and the stored template of the claimed identity match, the system concludes that the claimed identity is correct. In a recognition system, when the captured characteristic matches one of the stored templates, the system identifies the person with the matching template.
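The sketch below illustrates, in simplified Python, how verification and recognition might be carried out once iris templates are available. It assumes templates are fixed-length binary codes compared by fractional Hamming distance, and the 0.32 decision threshold is purely illustrative.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris templates."""
    return np.count_nonzero(code_a != code_b) / code_a.size

def verify(captured: np.ndarray, enrolled: np.ndarray, threshold: float = 0.32) -> bool:
    # Verification: accept the claimed identity if the templates are close enough.
    return hamming_distance(captured, enrolled) <= threshold

def recognize(captured: np.ndarray, database: dict, threshold: float = 0.32):
    # Recognition: return the best-matching enrolled identity, if any.
    best_id, best_dist = None, 1.0
    for user_id, template in database.items():
        d = hamming_distance(captured, template)
        if d < best_dist:
            best_id, best_dist = user_id, d
    return best_id if best_dist <= threshold else None
```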

Loop Magnetic Couplers

Introduction


Couplers, also known as "isolators" because they electrically isolate as well as transmit data, are widely used in industrial and factory networks, instruments, and telecommunications. Everyone knows the problems with optocouplers: they take up a lot of space, they are slow, they age, and their temperature range is quite limited. For years, optical couplers were the only option. Over the years, most of the components used to build instrumentation circuits have become ever smaller. Optocoupler technology, however, hasn't kept up. Existing coupler technologies look like dinosaurs on modern circuit boards.


Magnetic couplers are analogous to optocouplers in a number of ways. Design engineers, especially in instrumentation technology, will welcome a galvanically isolated data coupler with integrated signal conversion in a single IC. This report gives a detailed study of IsoLoop magnetic couplers.

Ground Loops
When equipment using different power supplies is tied together (with a common ground connection), there is a potential for ground-loop currents to exist. This is an induced current in the common ground line resulting from a difference in ground potentials at each piece of equipment.

Normally, all grounds are not at the same potential. Widespread electrical and communications networks often have nodes with different ground domains. The potential difference between these grounds can be AC or DC and can contain various noise components. Grounds connected by cable shielding or a logic-line ground can create a ground loop: unwanted current flow in the cable. Ground-loop currents can degrade data signals, produce excessive EMI, damage components and, if the current is large enough, present a shock hazard.


Galvanic isolation between circuits or nodes in different ground domains eliminates these problems, seamlessly passing signal information while isolating ground potential differences and common-mode transients. Adding isolation components to a circuit or network is considered good design practice and is often mandated by industry standards. Isolation is frequently used in modems, LAN and industrial network interfaces (e.g., network hubs, routers, and switches), telephones, printers, fax machines, and switched-mode power supplies.


Giant Magnetoresistance (GMR)
Large magnetic field dependent changes in resistance are possible in thin film ferromagnet/nonmagnetic metallic multilayers. The phenomenon was first observed in France in 1988, when changes in resistance with magnetic field of up to 70% were seen. Compared to the small percent change in resistance observed in anisotropic magnetoresistance, this phenomenon was truly 'giant' magnetoresistance.


The spin of electrons in a magnet is aligned to produce a magnetic moment. Magnetic layers with opposing spins (magnetic moments) impede the progress of the electrons (higher scattering) through a sandwiched conductive layer. This arrangement causes the conductor to have a higher resistance to current flow.


An external magnetic field can realign all of the layers into a single magnetic moment. When this happens, electron flow is less affected (lower scattering) by the uniform spins of the adjacent ferromagnetic layers, and the conduction layer has a lower resistance to current flow. Note that this phenomenon takes place only when the conduction layer is thin enough (less than 5 nm) for the ferromagnetic layers' electron spins to affect the paths of the electrons in the conductive layer.
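The following Python sketch captures this behaviour with a simple phenomenological model: resistance varies with the angle between the magnetizations of adjacent layers, rising by about 70% (the figure quoted above) when they are anti-aligned. The resistance value and the cosine form are illustrative assumptions, not measured device data.

```python
import numpy as np

def gmr_resistance(theta_rad, r_parallel=100.0, gmr_ratio=0.70):
    """
    Phenomenological GMR model (illustrative values): resistance rises from
    r_parallel (aligned layers, low scattering) to r_parallel * (1 + gmr_ratio)
    (anti-aligned layers, high scattering) as the angle between layer
    magnetizations goes from 0 to pi.
    """
    r_antiparallel = r_parallel * (1.0 + gmr_ratio)
    return r_parallel + (r_antiparallel - r_parallel) * (1 - np.cos(theta_rad)) / 2

print(gmr_resistance(0.0))     # aligned layers:      100 ohms
print(gmr_resistance(np.pi))   # anti-aligned layers: 170 ohms (a 70% change)
```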

LWIP

Introduction


Over the last few years, the interest for connecting computers and computer supported devices to wireless networks has steadily increased. Computers are becoming more and more seamlessly integrated with everyday equipment, and prices are dropping. At the same time wireless networking technologies, such as Bluetooth and IEEE 802.11b WLAN, are emerging. This gives rise to many new fascinating scenarios in areas such as health care, safety and security, transportation, and processing industry. Small devices such as sensors can be connected to an existing network infrastructure such as the global Internet, and monitored from anywhere.

The Internet technology has proven itself flexible enough to incorporate the changing network environments of the past few decades. While originally developed for low speed networks such as the ARPANET, the Internet technology today runs over a large spectrum of link technologies with vastly different characteristics in terms of bandwidth and bit error rate. It is highly advantageous to use the existing Internet technology in the wireless networks of tomorrow since a large amount of applications using the Internet technology have been developed. Also, the large connectivity of the global Internet is a strong incentive.

Since small devices such as sensors are often required to be physically small and inexpensive, an implementation of the Internet protocols will have to deal with having limited computing resources and memory. This report describes the design and implementation of a small TCP/IP stack called lwIP that is small enough to be used in minimal systems.

Overview

As in many other TCP/IP implementations, the layered protocol design has served as a guide for the design of the implementation of lwIP. Each protocol is implemented as its own module, with a few functions acting as entry points into each protocol. Even though the protocols are implemented separately, some layer violations are made, as discussed above, in order to improve performance both in terms of processing speed and memory usage. For example, when verifying the checksum of an incoming TCP segment and when demultiplexing a segment, the source and destination IP addresses of the segment have to be known by the TCP module. Instead of passing these addresses to TCP by means of a function call, the TCP module is aware of the structure of the IP header and can therefore extract this information by itself.
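lwIP itself is written in C, but the idea can be sketched in Python: the TCP checksum is computed over a pseudo-header that contains the source and destination IP addresses, which is exactly why the TCP module needs access to fields of the IP header. The functions below follow the RFC 1071 ones'-complement sum; they are a conceptual sketch, not lwIP code.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF

def tcp_checksum(src_ip: bytes, dst_ip: bytes, tcp_segment: bytes) -> int:
    # The pseudo-header is the reason the TCP code must know the IP addresses:
    # source/destination address, a zero byte, protocol number (6), TCP length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))
    return internet_checksum(pseudo + tcp_segment)
```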

lwIP consists of several modules. Apart from the modules implementing the TCP/IP protocols (IP, ICMP, UDP, and TCP), a number of support modules are implemented.
The support modules consist of:

• The operating system emulation layer (described in Chapter 3)

• The buffer and memory management subsystems (described in Chapter 4)

• Network interface functions (described in Chapter 5)

• Functions for computing the Internet checksum (described in Chapter 6)

• An abstract API (described in Chapter 8)

Image Authentication Techniques

Introduction


This paper explores the various techniques used to authenticate the visual data recorded by automatic video surveillance (VS) systems. Automatic video surveillance systems are used for continuous and effective monitoring and reliable control of remote and dangerous sites. Some practical issues must be taken into account in order to take full advantage of the potential of a VS system. The validity of the visual data acquired, processed and possibly stored by the VS system as proof in front of a court of law is one such issue. But visual data can be modified using sophisticated processing tools without leaving any visible trace of the modification.

Consequently, digital image or video data have no value as legal proof, since doubt would always exist that they had been intentionally tampered with to incriminate or exculpate the defendant. Besides, video data can be created artificially by computerized techniques such as morphing. Therefore the true origin of the data must be indicated if they are to be used as legal proof. By data authentication we mean here a procedure capable of ensuring that data have not been tampered with and of indicating their true origin.

Automatic Visual Surveillance System

An automatic visual surveillance system is a self-monitoring system consisting of a video camera unit, a central unit and transmission networks. A pool of digital cameras is in charge of framing the scene of interest and sending the corresponding video sequences to the central unit. The central unit is in charge of analyzing the sequences and generating an alarm whenever a suspicious situation is detected.

The central unit also transmits the video sequences to an intervention centre such as a security service provider, the police department or a security guard unit. Somewhere in the system the video sequence, or some part of it, may be stored, and when needed the stored sequence can be used as proof in front of a court of law. If the stored digital video sequences are to be legally credible, some means must be envisaged to detect content tampering and to reliably trace back to the data origin.

Authentication Techniques

Authentication techniques are performed on visual data to show that the data are not a forgery; they should not damage the visual quality of the video data. At the same time, these techniques must detect malicious modifications, including removal or insertion of certain frames, changes of individuals' faces, time, background, etc. Only properly authenticated video data have value as legal proof. There are two major techniques for authenticating video data.


They are as follows:


1. Cryptographic Data Authentication

A straightforward way to provide video authentication is through the joint use of asymmetric-key encryption and a digital hash function.

The cameras calculate a digital summary (digest) of the video by means of a hash function. They then encrypt the digest with their private key, obtaining a signed digest which is transmitted to the central unit together with the acquired sequences. This digest is used to prove data integrity or to trace back to the data's origin. The signed digest can only be read using the public key of the camera.
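A minimal sketch of this signed-digest scheme is shown below in Python, assuming the third-party 'cryptography' package and RSA-PSS with SHA-256 as one possible choice of hash and signature algorithm; the actual algorithms used by a given camera vendor may differ.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Key pair generated once and embedded in the camera (illustrative parameters).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def sign_sequence(sequence: bytes) -> bytes:
    """Camera side: hash the video sequence and sign the digest with the private key."""
    return private_key.sign(sequence, PSS, hashes.SHA256())

def verify_sequence(sequence: bytes, signature: bytes) -> bool:
    """Anyone holding the camera's public key can check integrity and origin."""
    try:
        public_key.verify(signature, sequence, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```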

2. Watermarking-based authentication

Watermarking-based data authentication is the modern approach to authenticating visual data by imperceptibly embedding a digital watermark signal in the data.

Digital watermarking is the art and science of embedding copyright information in the original files. The embedded information is called a 'watermark'. Digital watermarks are difficult to remove without noticeably degrading the content and provide a covert means of protection in situations where copyright notices alone fail to provide robustness.
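As a simple illustration of the embedding principle, the Python sketch below hides watermark bits in the least significant bits of a grayscale image (assumed to be an 8-bit NumPy array). Practical watermarking schemes are far more robust and usually keyed, but the embed/extract structure is the same.

```python
import numpy as np

def embed_watermark(image: np.ndarray, watermark_bits: np.ndarray) -> np.ndarray:
    """
    Minimal fragile-watermark sketch: hide one bit per pixel in the least
    significant bit of an 8-bit grayscale image (both arrays assumed uint8).
    """
    flat = image.flatten()
    n = watermark_bits.size
    flat[:n] = (flat[:n] & 0xFE) | watermark_bits   # overwrite the LSBs
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    return image.flatten()[:n_bits] & 1             # read the LSBs back

# Any later tampering with the marked pixels changes the extracted bits,
# which is what flags the frame as modified.
```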

Seasonal Influence on Safety of Substation Grounding

Introduction


With the development of modern power systems towards extra-high voltage, large capacity and long-distance transmission, and with the application of advanced technologies, the demands on the safety, stability and economical operation of the power system have become higher. A good grounding system is the fundamental insurance for safe operation of the power system. A good grounding system should ensure the following:


" To provide safety to personnel during normal and fault conditions by limiting step and touch potential.
" To assure correct operation of electrical devices.
" To prevent damage to electrical apparatus.
" To dissipate lightning strokes.
" To stabilize voltage during transient conditions and therefore to minimize the probability of flashover during the transients


As stated in ANSI/IEEE Standard 80-1986, "IEEE Guide for Safety in AC Substation Grounding," a safe grounding design has two objectives:


" To provide means to carry electric currents into the earth under normal and fault condition without exceeding any operational and equipment limit or adversely affecting continuity of service.
" To assure that a person in the vicinity of grounded facilities is not exposed to the danger of critical electrical shock.


A practical approach to safe grounding considers the interaction of two grounding systems: The intentional ground, consisting of ground electrodes buried at some depth below the earth surface, and the accidental ground, temporarily established by a person exposed to a potential gradient at a grounded facility.


An ideal ground should provide a near-zero resistance to remote earth. In practice, the ground potential rise at the facility site increases proportionally with the fault current; the higher the current, the lower the total system resistance that must be obtained. For most large substations the ground resistance should be less than 1 ohm. For smaller distribution substations the usually acceptable range is 1-5 ohms, depending on local conditions.
When a grounding system is designed, the fundamental method to ensure the safety of human beings and power apparatus is to control the step and touch voltages within their respective safe regions. Step and touch voltage can be defined as follows.

Step Voltage
It is defined as the voltage between the feet of a person standing near an energized object. It is equal to the difference in voltage, given by the voltage distribution curve, between two points at different distances from the electrode.


Touch Voltage
It is defined as the voltage between the energized object and the feet of a person in contact with the object. It is equal to the difference in voltage between the object and a point some distance away from it.

In different seasons, the resistivity of the surface soil layer changes, and this affects the safety of grounding systems. Whether the step and touch voltages move towards the safe region or towards the hazard side is the main question of concern.
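The sketch below illustrates how a seasonal change in surface resistivity shifts the tolerable limits, using the 50 kg tolerable step and touch voltage equations of IEEE Std 80 with the surface derating factor Cs taken as 1 for simplicity; the resistivity values and fault duration are illustrative assumptions.

```python
import math

def tolerable_voltages(rho_s, t_fault=0.5, cs=1.0):
    """
    Tolerable step and touch voltages for a 50 kg person (IEEE Std 80 form):
        E_step  = (1000 + 6.0 * Cs * rho_s) * 0.116 / sqrt(t)
        E_touch = (1000 + 1.5 * Cs * rho_s) * 0.116 / sqrt(t)
    rho_s:   surface-layer resistivity in ohm-m (varies strongly with season)
    t_fault: fault duration in seconds
    cs:      surface derating factor (taken as 1.0 here)
    """
    k = 0.116 / math.sqrt(t_fault)
    e_step = (1000 + 6.0 * cs * rho_s) * k
    e_touch = (1000 + 1.5 * cs * rho_s) * k
    return e_step, e_touch

# Illustrative seasonal comparison: wet surface layer vs. dry surface layer.
for season, rho in [("wet season", 100), ("dry season", 1000)]:
    e_step, e_touch = tolerable_voltages(rho)
    print(f"{season}: tolerable step {e_step:.0f} V, touch {e_touch:.0f} V")
```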

Wavelet Transforms

Introduction


Wavelet transforms have been one of the important signal-processing developments of the last decade, especially for applications such as time-frequency analysis, data compression, segmentation and vision. During the past decade, several efficient implementations of wavelet transforms have been derived. The theory of wavelets has roots in quantum mechanics and the theory of functions, though a unifying framework is a recent occurrence. Wavelet analysis is performed using a prototype function called a wavelet.

Wavelets are functions defined over a finite interval and having an average value of zero. The basic idea of the wavelet transform is to represent any arbitrary function f(t) as a superposition of a set of such wavelets or basis functions. These basis functions, or baby wavelets, are obtained from a single prototype wavelet called the mother wavelet, by dilations or contractions (scaling) and translations (shifts). Efficient implementations of the wavelet transform have been derived based on the fast Fourier transform and short-length 'fast-running FIR algorithms' in order to reduce the computational complexity per computed coefficient.
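In the usual notation, with a the dilation (scale) parameter and b the translation parameter, the baby wavelets derived from the mother wavelet ψ(t) and the resulting continuous wavelet transform of a signal f(t) can be written as:

\[
\psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t-b}{a}\right),
\qquad
W_f(a,b) = \int_{-\infty}^{\infty} f(t)\,\psi_{a,b}^{*}(t)\,dt
\]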

First of all, why do we need a transform, or what is a transform anyway?

Mathematical transformations are applied to signals to obtain further information that is not readily available in the raw signal. In the following, a time-domain signal is regarded as the raw signal, and a signal that has been transformed by any of the available transformations as a processed signal.

There are a number of transformations that can be applied, such as the Hilbert transform, the short-time Fourier transform, the Wigner transform and the Radon transform, among which the Fourier transform is probably the most popular. These constitute only a small portion of the huge list of transforms available at engineers' and mathematicians' disposal. Each transformation technique has its own area of application, with advantages and disadvantages.

Importance Of The Frequency Information

Often, information that cannot be readily seen in the time domain can be seen in the frequency domain. Most signals in practice are time-domain signals in their raw format; that is, whatever the signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude.

When we plot time-domain signals, we obtain a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most signal-processing applications. In many cases, the most distinguishing information is hidden in the frequency content of the signal. The frequency spectrum of a signal is basically the frequency (spectral) components of that signal; it shows what frequencies exist in the signal.
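A short Python example makes the point: the two sinusoidal components below are hard to tell apart in a time-amplitude plot, but they appear as clear peaks in the magnitude spectrum computed with the FFT (the frequencies and amplitudes chosen are arbitrary).

```python
import numpy as np

# Illustrative signal containing 10 Hz and 50 Hz components.
fs = 1000                                   # sampling frequency, Hz
t = np.arange(0, 1, 1 / fs)                 # one second of samples
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

spectrum = np.abs(np.fft.rfft(signal))      # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The two dominant peaks sit at 10 Hz and 50 Hz, information that is not
# obvious from the time-amplitude plot of the same signal.
print(freqs[np.argsort(spectrum)[-2:]])
```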

Cyberterrorism

Definition

Cyberterrorism is a new terrorist tactic that makes use of information systems or digital technology, especially the Internet, as either an instrument or a target. As the Internet becomes more and more a way of life for us, it is becoming easier for its users to become targets of cyberterrorists. The number of areas in which cyberterrorists could strike is frightening, to say the least.

The difference between the conventional approaches to terrorism and the new methods is primarily that it is possible to affect a large multitude of people with minimal resources on the terrorist's side and with no danger to the terrorist at all. We also glimpse into the reasons that caused terrorists to look towards the Web, and why the Internet is such an attractive alternative to them.

The growth of Information Technology has led to the development of this dangerous web of terror, for cyberterrorists could wreak maximum havoc within a small time span. Various situations that can be viewed as acts of cyberterrorism have also been covered. Banks are the most likely places to receive threats, but it cannot be said that any establishment is beyond attack. Tips by which we can protect ourselves from cyberterrorism have also been covered which can reduce problems created by the cyberterrorist.


We, as the information technology people of tomorrow, need to study and understand the weaknesses of existing systems and figure out ways of ensuring the world's safety from cyberterrorists. A number of the issues here are ethical, in the sense that computing technology is now available to the whole world, but if this gift is used wrongly, the consequences could be disastrous. It is important that we understand and mitigate cyberterrorism for the benefit of society and try to curtail its growth, so that we can heal the present and live the future…

IPv6 - The Next Generation Protocol

Definition

The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village utopia' a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet did not foresee the tremendous growth rate of the network being witnessed today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress.

It cannot adequately support many of the services being envisaged, such as real-time video conferencing, interconnection of gigabit networks with lower-bandwidth networks, high-security applications such as electronic commerce, and interactive virtual-reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of about four billion systems, which is a small number compared to the projected number of systems on the Internet in the twenty-first century.

Each machine on the net is given a 32-bit address. With 32 bits, a maximum of about four billion addresses is possible. Though this is a large number, the Internet will soon have TV sets and even pizza machines connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of the refinements several other features were also added to make it suitable for the next-generation Internet.
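The arithmetic behind these figures is simple, as the small Python sketch below shows: 32 bits give roughly 4.3 billion addresses, while the 128-bit addresses discussed next give about 3.4 × 10^38.

```python
# Address-space arithmetic behind the paragraph above.
ipv4_addresses = 2 ** 32       # 4,294,967,296 (about four billion)
ipv6_addresses = 2 ** 128      # roughly 3.4 x 10**38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")
```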

This version was initially named IPng (IP next generation) and is now officially known as IPv6. IPv6 supports 128-bit addresses; the source address and the destination address are each 128 bits long. IPv5, a minor variation of IPv4, is presently running on some routers. Presently, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task, and the transition is likely to take a very long time.

However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.

Driving Optical Network Evolution

Definition

Over the years, advancements in technology have improved transmission limitations, the number of wavelengths we can send down a piece of fiber, performance, amplification techniques, and the protection and redundancy of the network. When people have described and spoken at length about optical networks, they have typically limited the discussion of optical network technology to providing physical-layer connectivity.

When actual network services are discussed, optical transport is augmented by the addition of several protocol layers, each with its own set of unique requirements, to make up a service-enabling network. Until recently, transport was provided by specific companies that concentrated on the core of the network and provided only point-to-point transport services.

A strong shift in revenue opportunities from a service provider and vendor perspective, changing traffic patterns from the enterprise customer, and the capability to drive optical fiber into metropolitan (metro) areas have opened up the next emerging frontier of networking. Providers are now considering the lucrative opportunities emerging in the metro space. Whereas traditional or incumbent vendors have been installing optical equipment in this space for some time, little attention has been paid to the opportunity made available by new technology advancements and the economic implications these technologies will have.

Specifically, the new technologies in the metro space provide better and more profitable economics, greater scale, and new services and business models. The current metro infrastructure comprises this incumbent equipment, which emphasizes voice traffic, is limited in scalability, and was not designed to take advantage of new technologies, topologies, and changing traffic conditions.

Next-generation equipment such as next-generation Synchronous Optical Network (SONET), metro core dense wavelength division multiplexing (DWDM), metro-edge DWDM, and advancements in the optical core have addressed these limitations: it is scalable and data-optimized, it includes integrated DWDM functionality and new amplification techniques, and it brings improvements in the operational and provisioning cycles. This tutorial provides technical information that can help engineers address the numerous Cisco innovations and technologies in Cisco Complete Optical Multiservice Edge and Transport (Cisco COMET). These can be broken down into five key areas: photonics, protection, protocols, packets, and provisioning.

Radio Network Controller

Definition

A Radio Network Controller (RNC) provides the interface between the wireless devices communicating through Node B transceivers and the network edge. This includes controlling and managing the radio transceivers in the Node B equipment, as well as management tasks like soft handoff.

The RNC performs tasks in a 3G wireless network analogous to those of the Base Station Controller (BSC) in a 2G or 2.5G network. It interfaces with Serving GPRS Support Nodes (SGSNs) and Gateway GPRS Support Nodes (GGSNs) to mediate with the network service providers.

A radio network controller manages hundreds of Node B transceiver stations while switching and provisioning services off the Mobile Switching Center and 3G data network interfaces. The connection from the RNC to a Node B is called the User Plane Interface Layer and it uses T1/E1 transport to the RNC.

Due to the large number of Node B transceivers, a T1/E1 aggregator is used to deliver the Node B data over channelized OC-3 optical transport to the RNC. The OC-3 pipe can be a direct connection to the RNC or through traditional SONET/SDH transmission networks.

A typical Radio Network Controller may be built on a PICMG or Advanced TCA chassis. It contains several different kinds of cards specialized for performing the functions and interacting with the various interfaces of the RNC.

Wireless Networked Digital Devices

Definition

The proliferation of mobile computing devices, including laptops, personal digital assistants (PDAs), and wearable computers, has created a demand for wireless personal area networks (PANs). PANs allow proximal devices to share information and resources. The mobile nature of these devices places unique requirements on PANs, such as low power consumption, frequent make-and-break connections, resource discovery and utilization, and international regulations.

This paper examines wireless technologies appropriate for PANs and reviews promising research in resource discovery and service utilization. We recognize the need for PDAs to be as manageable as mobile phones, as well as the restrictive screen and input area of mobile phones; hence the need for a new breed of computing devices to fit the bill for a PAN. These devices become especially relevant for mobile users such as surgeons and jet-plane mechanics, who need both hands free and thus would need "wearable" computers.

This paper first examines the technology used for wireless communication. Putting a radio in a digital device provides physical connectivity; however, to make the device useful in a larger context a networking infrastructure is required. The infrastructure allows devices to share data, applications, and resources such as printers, mass storage, and computation power. Defining a radio standard is a tractable problem, as demonstrated by the solutions presented in this paper. Designing a network infrastructure is much more complex.

The second half of the paper describes several research projects that try to address components of the networking infrastructure. Finally, there are the questions that go beyond the scope of this paper, yet will have the greatest effect on the direction, capabilities, and future of this paradigm. Will these networking strategies be incompatible, like the various cellular phone systems in the United States, or will there be a standard upon which manufacturers and developers agree, like the GSM (Global System for Mobile Communications) cellular phones in Europe?

Communication demands compatibility, which is challenging in a heterogeneous marketplace. Yet by establishing and implementing compatible systems, manufacturers can offer more powerful and useful devices to their customers. Since these are, after all, digital devices living in a programmed digital world, compatibility and interoperation are possible.

Technologies explored:
1. Electric field - uses the human body as a current conduit.
2. Magnetic field - uses base-station technology for picocells of space.
3. Infrared - basic issues include opaque-body obstruction.
4. Wireless radio frequency - the best technology option, but it has to deal with the finite resource of the electromagnetic spectrum.

It must also meet international standards through a compatible protocol:
a. UHF radio.
b. Super-regenerative receiver.
c. SAW/ASH receiver.