Electrical & Electronics Engineering Seminar Topics





---Electrical Engineering---

Aspects of electrical engineering are found in almost every consumer device or appliance. For example, such consumer technologies as cell phones, personal digital assistants, MP3 and DVD players, personal computers, and high-definition TV each involve several areas of specialization within electrical engineering.
Other technologies, such as the internet, wireless network connections, medical instrumentation, manufacturing, and power distribution -- technologies most people take for granted -- also involve some area of electrical engineering. In fact, it is difficult to find an industry where electrical engineers do not play a role.

Because electrical engineering has so many facets, there are many areas of specialization within the discipline. Our department has a high degree of expertise in electromagnetics and wave propagation, optoelectronics, digital signal processing and communications, power electronics, nanostructures and devices, controls, and computer engineering. Students may pursue specialization in these areas through elective courses.

Adaptive optics

INTRODUCTION

Adaptive optics is a new technology now being used in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of stars seen through ground-based telescopes. Without this system, the images obtained through telescopes on earth appear blurred, because the turbulent mixing of air at different temperatures causes the speed and direction of starlight to vary as it continually passes through the atmosphere.
Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in a telescope to counteract, in real time, the atmospheric distortion.

The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant type planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as the gravitational effects on their parent stars, and none has actually been detected directly.


WHAT IS ADAPTIVE OPTICS?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.

Under ideal circumstances, the resolution of an optical system is limited by the diffraction of light waves. This so-called "diffraction limit" is generally described by the following angle (in radians), calculated from the light's wavelength and the optical system's pupil diameter:

θ = 1.22 λ / D

where θ is in radians, λ is the wavelength of the light and D is the pupil diameter. Thus, the fully dilated human eye should be able to separate objects as close as 0.3 arcmin in visible light, and the 10 m Keck Telescope should be able to resolve objects as close as 0.013 arcsec.
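
To make those numbers concrete, here is a minimal Python sketch of the diffraction-limit formula; the 550 nm wavelength and 8 mm pupil diameter are illustrative assumptions, not figures from the text:

```python
import math

def diffraction_limit_rad(wavelength_m, aperture_m):
    """Rayleigh criterion: smallest resolvable angle in radians."""
    return 1.22 * wavelength_m / aperture_m

RAD_TO_ARCSEC = 180 / math.pi * 3600  # radians -> arcseconds

# Illustrative values: 550 nm visible light, ~8 mm dilated pupil, 10 m Keck mirror
eye = diffraction_limit_rad(550e-9, 8e-3) * RAD_TO_ARCSEC / 60   # arcminutes
keck = diffraction_limit_rad(550e-9, 10.0) * RAD_TO_ARCSEC       # arcseconds

print(f"Eye:  ~{eye:.2f} arcmin")   # ~0.29 arcmin, i.e. about 0.3 arcmin
print(f"Keck: ~{keck:.3f} arcsec")  # ~0.014 arcsec, close to the 0.013 quoted above
```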


In practice, these limits are never achieved. Due to imperfections in the cornea and lens of the eye, the practical limit to resolution is only about 1 arcmin. To turn the problem around, scientists wishing to study the retina of the eye can only see details about 5 microns in size. In astronomy, the turbulent atmosphere blurs images to a size of 0.5 to 1 arcsec even at the best sites.

Adaptive optics provides a means of compensating for these effects, leading to appreciably sharper images, sometimes approaching the theoretical diffraction limit. With sharper images comes an additional gain in contrast - for astronomy, where light levels are often very low, this means fainter objects can be detected and studied.

Circuit Breaker Maintenance by Mobile Agent Software Technology

INTRODUCTION

Circuit breakers are crucial components for power system operations. They play an important role in switching for routine network operation and in the protection of other devices in power systems. To ensure circuit breakers are in healthy condition, periodic inspection and preventive maintenance are typically performed. The maintenance schedules and routines usually follow the recommendations of circuit breaker vendors, although the recommended schedules may be conservative.

New maintenance techniques and methodologies are emerging, while circuit breakers keep improving in their designs and functions. As an example, some new circuit breakers have embedded monitoring instruments available to measure the coil current profiles and the operation timing. The recorded information can be used to monitor the condition of breakers during each operation. In this case, it may be more appropriate to replace time-directed maintenance with condition-directed maintenance practice. When applied properly, both the size of the maintenance crew and the maintenance cost may be reduced greatly with this approach. Since the number of circuit breakers in a power system is usually very large, a small maintenance cost saving per circuit breaker can accumulate into a considerable benefit for the whole system. A more systematic solution is Reliability Centered Maintenance (RCM), which can be used to select the most appropriate maintenance strategy.

During the maintenance or repair work, the maintenance crew will need to access information distributed across the utility and stored using different data formats. By equipping the crew with new information access methods to replace the old paper-based information exchange and logging method, the efficiency may be improved since less time will be spent on preparation, reporting and logging. An information access method that is capable of handling heterogeneous information sources will be helpful to achieve the above goal. Also, the new information access method should be secure and able to work on unreliable public networks.

The mobile agent software provides a flexible framework for mobile agent applications. An agent application program can travel through the internet/intranet to the computers where the mobile agent server or transporter is running. The mobile agent software also supports Distributed Events, Agent Collaboration and Service Bridge. Compared with client-server systems, an agent can process the data locally and thus reduce the network traffic. Besides, the Java platform encapsulates the network layer from the agent, which makes the programming easier. The mobile agent software may fit very well in the circuit breaker maintenance scenario. In this paper, we consider how mobile agent software might be applied in circuit breaker maintenance and monitoring from the viewpoint of the maintenance crew.


CIRCUIT BREAKER MAINTENANCE TASKS

The maintenance of circuit breakers deserves special consideration because of their importance for routine switching and for protection of other equipment. Electric transmission system breakups and equipment destruction can occur if a circuit breaker fails to operate because of a lack of preventive maintenance. The need for maintenance of circuit breakers is often not obvious, as circuit breakers may remain idle, either open or closed, for long periods of time. Breakers that remain idle for six months or more should be made to open and close several times in succession to verify proper operation and remove any accumulation of dust or foreign material on moving parts and contacts.

A circuit breaker mainly consists of the interrupter assembly (contacts, arc interrupters and arc chutes), operating mechanism, operating rod, control panel, sealing system, and breaking medium (SF6, oil, vacuum or air). To ensure the performance of a circuit breaker, all the components should be kept in good condition; therefore time-directed preventive maintenance has been widely adopted. The preventive maintenance tasks include periodic inspection, testing, replacement of worn or defective components, and lubrication of the mechanical parts. The maintenance intervals are usually determined from experience or by following the recommended schedules provided by the vendor or by standards.

The maintenance practices can be divided into three categories: corrective maintenance, preventive maintenance, and predictive maintenance.

Digital Testing of High Voltage Circuit Breaker

INTRODUCTION

With the advancement of power systems, the lines and other equipment operate at very high voltages and carry large currents. High-voltage circuit breakers play an important role in transmission and distribution systems. A circuit breaker can make or break a circuit, either manually or automatically, under all conditions, viz. no-load, full-load and short-circuit conditions. The American National Standards Institute (ANSI) defines a circuit breaker as: "A mechanical switching device capable of making, carrying and breaking currents under normal circuit conditions and also making, carrying for a specified time, and breaking currents under specified abnormal circuit conditions such as those of short circuit". A circuit breaker is usually intended to operate infrequently, although some types are suitable for frequent operation.

ESSENTIAL QUALITIES OF HV CIRCUIT BREAKER

High-voltage circuit breakers play an important role in transmission and distribution systems. They must clear faults and isolate faulted sections rapidly and reliably. In short, they must possess the following qualities.

" In closed position they are good conductors.
" In open position they are excellent insulators.
" They can close a shorted circuit quickly and safely without unacceptable contact erosion.
" They can interrupt a rated short-circuit current or lower current quickly without generating an abnormal voltage.

The only physical mechanism that can change in a short period of time from a conducting to insulating state at a certain voltage is the arc.

HISTORY

The first circuit breaker was developed by J.N. Kelman in 1901. It was the predecessor of the oil circuit breaker and was capable of interrupting a short circuit current of 200 to 300 amperes in a 40 kV system. The circuit breaker was made up of two wooden barrels containing a mixture of oil and water in which the contacts were immersed. Since then circuit breaker design has undergone remarkable development. Nowadays one pole of a circuit breaker is capable of interrupting 63 kA in a 550 kV network with SF6 gas as the arc quenching medium.

THE NEED FOR TESTING

Almost all people have experienced the effects of protective devices operating properly. When an overload or a short circuit occurs in the home, the usual result is a blown fuse or a tripped circuit breaker. Fortunately, few have the misfortune to see the results of a defective device, which may include burned wiring, fires, explosions, and electrical shock.

It is often assumed that the fuses and circuit breakers in the home or industry are infallible, and will operate safely when called upon to do so ten, twenty, or more years after their installation. In the case of fuses, this may be a safe assumption, because a defective fuse usually blows too quickly, causing premature opening of the circuit, and forcing replacement of the faulty component. Circuit breakers, however, are mechanical devices, which are subject to deterioration due to wear, corrosion and environmental contamination, any of which could cause the device to remain closed during a fault condition. At the very least, the specified time delay may have shifted so much that proper protection is no longer afforded to devices on the circuit, or improper coordination causes a main circuit breaker or fuse to open in an inconvenient location.

Eddy current brakes

INTRODUCTION

Many of the ordinary brakes in use today stop the vehicle by means of mechanical blocking. This causes skidding and wear and tear of the vehicle. And if the speed of the vehicle is very high, the brake cannot provide a sufficiently high braking force, which causes problems. These drawbacks of ordinary brakes can be overcome by a simple and effective braking mechanism, the eddy current brake. It is an abrasion-free method for braking vehicles, including trains. It makes use of the opposing tendency of eddy currents.
An eddy current is the swirling current produced in a conductor that is subjected to a change in magnetic field. Because of the tendency of eddy currents to oppose that change, eddy currents cause energy to be lost. More accurately, eddy currents transform more useful forms of energy, such as kinetic energy, into heat, which is much less useful. In many applications this loss of useful energy is not particularly desirable, but there are some practical applications. One such application is the eddy current brake.

PRINCIPLE OF OPERATIONS

The eddy current brake works according to Faraday's law of electromagnetic induction. According to this law, whenever a conductor cuts magnetic lines of force, an emf is induced in the conductor, the magnitude of which is proportional to the strength of the magnetic field and the speed of the conductor. If the conductor is a disc, there will be circulatory currents, i.e. eddy currents, in the disc. According to Lenz's law, the direction of the current is such as to oppose the cause, i.e. the movement of the disc.
Essentially the eddy current brake consists of two parts, a stationary magnetic field system and a solid rotating part, which includes a metal disc. During braking, the metal disc is exposed to a magnetic field from an electromagnet, generating eddy currents in the disc. The magnetic interaction between the applied field and the eddy currents slows down the rotating disc. Thus the wheels of the vehicle also slow down, since the wheels are directly coupled to the disc of the eddy current brake, producing smooth stopping motion.
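
For a rough feel of the braking effect, the standard order-of-magnitude estimate for eddy-current dissipation, P ≈ σ B² v² × (active volume), can be evaluated directly; every number in the sketch below is an illustrative assumption, not a value from the text:

```python
# Toy estimate of eddy-current braking power, P ~ sigma * B^2 * v^2 * V_active.
# All values are illustrative assumptions; the formula ignores demagnetizing
# effects, so it is an upper-bound, order-of-magnitude figure only.
sigma = 1.0e7        # conductivity of mild steel, S/m (approximate)
B = 0.5              # flux density under the pole face, tesla
r = 0.15             # radius at which the pole acts on the disc, m
omega = 100.0        # disc angular speed, rad/s
v = omega * r        # speed of the disc material under the pole, m/s
pole_area = 0.01     # pole face area, m^2
thickness = 0.01     # disc thickness, m
active_volume = pole_area * thickness

power = sigma * B**2 * v**2 * active_volume   # dissipated power, W
torque = power / omega                        # equivalent braking torque, N*m
print(f"Dissipated power ~ {power/1e3:.1f} kW, braking torque ~ {torque:.0f} N*m")
```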


EDDY CURRENT INDUCED IN A CONDUCTOR

Essentially an eddy current brake consists of two members, a stationary magnetic field system and a solid rotary member, generally of mild steel, which is sometimes referred to as the secondary because the eddy currents are induced in it. The two members are separated by a short air gap, there being no contact between the two for the purpose of torque transmission. Consequently there is no wear as in a friction brake.

The stator consists of a pole core, pole shoe, and field winding. The field winding is wound on the pole core. The pole core and pole shoes are made of cast steel laminations and fixed to the stator frame by means of screws or bolts. Copper or aluminium is used as the winding material; the arrangement is shown in Fig. 1. This system consists of two parts.
1. Stator
2. Rotor

Electric Powerline Networking For A Smart Home

INTRODUCTION

Electric power line networking is a method of home networking which is capable of interconnecting the computers in your home. It uses existing AC wiring and power outlets to transmit data around a home or small office. It is based on the concept of 'no new wires'. It would let you share:
" Web access
" Printers
" PC hard drives
From any plug in your home or office.
Electric power line networking avoids the need to put PCs near a phone outlet. Instead, computers and other devices are connected through electrical outlets. You are networked just by plugging in. Power line networking is based on the HomePlug Powerline Alliance's HomePlug 1.0 standard. With power line networking, a home owner can create an entire home network linking his/her
" Personal computers
" Printers
" Music equipments
" Internet access
Without running any new wires.
Wires and sockets are used simultaneously for electricity and data, without disrupting each other. Thus power line networking provides a cost effective solution for home networking. With power line networking, you will be able to put your desktop PCs anywhere you like in your home. It will also be easier to buy and network other devices.

FEATURES

A power line network has the following features:

" No extra wiring and extra wire maintenance is required since it uses exciting electrical line it self for net working.
" It uses standard outlet/plug. So it is possible to access the network everywhere at home.
" It uses power outlet to let the users more easily relocate the PCs, switches, routers and print servers.
" It is easy to connect with Ethernet switch or router to access internet or exciting home network.
" It provides a 56-bit DES encryption for high security in data transfer.
" It co-exists with current technology to protect previous investment. So customers will not have to discard their exciting network solutions.
" It provides maximum 14Mbps bandwidth over standard home for sharing information, multimedia application and gaming.
" Frequency band 4.3 to 20.9 MHz for low interference from other electrical appliances.

THE TECHNOLOGY

Like home PNA, power-line networking is based on the concept of "no new wires". The convenience is even more obvious in this case because, while not every room has a phone jack, there is almost always an electrical outlet near a computer. In power line networking, you connect your computers to one another through the same outlets. Because it requires no new wiring, and the network adds no cost to your electric bill, power line networking is the cheapest method of connecting computers in different rooms.
There are two competing technologies for power line networking. The older technology, called Passport, was developed by a company named Intelogis, and a newer technology, called PowerPacket, was developed by Intellon. Now let's find out how each of these technologies works.

Electrical Impedance Tomography

INTRODUCTION

To begin with, the word tomography can be explained with reference to 'tomo' and 'graphy': 'tomo' originates from the Greek word 'tomos', which means section or slice, and 'graphy' refers to representation. Hence tomography refers to any method which involves mathematical reconstruction of the internal structural information within an object from a series of projections. The projection here is the visual information probed using an emanation, i.e. a physical process such as radiation, wave motion, a static field or an electric current, which is used to study an object from outside.

Medical tomography primarily uses X-ray absorption, magnetic resonance, positron emission, and sound waves (ultrasound) as the emanation. Nonmedical areas of application and research use ultrasound and many different frequencies of the electromagnetic spectrum, such as microwaves and gamma rays, for probing the visual information. Besides photons, tomography is regularly performed using electrons and neutrons. In addition to absorption of the particles or radiation, tomography can be based on the scattering or emission of radiation, or even on electric current.

When electric current is fed consecutively through the different available electrode pairs and the corresponding voltage is measured consecutively across all remaining electrode pairs, it is possible to create an image of the impedance of different regions of the volume conductor by using certain reconstruction algorithms. This imaging method is called impedance imaging. Because the image is usually constructed in two dimensions from a slice of the volume conductor, the method is also called impedance tomography or ECCT (electric current computed tomography), or simply electrical impedance tomography (EIT).
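
As a concrete illustration of that drive-and-measure cycle, the sketch below enumerates the commonly used adjacent (neighbouring) pattern for a hypothetical 16-electrode ring; the electrode count and the skipping rule are assumptions for illustration, not a description of any particular EIT instrument:

```python
def adjacent_pattern(n_electrodes=16):
    """Enumerate adjacent-pair current injections and the voltage pairs
    measured for each, skipping pairs that touch a driving electrode."""
    measurements = []
    for d in range(n_electrodes):
        drive = (d, (d + 1) % n_electrodes)          # current in/out electrodes
        for m in range(n_electrodes):
            meas = (m, (m + 1) % n_electrodes)       # voltage-sensing pair
            if set(meas) & set(drive):               # skip pairs on the drive electrodes
                continue
            measurements.append((drive, meas))
    return measurements

data = adjacent_pattern(16)
print(len(data))   # 16 injections x 13 usable voltage pairs = 208 raw measurements
```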

Electrical Impedance Tomography (EIT) is an imaging technology that applies time-varying currents to the surface of a body and records the resulting voltages in order to reconstruct and display the electrical conductivity and permittivity in the interior of the body. This technique exploits the electrical properties of tissues such as resistance and capacitance. It aims at exploiting the differences in the passive electrical properties of tissues in order to generate a tomographic image.

Human tissue is not simply conductive. There is evidence that many tissues also demonstrate a capacitive component of current flow, and therefore it is appropriate to speak of the specific admittance (admittivity) or specific impedance (impedivity) of tissue rather than the conductivity; hence, electrical impedance tomography. Thus, EIT is an imaging method which may be used to complement X-ray tomography (computed tomography, CT), ultrasound imaging, positron emission tomography (PET), and others.

Electro Dynamic Tether

INTRODUCTION

Tether is a word, which is not heard often. The word meaning of tether is 'a rope or chain to fasten an animal so that it can graze within a certain limited area'. We can see animals like cows and goats 'tethered' to trees and posts.

In space also, tethers have an application similar to their word meaning. But instead of animals, there are spacecraft and satellites in space. If a tether is connected between two spacecraft (one at a lower orbital altitude and the other at a higher orbital altitude), momentum exchange can take place between them. The tether is then called a momentum exchange space tether. A tether is deployed by pushing one object up or down from the other. The gravitational and centrifugal forces balance each other at the center of mass. What happens then is that the lower satellite, which orbits faster, tows its companion along like an orbital water skier. The outer satellite thereby gains momentum at the expense of the lower one, causing its orbit to expand and that of the lower one to contract. This was the original use of tethers.

But now tethers are being made of electrically conducting materials like aluminium or copper, and they provide additional advantages. Electrodynamic tethers, as they are called, can convert orbital energy into electrical energy. They work on the principle of electromagnetic induction, which can be used for power generation. Also, when the conductor moves through a magnetic field, charged particles in it experience an electromagnetic force perpendicular to both the direction of motion and the field. This can be used for orbit raising and lowering and for debris removal. Another application of tethers discussed here is artificial gravity inside spacecraft.
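
The induced (motional) EMF that underlies both power generation and thrust follows from V ≈ v·B·L for a conductor of length L moving at speed v across a field B; the low-Earth-orbit values below are order-of-magnitude assumptions, not figures from the text:

```python
# Motional EMF along an electrodynamic tether: V ~ v * B * L
# (illustrative low-Earth-orbit values, assumed for this sketch)
v = 7.7e3      # orbital speed, m/s
B = 3.0e-5     # geomagnetic flux density at LEO altitude, tesla (order of magnitude)
L = 5.0e3      # tether length, m

emf = v * B * L
print(f"Induced EMF ~ {emf:.0f} V")   # roughly a kilovolt for a 5 km tether
```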

NEED AND ORIGIN OF TETHERS

Space tethers have been studied theoretically since early in the 20th century, but it wasn't until 1974 that Giuseppe Colombo came up with the idea of using a long tether to support a satellite from an orbiting platform. That, however, was a simple momentum exchange space tether. Now let's see what made scientists think of electrodynamic tethers.

Every spacecraft on every mission has to carry all the energy sources required to get its job done, typically in the form of chemical propellants, photovoltaic arrays or nuclear reactors. The sole alternative - delivery service - can be very expensive. For example, the International Space Station (ISS) will need an estimated 77 metric tons of booster propellant over its anticipated 10-year life span just to keep itself from gradually falling out of orbit. Assuming a minimal price of $7000 a pound (dirt cheap by current standards) to get fuel up to the station's 360 km altitude, that comes to about $1.2 billion simply to maintain the orbital status quo.
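
The $1.2 billion figure is simply the quoted launch price applied to the quoted propellant mass, as this quick check shows:

```python
# Re-deriving the ~$1.2 billion figure from the text's own numbers
propellant_kg = 77_000            # 77 metric tons of booster propellant
price_per_lb = 7_000              # assumed minimal launch price, $/lb
kg_to_lb = 2.20462

cost = propellant_kg * kg_to_lb * price_per_lb
print(f"~${cost/1e9:.2f} billion")   # ~$1.19 billion, i.e. about $1.2 billion
```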

So scientists are taking a new look at space tethers, making them electrically conductive. In 1996, NASA launched a shuttle to deploy a satellite on a tether to study the electrodynamic effects of a conducting tether as it passes through the earth's magnetic field. As predicted by the laws of electromagnetism, a current was produced in the tether as it passed through the earth's magnetic field, the tether acting as an electrical generator. This was the origin of electrodynamic tethers.


PHYSICALLY WHAT IS A TETHER

A tether in space is a long, flexible cable connecting two masses. When the cable is electrically conductive, the ensemble becomes an electrodynamic tether.

There are three main types of electrodynamic tether systems, providing different advantages:

1. Electrodynamic tether systems - in which two masses are separated by a long, flexible, electrically conductive cable - can perform many of the same functions as conventional spacecraft but without the use of chemical or nuclear fuel sources.
2. In low earth orbit (LEO), tether systems could provide electrical power and positioning capability for satellites and manned spacecraft, as well as help rid the region of dangerous debris.
3. On long-term missions, such as exploration of Jupiter and its moons, tethers could drastically reduce the amount of fuel needed to maneuver while also providing a dependable source of electricity.

Flexible Ship Electric Power System Design

INTRODUCTION

The first electrical power system was installed on the USS Trenton in 1883 (Ykema 1988). The system consisted of a single dynamo supplying current to 247 lamps at a voltage of 10 volts d.c. Until the 1914 to 1917 period, the early electrical power systems were principally d.c. with the loads consisting mainly of motors and lighting. It was during World War I that 230 volt, 60 hertz power systems were seriously introduced into naval vessels. Since World War II the ship's electrical systems have continued to improve, including the use of 4,160 volt power systems and the introduction of electronic solid-state protective devices.

Protective devices were developed to monitor the essential parameters of electrical power systems and then through built-in logic, determine the degree of configuration of the system necessary to limit the damage to continuity of electric service for the vessel (Ykema 1988).

Fuses are the oldest form of protective device used in electrical power systems, both in commercial systems and on navy vessels. Circuit breakers were added around the turn of the century. The first electronic solid-state overcurrent protective device used by the Navy was installed on the 4,160 volt power system in Nimitz class carriers. Navy systems of today supply electrical energy to sophisticated weapons systems, communications systems, navigational systems, and operational systems. To maintain the availability of energy to the connected loads and keep all systems and equipment operational, navy electrical systems utilize fuses, circuit breakers, and protective relays to interrupt the smallest portion of the system under any abnormal condition.

The existing protection system has several shortcomings in providing continuous supply under battle and certain major failure conditions. The control strategies which are implemented when these types of damage occur are not effective in isolating only the loads affected by the damage, and are highly dependent on human intervention to manually reconfigure the distribution system to restore supply to healthy loads.

This paper discusses new techniques which aim to overcome the shortcomings of the protective system. These techniques are composed of advanced monitoring and control, automated failure location, automated intelligent system reconfiguration and restoration, and self-optimizing under partial failure.

These new techniques will eliminate human mistakes, make intelligent reconfiguration decisions more quickly, and reduce the manpower required to perform the functions. It will also provide optimal electric power service through the surviving system. With fewer personnel being available on ships in the future, the presence of this automated system on a ship may mean the difference between disaster and survival.

SHIPBOARD POWER SYSTEM STRUCTURE

Navy ships use three-phase power generated and distributed in an ungrounded delta configuration. Ungrounded systems are used to ensure continued operation of the electrical system despite the presence of a single phase-to-ground fault. The voltages are generated at 450 volts a.c. at 60 hertz. The most popular topology used in Navy electrical systems is a ring configuration of the generators, which provides more flexibility in terms of generation connection and system configuration. In this type of topology, any generator can provide power to any load. This feature is of great importance in order to ensure supply of power to vital loads if failure of an operating generating unit occurs.

Generator switchboards are composed of one or more switchgear units and are located close to their associated generators. Further, each generator switchboard is composed of three sections: one section contains the generator breaker, generator controls, breaker controls, and protective devices; the other two sections contain a bus tie breaker, load center breakers, and breakers for major loads.

Hybrid Electric Vehicle

INTRODUCTION

Have you pulled your car up to the gas pump lately and been shocked by the high price of gasoline? As the pump clicked past $20 or $30, maybe you thought about trading in that SUV for something that gets better mileage. Or maybe you are worried that your car is contributing to the greenhouse effect. Or maybe you just want to have the coolest car on the block.

Currently, there is a solution to all these problems: the hybrid electric vehicle. The vehicle is lighter and roomier than a purely electric vehicle, because there is less need to carry as many heavy batteries. The internal combustion engine in a hybrid electric vehicle is much smaller, lighter and more efficient than the engine in a conventional vehicle. In fact, most automobile manufacturers have announced plans to manufacture their own hybrid versions.

How does a hybrid car work? What goes on under the hood to give you 20 or 30 more miles per gallon than the standard automobile? And does it pollute less just because it gets better gas mileage? In this seminar we will study how this amazing technology works and also discuss TOYOTA & HONDA hybrid cars.

WHAT IS A "HYBRID ELECTRIC VEHICLE"?

Any vehicle is a hybrid when it combines two or more sources of power. In fact, many people have probably owned a hybrid vehicle at some point. For example, a mo-ped (a motorized pedal bike) is a type of hybrid because it combines the power of a gasoline engine with the pedal power of its rider.

Hybrid electric vehicles are all around us. Most of the locomotives we see pulling trains are diesel-electric hybrids. Cities like Seattle have diesel-electric buses -- these can draw electric power from overhead wires or run on diesel when they are away from the wires. Giant mining trucks are often diesel-electric hybrids. Submarines are also hybrid vehicles -- some are nuclear-electric and some are diesel-electric. Any vehicle that combines two or more sources of power that can directly or indirectly provide propulsion power is a hybrid.

The most commonly used hybrid is the gasoline-electric hybrid car, which is a cross between a gasoline-powered car and an electric car. A 'gasoline-electric hybrid car' or 'hybrid electric vehicle' is a vehicle which relies not only on batteries but also on an internal combustion engine, which drives a generator to provide the electricity and may also drive a wheel. In a hybrid electric vehicle the engine is the final source of the energy used to power the car. All-electric cars use batteries charged by an external source, leading to the problem of limited range, which the hybrid electric vehicle solves.


HYBRID STRUCTURE

You can combine the two power sources found in a hybrid car in different ways. One way, known as a parallel hybrid, has a fuel tank, which supplies gasoline to the engine. But it also has a set of batteries that supplies power to an electric motor. Both the engine and the electric motor can turn the transmission at the same time, and the transmission then turns the wheels.
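
A toy power-split model makes the parallel idea concrete: the engine and the motor both feed the transmission, and engine power not needed at the wheels can recharge the battery. All ratings and demand values below are invented for illustration and do not describe any real vehicle:

```python
# Minimal parallel-hybrid power split: engine + motor together meet wheel demand.
ENGINE_MAX_KW = 50.0   # small engine rating (assumed)
MOTOR_MAX_KW = 25.0    # electric motor rating (assumed)

def split_power(demand_kw):
    """Return (engine_kw, motor_kw, battery_kw); battery_kw < 0 means charging."""
    engine = min(demand_kw, ENGINE_MAX_KW)
    motor = min(max(demand_kw - engine, 0.0), MOTOR_MAX_KW)   # motor fills the gap
    # when the engine has spare capacity, divert a little of it to charging
    battery = motor if motor > 0 else -(ENGINE_MAX_KW - demand_kw) * 0.2
    return engine, motor, battery

for demand in (15.0, 45.0, 70.0):   # cruising, brisk acceleration, hard acceleration
    e, m, b = split_power(demand)
    print(f"demand {demand:4.0f} kW -> engine {e:4.1f} kW, "
          f"motor {m:4.1f} kW, battery {b:+5.1f} kW")
```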

Hy-Wire

INTRODUCTION

The world today consumes a large amount of energy. Most of the energy requirements are fulfilled using conventional sources of energy. Of the energy consumed, a large part is utilized by the automotive sector. If people continue using the conventional sources of energy at this rate, the earth will be facing an energy crisis very soon. The introduction of an efficient electric vehicle can greatly improve today's conditions by helping curb the use of traditional fuels.

The Hy-Wire, discussed in this paper, runs on the electricity generated by a hydrogen fuel cell, more accurately called the 'Proton Exchange Membrane' fuel cell. This fuel cell uses hydrogen as a source of fuel. The fuel cell produces dc voltage, which is converted to ac voltage and used to run an ac motor.

The by-wire concept removes the mechanical linkages and replaces all of them with wires and electromechanical actuators. This makes the whole vehicle lighter and more spacious. In the Hy-Wire vehicle, the whole system has been modeled into an 11-inch thick chassis. This chassis houses all the electrical and mechanical components of the vehicle. This allows the body to be customized and also lets the chassis architecture be changed with radical new designs.

The by-wire system is made practical by the higher voltages inherent in a fuel cell system. The 42-V technology is made use of in this vehicle. It is said to be a luxury car in the sense that it provides the space and visibility that a luxury car does.

Hydrogen Fuel Cells

A fuel cell is an electrochemical energy conversion device. A fuel cell converts the hydrogen and oxygen into water and in the process produces electricity. Such fuel cells, which use hydrogen as a source of fuel, are called hydrogen fuel cells. The other electrochemical device that we are all familiar with is the battery. A battery has all of its chemicals stored inside, and it converts those chemicals into electricity too. This means that a battery eventually goes dead and you either throw it away or recharge it. With a fuel cell, chemicals constantly flow into the cell so it never goes dead - as long as there is a flow of chemicals into the cell, the electricity flows out of the cell.

Sir William Grove invented the first fuel cell in 1839. He used dilute sulphuric acid as the electrolyte, oxygen as the oxidizing agent and hydrogen as the fuel. In 1959, Francis T. Bacon came up with an alkaline fuel cell, but it could produce only 5 kilowatts of power.

A fuel cell produces dc voltage that can be used for various needs. Fuel cells are classified into various types depending upon the electrolyte they use. They are classified as follows:
a) Direct methanol fuel cells
b) Solid oxide fuel cells
c) Phosphoric acid fuel cells
d) Alkaline fuel cells
e) Molten carbonate fuel cells

Illumination with Solid State lighting

INTRODUCTION

Light emitting diodes (LEDs) have gained broad recognition as the ubiquitous little lights that tell us that our monitors are on, the phone is off the hook or the oven is hot. The basic principle behind the emission of light is that when charge carrier pairs recombine in a semiconductor with an appropriate energy band gap, light is generated. In a forward biased diode, little recombination occurs in the depletion layer; most occurs within a few microns of either the P-region or the N-region, depending on which one is lightly doped. LEDs produce narrow-band radiation, with wavelength determined by the energy band gap of the semiconductor.
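
The link between band gap and emission colour is just λ ≈ hc/E_g; here is a short sketch with rounded, textbook-style band-gap values (assumed for illustration):

```python
# Emission wavelength from semiconductor band gap: lambda ~ h*c / Eg
H_EV_S = 4.1357e-15      # Planck constant, eV*s
C = 2.998e8              # speed of light, m/s

def emission_wavelength_nm(band_gap_ev):
    return H_EV_S * C / band_gap_ev * 1e9

# Rounded, illustrative band gaps (eV)
for material, eg in [("GaAsP (red)", 1.9), ("GaP-based (green)", 2.3), ("InGaN (blue)", 2.8)]:
    print(f"{material:18s} Eg = {eg:.1f} eV -> ~{emission_wavelength_nm(eg):.0f} nm")
```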

Solid state electronics have been replacing their vacuum tube predecessors for almost five decades. However, in the next decade LEDs will become bright, efficient and inexpensive enough to replace conventional lighting sources (i.e. incandescent bulbs and fluorescent tubes).

Recent developments in AlGaP and AlInGaP and in blue and green semiconductor growth technology have enabled applications where several to several million of these indicator LEDs can be packed together for use in full color signs, automotive tail lamps, traffic lights, etc. Still, the preponderance of applications requires that the viewer look directly into the LED. This is not "SOLID STATE LIGHTING".

Artificial lighting sources share three common characteristics:
- They are rarely viewed directly: light from the source is seen as a reflection off the illuminated object.
- The unit of measure is the kilolumen or higher, not the millilumen or lumen as in the case of LEDs.
- Lighting sources are predominantly white, with CIE color coordinates producing excellent color rendering.
Today there is no such commercially available "SOLID STATE LAMP". However, high power LED sources are being developed which will evolve into lighting sources.

EVOLUTION OF LEDs

The first practical LED was developed in 1962 and was made of a compound semiconductor alloy, gallium arsenide phosphide, which emitted red light. From 1962, compound semiconductors would provide the foundation for the commercial expansion of LEDs. From 1962, when the first LEDs were introduced at 0.001 lm/LED using GaAsP, until the mid-1990s, commercial LEDs were used exclusively as indicators. In terms of the number of LEDs sold, indicators and other small-signal applications in 2002 still consumed the largest volume of LEDs, with annual global consumption exceeding several LEDs per person on the planet.

Analogous to the famous Moore's law in silicon, which predicts a doubling of the number of transistors on a chip every 18-24 months, LED luminous output has been following Haitz's law, doubling every 18-24 months for the past 34 years.
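
Taking the text's own figures at face value, the doubling period can be compounded into an overall improvement factor over those 34 years, starting from the 0.001 lm/LED quoted above for 1962:

```python
# Compounding the doubling rates quoted above over 34 years (illustrative arithmetic only)
start_lm = 0.001          # 1962 GaAsP indicator LED, lumens per device (from the text)
years = 34

for months_per_doubling in (18, 24):
    doublings = years * 12 / months_per_doubling
    factor = 2 ** doublings
    print(f"doubling every {months_per_doubling} months -> x{factor:,.0f} "
          f"-> ~{start_lm * factor:,.0f} lm per LED")
```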

Intelligent Management Of Electrical Systems in Industries

INTRODUCTION

Industrial plants have put continuous pressure on advancing process automation. However, there has not been as much focus on the automation of electricity distribution networks, although uninterrupted electricity distribution is a basic requirement for the process. A disturbance in the electricity supply causing the "downrun" of the process may cost a huge amount of money. Thus the intelligent management of electricity distribution, including, for example, preventive condition monitoring and on-line reliability analysis, has great importance.

Nowadays the above needs have aroused increased interest in the electricity distribution automation of industrial plants. The automation of public electricity distribution has developed very rapidly in the past few years. Very promising results have been gained, for example, in decreasing the outage times of customers. However, the same concept as such cannot be applied in the field of industrial electricity distribution, although the bases of the automation systems are common. The infrastructures of different industrial plants vary more from each other compared to public electricity distribution, which is a more homogeneous domain. The automation devices, computer systems, and databases are not at the same level, and their integration is more complicated.


Applications for supporting the public distribution network management


It was already seen at the end of the 1980s that the conventional automation system (i.e. SCADA) cannot solve all the problems related to network operation. On the other hand, the different computer systems (e.g. AM/FM/GIS) include a vast amount of data which is useful in network operation. The operators also had considerable heuristic knowledge to be utilized. Thus new tools for practical problems were called for, for which AI-based methods (e.g. the object-oriented approach, rule-based techniques, uncertainty modeling and fuzzy sets, hypertext techniques, neural networks and genetic algorithms) offer new problem-solving methods.

So far a computer system entity, called a distribution management system (DMS), has been developed. The DMS is part of an integrated environment composed of the SCADA, distribution automation (e.g. microprocessor-based protection relays), the network database (i.e. AM/FM/GIS), the geographical database, the customer database, and the automatic telephone answering machine system. The DMS includes many intelligent applications needed in network operation. Such applications are, for example, normal state monitoring and optimization, real-time network calculations, short-term load forecasting, switching planning, and fault management.

The core of the whole DMS is the dynamic object-oriented network model. The distribution network is modeled as dynamic objects which are generated based on the network data read from the network database. The network model includes the real-time state of the network (e.g. topology and loads). Different network operation tasks call for different kinds of problem-solving methods. The various modules can operate interactively with each other through the network model, which works as a blackboard (e.g. the results of load flow calculations are stored in the network model, where they are available to all other modules for different purposes).
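
The "network model as blackboard" idea can be sketched as a shared object that every DMS module reads from and writes to; this is a generic illustration of the pattern, with invented class and module names, not the actual DMS code:

```python
# Generic blackboard pattern: modules communicate only through a shared network model.
class NetworkModel:
    """Shared, dynamic model of the distribution network (the 'blackboard')."""
    def __init__(self):
        self.topology = {}      # switch states, feeder connectivity
        self.loads = {}         # latest load estimates per node
        self.results = {}       # results posted by modules, keyed by module name

class LoadFlowModule:
    def run(self, model: NetworkModel):
        # ...solve load flow from model.topology and model.loads...
        model.results["load_flow"] = {"node_voltages": {}, "branch_currents": {}}

class FaultLocationModule:
    def run(self, model: NetworkModel):
        # reuses whatever the load-flow module posted on the blackboard
        flow = model.results.get("load_flow", {})
        model.results["fault_location"] = {"candidate_sections": [], "used_load_flow": bool(flow)}

model = NetworkModel()
for module in (LoadFlowModule(), FaultLocationModule()):
    module.run(model)
print(model.results.keys())   # both modules' results now live on the shared model
```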

The present DMS is a Windows NT program implemented in Visual C++. Prototyping consisted of an iteration loop of knowledge acquisition, modeling, implementation, and testing. Prototype versions were tested in a real environment from the very beginning. Thus feedback on new inference models, external connections, and the user interface was obtained at a very early stage. The aim of a real application in the technical sense was thus achieved. The DMS entity was tested in the pilot company, Koillis-Satakunnan Sähkö Oy, which has about 1000 distribution substations and 1400 km of 20 kV feeders. In the pilot company, different versions of the fault location module have been used in the past years in over 300 real faults.

Most of the faults have been located with an accuracy of some hundreds of meters, while the distance of a fault from the feeding point has been from a few to tens of kilometers. The fault location system, together with other automation, has been one reason for the reduced outage times of customers (about 50% in the past 8 years).

Isoloop Magnetic Couplers

INTRODUCTION

Couplers, also known as "isolators" because they electrically isolate as well as transmit data, are widely used in industrial and factory networks, instruments, and telecommunications. Everyone knows the problems with optocouplers: they take up a lot of space, they are slow, they age, and their temperature range is quite limited. For years, optical couplers were the only option. Over the years, most of the components used to build instrumentation circuits have become ever smaller. Optocoupler technology, however, hasn't kept up. Existing coupler technologies look like dinosaurs on modern circuit boards.

Magnetic couplers are analogous to optocouplers in a number of ways. Design engineers, especially in instrumentation technology, will welcome a galvanically-isolated data coupler with integrated signal conversion in a single IC. My report will give a detailed study about 'Isoloop Magnetic Couplers'.

2. INDUSTRIAL NETWORKS NEED ISOLATION

2.1 GROUND LOOPS
When equipment using different power supplies is tied together (with a common ground connection) there is a potential for ground loop currents to exist. This is an induced current in the common ground line resulting from a difference in ground potential at each piece of equipment. Normally, not all grounds are at the same potential.
Widespread electrical and communications networks often have nodes with different ground domains. The potential difference between these grounds can be AC or DC, and can contain various noise components. Grounds connected by cable shielding or a logic line ground can create a ground loop: unwanted current flow in the cable. Ground-loop currents can degrade data signals, produce excessive EMI, damage components, and, if the current is large enough, present a shock hazard.

Galvanic isolation between circuits or nodes in different ground domains eliminates these problems, seamlessly passing signal information while isolating ground potential differences and common-mode transients. Adding isolation components to a circuit or network is considered good design practice and is often mandated by industry standards. Isolation is frequently used in modems, LAN and industrial network interfaces (e.g., network hubs, routers, and switches), telephones, printers, fax machines, and switched-mode power supplies.

GALVANIC COUPLERS
Magnetic couplers are analogous to optocouplers in a number of ways. Optocouplers transmit signals by means of light through a bulk dielectric that provides galvanic isolation.

Large Scale Power Generation Using Fuel Cell

INTRODUCTION

Technology is increasing our energy needs, but it is also opening up new ways to generate power more effectively with less impact on the environment. One of the most promising options for supplementing future power supplies is the fuel cell. Fuel cells have the potential to provide much more reliable power, with lower levels of undesirable emissions and noise and higher overall efficiency than more traditional power generation systems. With existing and projected applications ranging from spacecraft to private automobiles, and from large stationary power generation systems to small electronic devices, fuel cells are poised to play an increasingly critical role in meeting the world's growing demand for clean, reliable power.

What is a fuel cell?

A fuel cell is an electrochemical energy conversion device which combines hydrogen and oxygen to produce electricity by stripping electrons from the hydrogen. The hydrogen used is extracted from natural gas, propane and other common fuels, and the oxygen comes from air.

A fuel cell system consists of the following major components:
1. A fuel cell stack
2. A processor to extract pure hydrogen from the fuel source
3. A storage and power conditioning system to adapt the fuel cell's continuous power output to fluctuating demand
4. A mechanism for recovering heat from the electrochemical process

The remainder of the system consists of pumps, compressors and controls.
Fuel cell stack: in the fuel cell stack, purified hydrogen and oxygen from the air pass through linked plates similar to those in a battery. The electrochemical reaction generates electricity and heat.
An energy storage and power conditioning system adapts the fuel cell's maximum power flow to fluctuating power loads. A battery storage system with a dc-ac inverter stores power from low demand periods for use during peak demand.

A heat recovery system directs heat from the jacket of water surrounding the fuel cell into a preheat tank for the domestic hot water system.

Types of fuel cells.

There are different types of fuel cells.
1. Research is underway to further develop the proton exchange membrane (PEM) fuel cell.
2. The proton exchange membrane fuel cell uses one of the simplest reactions of any fuel cell.

PEM fuel cell history

PEM technology was developed after 1960 for the U.S. Navy and Army. The first unit was fueled by hydrogen generated by mixing water and lithium hydride.

The next development in PEM technology was for NASA's Project Gemini in the early days of the U.S. piloted space program. Batteries had provided power for earlier missions, but future missions would be longer, requiring a different power source.

By the mid-1970s, PEM cells had been developed for underwater life support, leading to the US Navy oxygen generating plant.

Local Multipoint Distribution Service

INTRODUCTION

Local Multipoint Distribution Service (LMDS), or Local Multipoint Communication Systems (LMCS), as the technology is known in Canada, is a broadband wireless point-to-multipoint communication system operating above 20 GHz that can be used to provide digital two-way voice, data, Internet, and video services. The term "local" indicates the signal's limited range; "multipoint" indicates that the signal is broadcast from a central point to multiple subscribers; and "distribution" refers to the wide range of traffic that can be transmitted, anything from voice or video to Internet and video traffic. It provides high capacity point-to-multipoint data access that is less investment intensive.

Services using LMDS technology include high-speed Internet access, real-time multimedia file transfer, remote access to corporate local area networks, interactive video, video-on-demand, video conferencing, and telephony among other potential applications. In the United States LMDS uses 1.3 GHz of RF spectrum to transmit voice, video and fast data to and from homes and businesses. With current LMDS technology, this roughly translates to a 1 Gbps digital data pipeline. Canada already has 3 GHz of spectrum set aside for LMDS and is actively setting up systems around the country. Many other developing countries see this technology as a way to bypass the expensive implementation of cable or fiber optics into the twenty-first century.
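
Relating the two figures just quoted, about 1 Gbps carried in 1.3 GHz of spectrum corresponds to a modest overall spectral efficiency, as a one-line check shows:

```python
# Rough spectral-efficiency check using the figures quoted above
data_rate_bps = 1.0e9        # ~1 Gbps aggregate pipeline
spectrum_hz = 1.3e9          # 1.3 GHz of RF spectrum allocated to LMDS in the US

print(f"~{data_rate_bps / spectrum_hz:.2f} bit/s per Hz")   # ~0.77 bit/s/Hz overall
```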

1.1 IMPORTANCE OF LMDS

Point-to-point fixed wireless networks have been commonly deployed to offer high-speed dedicated links between high-density nodes in a network. More recent advances in point-to-multipoint technology offer service providers a method of providing high capacity local access that is less capital intensive than a wireline solution, faster to deploy than wireline, and able to offer a combination of applications. Moreover, as a large part of a wireless network's cost is not incurred until the Customer Premise Equipment (CPE) is installed, the network service operator can time capital expenditures to coincide with the signing of new customers. LMDS provides an effective last-mile solution for the incumbent service provider and can be used by competitive service providers to deliver services directly to end-users.

1.2 BENEFITS OF LMDS

The main benefits of LMDS are listed below:
- Lower entry and deployment costs
- Ease and speed of deployment (systems can be deployed rapidly with minimal disruption to the community and environment)
- Fast realization of revenue (as a result of rapid deployment)
- Demand-based build-out (scalable architecture employing open industry standards, ensuring services and coverage areas can be easily expanded as customer demand warrants)
- Cost shift from fixed to variable components (for wireline systems most of the capital investment is in the infrastructure, while with LMDS a greater percentage of investment is shifted to CPE)
- No stranded capital when customers churn
- Cost-effective network maintenance, management, and operating costs

Low - k Dielectrics

INTRODUCTION

In this fast moving world, time delay is one of the most dreaded situations in the field of data communication. A delay in communication is as bad as losing the information, whether it is on the internet, on television or over a telephone. We need to find different ways to improve communication speed. The various methods adopted by the communication industry are wireless technology, optical communications, ultra wide band communication networks, etc. But all these methods need significant initial capital, which makes them cost ineffective. So improving the existing network is very important, especially in a country like India.

A communication system mainly consists of a transceiver and a channel. The transceiver is the core of all data communications. It contains a very wide variety of electronic components, mostly integrated into different forms of IC chips. These ICs provide the various signal modifications like amplification, modulation, etc. The delay caused in these circuits will definitely affect the speed of data communication.

This is where the topic LOW-k DIELECTRICS becomes relevant. It is one of the most recent developments in the field of integrated electronics. Most ICs are manufactured using CMOS technology. This technology has an embedded coupling capacitance that reduces the speed of operation. There are many other logic families available, like RTL, DTL, ECL, TTL, etc., but all of them have higher power consumption than CMOS technology. So the industry prefers CMOS over other logic families.

Inside the IC there are many interconnections between points in the CMOS substrate. These refer to the connections between the different transistors in the IC. For example, in the case of NAND logic there are many connections between the transistors and their feedbacks. These connections are made by the INTERCONNECT inside the IC. Aluminum has been the material of choice for the circuit lines used to connect transistors and other chip components. These thin aluminum lines must be isolated from each other with an insulating material, usually silicon dioxide (SiO2).

This basic circuit construction technique has worked well through the many generations of computer chip advances predicted by Moore's Law. However, as aluminum circuit lines approach 0.18 µm in width, the limiting factor in computer processor speed shifts from the transistors' gate delay to the interconnect delay caused by the aluminum lines and the SiO2 insulation material. With the introduction of copper lines, part of the "speed limit" has been removed. However, the properties of the dielectric material between the layers and lines must now be addressed. Although integration of low-k will occur at the 0.13 µm technology node, industry opinion is that the 0.10 µm generation, set for commercialization in 2003 or 2004, will be the true proving ground for low-k dielectrics, because the whole industry will need to use low-k at that line width.
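
Since line-to-line capacitance scales linearly with the dielectric constant, the RC interconnect delay improves roughly in proportion to the reduction in k; the back-of-the-envelope comparison below uses common textbook k values as assumptions:

```python
# Relative RC interconnect delay: delay ~ R * C, and C scales linearly with k.
# Illustrative dielectric constants (assumed typical values, not from the text):
K_SIO2 = 4.0       # conventional SiO2 inter-level dielectric
K_LOW_K = 2.7      # a representative low-k material

# Holding the metal resistance R and the geometry fixed, the delay ratio is k_new / k_old
improvement = K_SIO2 / K_LOW_K
print(f"Switching the dielectric alone cuts the RC delay by ~{(1 - 1/improvement)*100:.0f}% "
      f"(a {improvement:.2f}x speedup in the interconnect term)")
```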

Mesh Radio

INTRODUCTION

Governments are keen to encourage the roll-out of broadband interactive multimedia services to business and residential customers because they recognise the economic benefits of e-commerce, information and entertainment. Digital cable networks can provide a compelling combination of simultaneous services including broadcast TV, VOD, fast Internet and telephony. Residential customers are likely to be increasingly attracted to these bundles as the cost can be lower than for separate provision. Cable networks have therefore been implemented or upgraded to digital in many urban areas in the developed countries.

ADSL has been developed by telcos to allow on-demand delivery via copper pairs. A bundle comparable to cable can be provided if ADSL is combined with PSTN telephony and satellite or terrestrial broadcast TV services, but incumbent telcos have been slow to roll it out and 'unbundling' has not proved successful so far. Some telcos have been accused of restricting ADSL performance and keeping prices high to protect their existing business revenues. Prices have recently fallen, but even now the ADSL (and SDSL) offerings are primarily targeted at provision of fast (but contended) Internet services for SME and SOHO customers. This slow progress (which is partly due to the unfavourable economic climate) has also allowed cable companies to move slowly.


A significant proportion of customers in suburban and semi-rural areas will only be able to have ADSL at lower rates because of the attenuation caused by the longer copper drops. One solution is to take fibre out to street cabinets equipped for VDSL but this is expensive, even where ducts are already available.

Network operators and service providers are increasingly beset by a wave of technologies that could potentially close the gap between their fibre trunk networks and a client base that is all too anxious for the industry to accelerate the rollout of broadband. While the established vendors of copper-based DSL and fibre-based cable are finding new business, many start-up operators, discouraged by the high cost of entry into wired markets, have been looking to evolving wireless radio and laser options.

One relatively late entrant into this competitive mire is mesh radio, a technology that has quietly emerged to become a potential holder of the title 'next big thing'. Mesh Radio is a new approach to Broadband Fixed Wireless Access (BFWA) that avoids the limitations of point to multi-point delivery. It could provide a cheaper '3rd Way' to implement residential broadband that is also independent of any existing network operator or service provider. Instead of connecting each subscriber individually to a central provider, each is linked to several other subscribers nearby by low-power radio transmitters; these in turn are connected to others, forming a network, or mesh, of radio interconnections that at some point links back to the central transmitter.

MicroGrid

INTRODUCTION

Evolutionary changes in the regulatory and operational climate of traditional electric utilities and the emergence of smaller generating systems such as microturbines have opened new opportunities for on-site power generation by electricity users. In this context, distributed energy resources (DER) - small power generators typically located at users' sites where the energy (both electric and thermal) they generate is used - have emerged as a promising option to meet growing customer needs for electric power with an emphasis on reliability and power quality.

The portfolio of DER includes generators, energy storage, load control, and, for certain classes of systems, advanced power electronic interfaces between the generators and the bulk power provider. This paper proposes that the significant potential of smaller DER to meet customers' and utilities' needs can best be captured by organizing these resources into MicroGrids.

The MicroGrid concept assumes an aggregation of loads and microsources operating as a single system providing both power and heat. The majority of the microsources must be power electronic based to provide the flexibility required to ensure operation as a single aggregated system. This control flexibility allows the MicroGrid to present itself to the bulk power system as a single controlled unit that meets local needs for reliability and security.

The MicroGrid would most likely exist on a small, dense group of contiguous geographic sites that exchange electrical energy through a low voltage (e.g., 480 V) network and heat through the exchange of working fluids. In the commercial sector, heat loads may well be absorption cooling. The generators and loads within the cluster are placed and coordinated to minimize the joint cost of serving electricity and heat demand, given prevailing market conditions, while operating safely and maintaining power balance and quality. MicroGrids move the power quality and reliability (PQR) choice closer to the end uses and permit it to match the end user's needs more effectively. MicroGrids can therefore improve the overall efficiency of electricity delivery at the point of end use, and, as MicroGrids become more prevalent, the PQR standards of the macrogrid can ultimately be matched to the purpose of bulk power delivery.
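As a minimal sketch of the "single controlled unit" idea (all figures below are invented for illustration), the MicroGrid can be thought of as summing its microsource outputs against the local load and presenting only the net balance to the bulk grid:

# Illustrative sketch (hypothetical numbers): a MicroGrid meeting local demand
# with its own microsources and exposing a single net figure to the bulk grid.
microsources_kw = {"microturbine_1": 60.0, "microturbine_2": 45.0, "pv_array": 20.0}
local_load_kw = 150.0

generated_kw = sum(microsources_kw.values())
net_import_kw = local_load_kw - generated_kw   # positive: import, negative: export

print(f"Local generation: {generated_kw:.1f} kW")
print(f"Net exchange with bulk grid: {net_import_kw:+.1f} kW")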

MICROGRID ARCHITECTURE

The MicroGrid structure assumes an aggregation of loads and microsources operating as a single system providing both power and heat. The majority of the microsources must be power electronic based to provide the flexibility required to ensure controlled operation as a single aggregated system. This control flexibility allows the MicroGrid to present itself to the bulk power system as a single controlled unit, have plug-and-play simplicity for each microsource, and meet the customers' local needs. These needs include increased local reliability and security.

Nuclear Batteries

INTRODUCTION

Micro electro mechanical systems (MEMS) comprise a rapidly expanding research field with potential applications varying from sensors in air bags, wrist-worn GPS receivers and matchbox-size digital cameras to more recent optical applications. Depending on the application, these devices often require an on-board power source for remote operation, especially in cases requiring operation for an extended period of time. In the quest to boost micro-scale power generation, several groups have turned their efforts to well-known energy sources, namely hydrogen and hydrocarbon fuels such as propane, methane, gasoline and diesel.

Some groups are developing micro fuel cells that, like their macro-scale counterparts, consume hydrogen to produce electricity. Others are developing on-chip combustion engines, which actually burn a fuel like gasoline to drive a minuscule electric generator. But all these approaches have difficulties regarding low energy densities, elimination of by-products, down-scaling and recharging. These difficulties can be overcome to a large extent by the use of nuclear micro batteries.

Radioisotope thermoelectric generators (RTGs) exploited the extraordinary potential of radioactive materials for generating electricity. RTGs are particularly used for generating electricity in space missions, using a process known as the Seebeck effect. The problem with RTGs is that they do not scale down well, so scientists had to find other ways of converting nuclear energy into electrical energy. They have succeeded by developing nuclear batteries.


NUCLEAR BATTERIES

Nuclear batteries use the incredible amount of energy released naturally by tiny bits of radioactive material, without any fission or fusion taking place inside the battery. These devices use thin radioactive films that pack in energy at densities thousands of times greater than those of lithium-ion batteries. Because of this high energy density, nuclear batteries are extremely small. Considering the battery's small size and shape, the scientists who developed it fancifully call it the "daintiest dynamo" ('dainty' meaning delicately small and pretty).

Types of nuclear batteries

Scientists have developed two types of micro nuclear batteries: the junction type battery and the self-reciprocating cantilever. The operation of each is explained below.

1. JUNCTION TYPE BATTERY

This kind of nuclear battery directly converts the high-energy particles emitted by a radioactive source into an electric current. The device consists of a small quantity of Ni-63 placed near an ordinary silicon p-n junction - a diode, basically.

WORKING:

As the Ni-63 decays, it emits beta particles, which are high-energy electrons that spontaneously fly out of the radioisotope's unstable nucleus. The emitted beta particles ionize the diode's atoms, creating electron-hole pairs that are separated in the vicinity of the p-n interface. These separated electrons and holes stream away from the junction, producing a current.

It has been found that beta particles with energies below 250 keV do not cause substantial damage in Si [4] [5]. The maximum and average energies (66.9 keV and 17.4 keV respectively) of the beta particles emitted by Ni-63 are well below this threshold energy at which damage is observed in silicon. The long half-life (100 years) makes Ni-63 very attractive for remote, long-life applications such as powering spacecraft instrumentation. In addition, the beta particles emitted by Ni-63 travel a maximum of 21 micrometers in silicon before being absorbed; if the particles were more energetic they would travel longer distances and could escape the device. All of these properties make Ni-63 ideally suited for use in nuclear batteries.
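A back-of-the-envelope sketch gives a feel for the currents involved. Using the 17.4 keV average beta energy quoted above, the roughly 3.6 eV needed to create one electron-hole pair in silicon, and an assumed source activity of 1 mCi (the activity is an illustrative assumption, not a figure from the text), the ideal loss-free current works out to tens of nanoamperes:

# Back-of-the-envelope sketch (assumed 1 mCi source activity).
ELEMENTARY_CHARGE = 1.602e-19      # C
E_AVG_BETA_EV = 17.4e3             # average Ni-63 beta energy, eV (from text)
W_PAIR_EV = 3.6                    # energy per electron-hole pair in Si, eV
ACTIVITY_BQ = 3.7e7                # assumed 1 mCi source = 3.7e7 decays/s

pairs_per_decay = E_AVG_BETA_EV / W_PAIR_EV
ideal_current_a = ACTIVITY_BQ * pairs_per_decay * ELEMENTARY_CHARGE

print(f"~{pairs_per_decay:.0f} electron-hole pairs per beta particle")
print(f"Ideal (loss-free) current: {ideal_current_a * 1e9:.1f} nA")   # ~29 nA

Real devices collect only a fraction of the pairs, so the delivered current is lower still, which is why such batteries target very low-power, long-duration loads.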

Optical Interconnects

INTRODUCTION

Parallel optical interfaces can be conceived that consist of arrays of optoelectronic devices of the order of one thousand optical channels, each running at speeds around 1 Gbit/s and hence offering an overall capacity of around 1 Tbit/s to a single integrated circuit. Although there are still unresolved difficulties in the areas of architectural design, manufacturing processes, simulation and packaging (as explained later), the technology has now developed to the point that it is possible to contemplate its use in commercial systems within a time-frame of 5-10 years. Fig. 1 shows the concept of chip-to-chip communication using optics.

The idea of using optical techniques to address the chip-to-chip interconnection problem has been around for a long time. However, it is only in the last few years that technology with a realistic promise of eventual commercial application has emerged. Progress can be attributed to a shift away from trying to develop custom VLSI techniques with in-built optoelectronic capability, towards developing techniques that allow parallel arrays of separately fabricated optoelectronic devices to be tightly integrated with standard foundry VLSI electronics, e.g. CMOS.

2. Motivation

Optics can reduce the energy required for irreversible communication of logic-level signals inside digital processing machines. This is because quantum detectors and quantum sources can perform an effective impedance transformation that matches the high impedance of small devices to the low impedances encountered in electromagnetic propagation. This energy argument suggests that all except the shortest intra-chip communication should be optical.

There is a limit to the total number of bits per second of information that can flow in a simple digital electrical interconnection, set by the aspect ratio of the interconnection. This limit is largely independent of the details of the design of the electrical lines. Because the limit is scale-invariant, neither growing nor shrinking the system substantially changes it; exceeding it requires additional measures such as multi-level modulation. Such a limit will become a problem for high-bandwidth machines. Optical interconnects can solve this problem since they avoid the resistive losses that give rise to the limit.

3. Capability and Limitations of Electrical Interconnects

The physical origins of the limitations of conventional electrical interconnects are listed below.

3.1 Frequency-Dependent Loss: The main physical limitation on the use of electrical signalling over long distances is frequency-dependent loss due to the skin effect and dielectric absorption. Attenuation due to the skin effect increases in proportion to √f above a certain critical frequency. This gives rise to a so-called 'aspect-ratio' limit on the total capacity of an electrical interconnect:

B ≤ B0 · A / L²

where A is the total cross-sectional area of the interconnect wiring and L is its length. The constant of proportionality B0 is related to the resistivity of copper interconnects and depends only on the particular fabrication technology; it ranges from about 10^15 bit/s to 10^16 bit/s.

The aspect-ratio limit is scale-invariant and applies equally to board-to-board interconnects as well as to connections on a multichip module. Also, for a fixed cross-section, the limit is independent of whether the interconnect is made up of many slow wires or a few fast wires. The aspect-ratio limit is part of the reason why fibre optics has replaced coaxial cable in telecommunication networks.

Attenuation due to dielectric absorption increases in proportion to frequency, leading to an upper limit on operating speed which is inversely proportional to distance.

It is independent of conductor cross-section and is not scale-invariant. For a 1 Gbit/s interconnect, it would limit the distance to about 1 m in a standard fibre-glass board and perhaps 10 m in a good low-loss material such as polytetrafluoroethylene (PTFE).
However, attenuation due to dielectric absorption does not limit the overall bandwidth of an interconnect over a certain distance in the same way as the skin effect, because a higher overall bandwidth could be obtained by using more conductors within the same cross-section.
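The two limits just described can be put into numbers with a short sketch. The constants below are the order-of-magnitude values quoted above (B0 at the lower end of its range, and the ~1 Gbit/s at 1 m figure for fibre-glass board); the 1 cm² cross-section and 0.5 m length are hypothetical geometry chosen only for illustration:

# Illustrative sketch of the two electrical-interconnect limits described above.
B0 = 1e15                       # aspect-ratio constant, bit/s (from text, lower end)
FR4_LIMIT_AT_1M = 1e9           # ~1 Gbit/s at 1 m for fibre-glass board (from text)

def aspect_ratio_limit(cross_section_m2, length_m):
    """Skin-effect ('aspect-ratio') limit: B <= B0 * A / L^2."""
    return B0 * cross_section_m2 / length_m**2

def dielectric_limit(length_m, limit_at_1m=FR4_LIMIT_AT_1M):
    """Dielectric-absorption limit: operating speed inversely proportional to distance."""
    return limit_at_1m / length_m

A, L = 1e-4, 0.5                # hypothetical 1 cm^2 cross-section over 0.5 m
print(f"Aspect-ratio limit: {aspect_ratio_limit(A, L):.2e} bit/s")
print(f"Dielectric limit:   {dielectric_limit(L):.2e} bit/s")

Even this generous geometry caps the electrical link well below what a thousand-channel optical interface running at around 1 Gbit/s per channel could carry.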

Optical Technology in Current Measurement

INTRODUCTION

Over the past 15 years, optical current sensors have received significant attention from a number of research groups around the world as next-generation high voltage measurement devices, with a view to replacing iron-core current transformers (CTs) in the electric power industry. Optical current sensors bring the significant advantages that they are non-conductive and lightweight, which allows for much simpler insulation and mounting designs. In addition, optical sensors do not exhibit hysteresis and provide a much larger dynamic range and frequency response than iron-core CTs.

A common theme of many optical current sensors is that they work on the principle of the Faraday effect. Current measurement plays an important role in the protection and control of electric power systems. The conventional CT has been developed to an accuracy of 0.2% under steady-state conditions. However, many disadvantages of the conventional CT appear as the short-circuit capacities of electric power systems get larger and the voltage levels go higher: for example, saturation under fault current conditions, ferroresonance effects, and the potential for catastrophic failure. Today there is considerable interest in using the optical current transformer (OCT) to measure electric current by means of the Faraday effect.

The benefits of an OCT are the inverse of the conventional CT's problems: no saturation under fault current conditions; no iron core and therefore no ferroresonance effects; no oil and therefore no risk of explosion; light weight; small size; and so on.


Current flowing in a conductor induces a magnetic field which, through the Faraday effect, rotates the plane of polarization of light travelling in a sensing path encircling the conductor. Ampere's law guarantees that if the light is uniformly sensitive to the magnetic field all along the sensing path, and the sensing path forms a closed loop, then the accumulated rotation of the plane of polarization is directly proportional to the current flowing in the enclosed conductor.
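A minimal sketch of that proportionality, for a fibre sensing loop wound N times around the conductor, is theta = V * mu0 * N * I, where V is the Verdet constant of the fibre material. The Verdet constant and turn count below are assumed illustrative values, not figures from the text:

# Illustrative sketch of the Faraday-effect relation used by optical CTs.
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability, T*m/A
VERDET = 3.7                   # assumed Verdet constant of silica fibre, rad/(T*m)
N_TURNS = 10                   # assumed number of fibre turns around the busbar

def current_from_rotation(theta_rad):
    """Invert theta = VERDET * MU0 * N_TURNS * I to recover the enclosed current."""
    return theta_rad / (VERDET * MU0 * N_TURNS)

# A measured polarization rotation of 0.0465 rad would correspond to roughly:
print(f"{current_from_rotation(0.0465):.0f} A")   # ~1000 A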

The sensor is insensitive to all externally generated magnetic fields, such as those created by currents flowing in nearby wires. A measurement of the polarization-state rotation thus yields a measurement of the desired current. The technology originated 8 years ago to measure currents in series capacitor installations. Since then, it has been introduced not only to series capacitor and thyristor-controlled series capacitor installations (FACTS), but also into high voltage direct current (HVDC) systems.

These FACTS and HVDC systems gain their very high availability and reliability by using the optically powered CT technology. Further integration of the optically powered technology has led to an economical and robust metering and protection current transformer without any of the known environmental problems associated with oil- or SF6-gas-filled technology.

Researchers have perfected the optically powered CT (OPCT) to measure currents and transmit the data from the high voltage system to ground potential using state-of-the-art laser technology. Fundamental to this technology is the use of fibre optic cables to isolate the current transformers from ground potential. The advantages of the optically powered scheme compared with the conventional, high voltage, free-standing magnetic CT include an environmentally friendly, lightweight, non-seismic-critical composite signal column together with proven, conventional, low-voltage-rated 'dry type' CT technology.

PEA Space Charge Measurement System

INTRODUCTION

Pulsed electro-acoustic analysis (PEA) can be used for space charge measurements under dc or ac fields. The PEA method is a non-destructive technique for profiling space charge accumulation in polymeric materials; it was first proposed by T. Takada et al. in 1985 and has since been used for a variety of applications. PEA systems can measure space charge profiles in the thickness direction of a specimen with a resolution of around 10 microns and a repetition rate of the order of milliseconds. The experimental results contribute to the investigation of charge transport in dielectrics, the aging of insulating materials and the clarification of the effect of chemical properties on space charge formation. The PEA method measures only net charge and does not indicate the source of the charge.
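The spatial resolution quoted above comes from timing: in a PEA measurement, the acoustic pulse launched by each charge layer reaches the transducer after a delay set by the speed of sound in the sample, so arrival time maps directly to depth, z = v_sound * t. A minimal sketch, with an assumed acoustic velocity typical of a polymer (the value is an assumption, not from the text):

# Illustrative sketch (assumed acoustic velocity in the polymer).
V_SOUND = 2000.0          # assumed longitudinal sound velocity, m/s

def depth_from_arrival_time(t_seconds):
    """Convert acoustic arrival time to charge depth below the electrode."""
    return V_SOUND * t_seconds

# A pulse arriving 50 ns after the surface signal corresponds to a layer at:
print(f"{depth_from_arrival_time(50e-9) * 1e6:.0f} um")   # ~100 um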

Space charge measurement techniques include the thermal step, thermal pulse, piezoelectric pressure step, laser-induced pressure pulse and pulsed electro-acoustic methods. In the thermal step method, both electrodes are initially in contact with a heat sink at a temperature around -10 degrees Celsius. A heat source is then brought into contact with one electrode, and the temperature profile through the sample begins to evolve towards equilibrium consistent with the new boundary conditions.

The resulting thermal expansion of the sample causes a current to flow between the electrodes, and application of an appropriate deconvolution procedure using Fourier analysis allows extraction of the space charge distribution from the current-flow data. This technique is particularly suited to thicker samples (between 2 and 20 mm). Next is the thermal pulse technique. Its defining characteristic is a temporary, non-destructive displacement of the space charge in the bulk of a sample created by a travelling disturbance, such as a thermal wave, leading to a time-dependent change in the charge induced on the electrodes by the space charge. Compression or expansion of the sample will also contribute to the change in induced charge on the electrodes, through a change in relative permittivity. The change in electrode charge is analyzed to yield the space charge distribution.

The thermal pulse technique yields only the first moment of the charge distribution and its first few Fourier coefficients. Next is the laser-induced pressure pulse technique. A temporary displacement of space charge can also be achieved using a pressure pulse in the form of a longitudinal sound wave. Such a wave is generated, through conservation of momentum, when a small volume of a target attached to the sample is ablated following absorption of energy delivered in the form of a short laser pulse. The pressure pulse duration in laser-induced pressure pulse measurements depends on the laser pulse duration and can be chosen to suit the sample thickness, i.e. the thinner the sample, the shorter the laser pulse should be.

Space charge measurement has become a common method for investigating the dielectric properties of solid materials. Space charge observation is becoming the most widely used technique to evaluate polymeric materials for dc-insulation applications, particularly high-voltage cables. The presence of space charges is the main problem causing premature failure of high-voltage dc polymeric cables. It has been shown that insulation degradation under service stresses can be diagnosed by space charge measurements.

The term" space charge" means uncompensated real charge generated in the bulk of the sample as a result of (a) charge injection from electrodes, driven by a dc field not less than approximately 10 KV/mm, (b) application of mechanical/thermal stress, if the material is piezoelectric/ pyroelectric (c) field-assisted thermal ionization of impurities in the bulk of the dielectric.

Pebble-Bed Reactor

INTRODUCTION

The development of the nuclear power industry has been nearly stagnant in the past few decades; in fact, there has been no new nuclear power plant construction in the United States since the late 1970s. What many thought was a promising technology during the "Cold War" days of this nation is now frowned upon, despite the fact that nuclear power currently provides the world with 17% of its energy needs. Nuclear technology's lack of popularity is not difficult to understand, since fear of it has been promoted by the entertainment industry, news media and extremists. There is public fear because movies portray radiation as the cause of every biological mutation, and now terrorist threats against nuclear installations have been hypothesized. Also, a lack of understanding of nuclear science has kept the news media and extremists on the offensive. The accidents at Three Mile Island (TMI) and Chernobyl were real, and their effects were dangerous and, in the latter case, lethal. However, many prefer to give up the technology rather than learn from these mistakes.

Recently, there has been a resurgence of interest in nuclear power development by several governments, despite the resistance. The value of nuclear power as an alternative fuel source is still present and public fears have only served to make the process of obtaining approval more difficult. This resurgence is due to the real threat that global warming, caused by the burning of fossil fuels, is destroying the environment. Moreover, these limited resources are quickly being depleted because of their increased usage from a growing population.

It is estimated that developing countries will expand their energy consumption to 3.9 times today's level by the mid-21st century, and that global consumption will grow by 2.2 times. Development has been slow because deregulation of the power industry has forced companies to look for short-term-return, inexpensive solutions to our energy needs rather than investing in long-term-return, expensive solutions. Short-term solutions, such as the burning of natural gas in combined cycle gas turbines (CCGT), have been the most cost effective but remain resource limited. Therefore, a few companies and universities, subsidized by governments, are examining new ways to provide nuclear power.

An acceptable nuclear power solution for energy producers and consumers would depend upon safety and cost effectiveness. Many solutions have been proposed including the retrofit of the current light water reactors (LWR). At present, it seems the most popular solution is a High Temperature Gas Cooled Reactor (HTGR) called the Pebble Bed Modular Reactor (PBMR).

HISTORY OF PBMR

The history of gas-cooled reactors (GCR) began in November of 1943 with the graphite-moderated, air-cooled, 3.5-MW, X-10 reactor in Oak Ridge, Tennessee. Gas-cooled reactors use graphite as a moderator and a circulation of gas as a coolant. A moderator like graphite is used to slow the prompt neutrons created from the reaction such that a nuclear reaction can be sustained. Reactors used commercially in the United States are generally LWRs, which use light water as a moderator and coolant.

Development of the more advanced HTGRs began in the 1950s to improve upon the performance of the GCRs. HTGRs use helium as a gas coolant to increase operating temperatures. The initial HTGRs were the Dragon reactor in the U.K., developed in 1959, and, almost simultaneously, the Arbeitsgemeinschaft Versuchsreaktor (AVR) in Germany.

Dr Rudolf Schulten (considered the "father" of the pebble bed concept) decided to do something different for the AVR reactor. His idea was to compact silicon-carbide-coated uranium granules into hard, billiard-ball-like graphite spheres (pebbles) and use them as fuel for the helium-cooled reactor.

The first HTGR prototype in the United States was Peach Bottom Unit 1 in the late 1960s. Following the success of these reactors, the Fort St. Vrain (FSV) reactor in Colorado and the Thorium High Temperature Reactor (THTR-300) in Germany were constructed. These reactors used primary systems enclosed in prestressed concrete reactor vessels rather than the steel vessels of previous designs. The FSV incorporated ceramic-coated fuel particles embedded within rods placed in large hexagonal graphite elements, while the THTR-300 used spherical fuel elements (a pebble bed). These test reactors provided valuable information for future designs.

Polyfuse

INTRODUCTION

The polyfuse is a new standard for circuit protection that resets itself. Many manufacturers also call it a Polyswitch or Multifuse. Polyfuses are not fuses but Polymeric Positive Temperature Coefficient (PPTC) thermistors.

Several circuit protection schemes can be used in power supplies to provide protection against fault conditions and the resultant over-current and over-temperature damage. Current limiting can be accomplished by using resistors, fuses, switches, circuit breakers or positive temperature coefficient devices.

Resistors are rarely an acceptable solution because the high-power resistors required are expensive. One-shot fuses can be used, but they may fatigue and must be replaced after a fault event. Another available solution is the resettable Ceramic Positive Temperature Coefficient (CPTC) device, but this technology is not widely used because of its high resistance and power dissipation characteristics. These devices are also relatively large and vulnerable to cracking as a result of shock and vibration.

The preferred solution is the PPTC device, which has a very low resistance in normal operation and a high resistance when exposed to a fault. Electrical shorts and electrically overloaded circuits can cause over-current and over-temperature damage.

Like traditional fuses, PPTC devices limit the flow of dangerously high current during fault condition. Unlike traditional fuses, PPTC devices reset after the fault is cleared and the power to the circuit is removed. Because a PPTC device does not usually have to be replaced after it trips and because it is small enough to be mounted directly into a motor or on a circuit board, it can be located inside electronic modules, junction boxes and power distribution centers.

THE BASICS

Technically, polyfuses are not fuses but Polymeric Positive Temperature Coefficient thermistors. In thermistors characterized as positive temperature coefficient, the device resistance increases with temperature. PPTC circuit protection devices are formed from thin sheets of conductive semi-crystalline plastic polymer with electrodes attached to either side. The conductive plastic is basically a non-conductive crystalline polymer loaded with highly conductive carbon to make it conductive. The electrodes ensure the distribution of power through the circuit.

Polyfuses are usually packaged in radial, axial, surface mount, chip or washer form. They are available in voltage ratings of 30 to 250 V and current ratings of 20 mA to 100 A.


PRINCIPLE OF OPERATION

PPTC circuit protection devices are formed from a composite of semi-crystalline polymer and conductive carbon particles. At normal temperature, the carbon chains form a low-resistance conductive network through the polymer. If an excessive current flows through the device, the temperature of the conductive plastic material rises. When the temperature exceeds the device's switching temperature, the crystallites in the polymer suddenly melt and become amorphous. The increase in volume during melting of the crystalline phase causes separation of the conductive particles and results in a large non-linear increase in the resistance of the device. The resistance typically increases by three or more orders of magnitude.
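The trip behaviour can be pictured with a crude lumped-thermal sketch: the device self-heats under fault current until it passes its switching temperature, at which point the resistance jumps by orders of magnitude and the current collapses. All parameters below (resistances, switching temperature, thermal constants, supply voltage) are hypothetical illustration values, not manufacturer data:

# Illustrative lumped-thermal sketch of a PPTC device tripping (hypothetical values).
R_COLD, R_TRIPPED = 0.05, 100.0    # ohms, before and after the trip
T_SWITCH, T_AMB = 125.0, 25.0      # assumed switching and ambient temperature, deg C
HEAT_CAP, K_LOSS = 0.5, 0.02       # assumed thermal capacity (J/K) and loss (W/K)
SUPPLY_V = 12.0                    # assumed faulted supply voltage, V
DT = 0.01                          # simulation time step, s

temp, resistance = T_AMB, R_COLD
for step in range(2000):
    current = SUPPLY_V / resistance
    power_in = current**2 * resistance          # Joule self-heating
    power_out = K_LOSS * (temp - T_AMB)         # losses to surroundings
    temp += (power_in - power_out) * DT / HEAT_CAP
    if temp > T_SWITCH:
        resistance = R_TRIPPED                  # crystallites melt: device trips
        print(f"Tripped after {step * DT:.2f} s at {current:.0f} A")
        break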

Power Quality

INTRODUCTION

The aim of the power system has always been to supply electrical energy to customers. Earlier, the consumers of electrical energy were mere acceptors: interruptions and other voltage disturbances were part of the deal. But today electric power is viewed as a product with certain characteristics which can be measured, predicted, guaranteed, improved and so on. Moreover, it has become an integral part of our lives. The term 'power quality' emerged as a result of this new emphasis placed on the customer-utility relationship.

The fact that power quality has become an issue recently does not mean that it was not important in the past. Utilities all over the world have worked for decades on the improvement of what is now known as power quality. In recent years, users of electric power have detected an increasing number of problems caused by power quality variations. These variations already existed on the electrical system, but only recently have they begun causing serious problems. This is because end-use equipment has become more sensitive to disturbances that arise on both the utility side and the end-user side. End-use equipment is also more interconnected in networks and industrial processes, so that the impact of a problem with any piece of equipment is much more severe. To improve power quality with adequate solutions, it is necessary to know what kinds of disturbances occurred. A power quality monitoring system that is able to automatically detect, characterize and classify disturbances on electrical lines is therefore required.
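A minimal sketch of such automatic classification, working on RMS voltage samples expressed in per-unit of nominal, is shown below. The 0.1/0.9/1.1 per-unit thresholds are commonly used boundaries for interruptions, sags and swells, and the sample trace is invented for illustration:

# Illustrative sketch: labelling disturbances from per-unit RMS voltage samples.
def classify_rms(v_pu):
    """Label one RMS measurement (in per-unit of nominal voltage)."""
    if v_pu < 0.1:
        return "interruption"
    if v_pu < 0.9:
        return "sag"
    if v_pu > 1.1:
        return "swell"
    return "normal"

samples = [1.00, 0.98, 0.72, 0.70, 0.95, 1.12, 1.00]   # hypothetical RMS trace, pu
for i, v in enumerate(samples):
    print(f"sample {i}: {v:.2f} pu -> {classify_rms(v)}")

A practical monitor would also record the duration of each event and the associated waveform, but the thresholding step is the core of detection and classification.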


INCREASED INTEREST IN POWER QUALITY

Power quality is an increasingly important issue for all businesses. A recent study by IBM showed that power quality problems cost US businesses more than $15 billion a year. The increased interest in power quality has resulted in significant advances in monitoring equipment that can be used to characterize disturbances and power quality variations. The recent increased interest in power quality can be explained in a number of ways.

• Equipment has become more sensitive to voltage disturbances
Electronic and power electronic equipment has become much more sensitive to voltage disturbances than its counterparts of 10 or 20 years ago.

• Equipment causes voltage disturbances
Modern electronic and power electronic equipment is not only sensitive to voltage disturbances but also causes disturbances for other customers, e.g. the non-sinusoidal current drawn by rectifiers and inverters.

• The technical challenge taken up by utilities
Designing a system with a high reliability of supply at a limited cost is a technical challenge which has appealed to many in the power industry and hopefully still does.

• Power quality can be measured
The availability of electronic equipment to measure and display waveforms has certainly contributed to the interest in power quality.

Robotic Control Using Fuzzy Logic

INTRODUCTION

An automatic guided vehicle (AGV), or mobile robot, is an intelligent machine that determines its own motion according to environmental conditions. For an AGV to operate, it must sense its environment, be able to plan its operations, and then act based on this plan.

The running environment can vary in path orientation, road flatness, obstacle position, road surface friction and so on. There are a great many uncertainties about what conditions will emerge during operation. Thus a control method other than the conventional ones is needed to manage the response of the whole system.

In recent years, fuzzy logic has been applied extensively to mobile robot and autonomous vehicle control. The best arguments supporting fuzzy control are its ability to cope with imprecise information in heuristic rule-based knowledge and in sensor measurements.

Fuzzy logic can help in designing robust individual behaviour units. Fuzzy logic controllers incorporate heuristic control knowledge and are a convenient choice when a precise linear model of the system to be controlled cannot easily be found. Another advantage of fuzzy logic control is its ability to represent uncertainties, such as vagueness or imprecision, which cannot be handled by probability theory. Fuzzy logic also offers the user great flexibility in combining rules, allowing the combination method that best fits the task to be chosen.


WHAT IS FUZZY LOGIC?

Fuzzy logic is another class of AI, but its history and applications are more recent than those of expert systems (ES). According to George Boole, human thinking and decisions are based on "Yes / No" reasoning, or "1 / 0" logic. Accordingly, Boolean logic was developed, and expert system principles were formulated on the basis of it. It has been argued, however, that human thinking does not always follow crisp "Yes / No" logic, but is often vague, qualitative, uncertain, imprecise or fuzzy in nature.


For example, in terms of "Yes / No" logic, a thinking rule may be:
"IF it is not raining AND the outside temperature is less than 80 F, THEN take a sightseeing trip of more than 100 miles."


In actual thinking it might be:
"IF the weather is good AND the outside temperature is mild, THEN take a long sightseeing trip."


Based on this nature of human thinking, Lotfi Zadeh, a computer scientist at the University of California, Berkeley, originated "fuzzy logic", or "fuzzy set theory", in 1965. In the beginning he was highly criticized by the professional community, but fuzzy logic gradually emerged as an entirely new discipline of AI. The general methodology of reasoning in fuzzy logic and in expert systems, by "IF ... THEN ..." statements or rules, is the same, so such a system is often called a "fuzzy expert system".

Fuzzy logic can help to supplement an ES, and the two are sometimes hybridized to solve complex problems. Fuzzy logic has been successfully applied in process control, modeling, estimation, identification, diagnostics, military science, stock market prediction and more.
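To connect the "IF ... THEN ..." rule style above with the robot-steering application, the short sketch below evaluates two illustrative rules (IF the obstacle is near THEN turn sharply; IF the obstacle is far THEN go straight) with simple membership functions and weighted-average defuzzification. The membership shapes, rule outputs and distances are all assumptions chosen for illustration, not taken from the text:

# Minimal fuzzy-steering sketch in the IF ... THEN ... rule style described above.
def near(d):      # membership of "obstacle is near", distance d in metres
    return max(0.0, min(1.0, (1.0 - d) / 1.0))

def far(d):       # membership of "obstacle is far"
    return max(0.0, min(1.0, (d - 0.5) / 1.0))

def fuzzy_steering(obstacle_distance_m):
    """Two rules: IF near THEN turn 45 deg; IF far THEN steer 0 deg (straight)."""
    w_near, w_far = near(obstacle_distance_m), far(obstacle_distance_m)
    if w_near + w_far == 0.0:
        return 0.0
    # Weighted-average (centroid-like) defuzzification of the two rule outputs.
    return (w_near * 45.0 + w_far * 0.0) / (w_near + w_far)

for d in (0.2, 0.8, 1.6):
    print(f"obstacle at {d:.1f} m -> steer {fuzzy_steering(d):.1f} deg")

The output changes smoothly between "turn sharply" and "go straight" as the obstacle distance varies, which is exactly the graceful handling of imprecise sensor data that motivates fuzzy control of AGVs.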

Robotic Monitoring of Power Systems

INTRODUCTION

Economically effective maintenance and monitoring of power systems to ensure the high quality and reliability of electric power supplied to customers is becoming one of the most significant tasks of today's power industry. It is highly important because, in the case of unexpected failures, both the utilities and the consumers face substantial losses. The ideal power network can be approached by minimizing maintenance cost and maximizing the service life and reliability of existing power networks, but both goals cannot be achieved simultaneously. Timely preventive maintenance can dramatically reduce system failures. Currently, three maintenance methods are employed by utilities: corrective maintenance, scheduled maintenance and condition-based maintenance. The following block diagram shows the important features of the various maintenance methods.

Corrective maintenance dominates in today's power industry. This method is passive, i.e. no action is taken until a failure occurs. Scheduled maintenance on the other hand refers to periodic maintenance carried out at pre-determined time intervals. Condition-based maintenance is defined as planned maintenance based on continuous monitoring of equipment status. Condition-based maintenance is very attractive since the maintenance action is only taken when required by the power system components. The only drawback of condition-based maintenance is monitoring cost. Expensive monitoring devices and extra technicians are needed to implement condition-based maintenance. Mobile monitoring solves this problem.

Mobile monitoring involves the development of a robotic platform carrying a sensor array. This continuously patrols the power cable network, locates incipient failures and estimates the aging status of electrical insulation. Monitoring of electric power systems in real time for reliability, aging status and presence of incipient faults requires distributed and centralized processing of large amounts of data from distributed sensor networks. To solve this task, cohesive multidisciplinary efforts are needed from such fields as sensing, signal processing, control, communications and robotics.

As with any preventive maintenance technology, the effort spent on status monitoring is justified by the reduction in fault occurrence and the elimination of consequent losses due to disruption of electric power and damage to equipment. Moreover, it is a well-recognized fact in the surveillance and monitoring fields that measurement of the parameters of a distributed system has higher accuracy when it is accomplished using distributed sensing techniques. In addition to sensitivity improvement and subsequent reliability enhancement, the use of robotic platforms for power system maintenance has many other advantages, such as replacing human workers in dangerous and highly specialized operations like live-line maintenance.

MOBILE ROBOT PLATFORM

Generally speaking, the mobile monitoring of power systems involves the following issues:
SENSOR FUSION: The aging of power cables begins long before the cable actually fails. There are several external phenomena indicating ongoing aging problems, including partial discharges, hot spots, mechanical cracks and changes of insulation dielectric properties. These phenomena can be used to locate deteriorating cables and estimate their remaining lifetime. If incipient failures can be detected, or the aging process can be predicted accurately, possible outages and the economic losses that follow can be avoided.

In the robotic platform, non-destructive miniature sensors capable of determining the status of power cable systems are developed and integrated into a monitoring system: a video sensor for visual inspection, an infrared thermal sensor for detection of hot spots, an acoustic sensor for identifying partial discharge activity, and a fringing electric field sensor for determining the aging status of electrical insulation. Among these failure phenomena, the most important is partial discharge activity.
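One simple way to fuse these channels is to combine normalised readings into a single per-segment score so the platform can rank cable segments for closer inspection. The weights, segment names and readings below are hypothetical illustration values, not figures from the text; a real system would tune them against known failure data:

# Illustrative sensor-fusion sketch (hypothetical weights and readings).
WEIGHTS = {"thermal": 0.3, "acoustic_pd": 0.4, "fringing_field": 0.2, "visual": 0.1}

def fuse(readings):
    """Weighted sum of normalised (0..1) channel readings."""
    return sum(WEIGHTS[ch] * readings.get(ch, 0.0) for ch in WEIGHTS)

segment_readings = {
    "segment_12": {"thermal": 0.2, "acoustic_pd": 0.9, "fringing_field": 0.4, "visual": 0.0},
    "segment_13": {"thermal": 0.1, "acoustic_pd": 0.1, "fringing_field": 0.2, "visual": 0.0},
}
for seg, readings in sorted(segment_readings.items(), key=lambda kv: -fuse(kv[1])):
    print(f"{seg}: fused score {fuse(readings):.2f}")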

Snake Robot

INTRODUCTION

In the past two decades, disasters are estimated to have been responsible for about 3 million deaths worldwide, 800 million people adversely affected, and property damage exceeding US$50 billion. The earthquake in Turkey in November 1999 left 700 dead and 5,000 injured; many of these deaths were from structural collapse as buildings fell down onto people.

Urban Search and Rescue involves the location, rescue (extrication) and initial medical stabilization of victims trapped in confined spaces. The voids formed when a building collapses are one instance of a confined space. Urban Search and Rescue may be needed in a variety of situations, including earthquakes, hurricanes, tornadoes, floods, fires, terrorist activities and hazardous materials (hazmat) accidents. Currently, a typical search and rescue team is composed of about ten people, including canine handlers and dogs, a paramedic, a structural engineer, and various specialists in handling special equipment to find and extract a victim.

Current state-of-the-art search equipment includes search cameras and listening devices. Search cameras are usually video cameras mounted on a device such as a pole that can be inserted into gaps and holes to look for signs of people. Often a hole is bored into the obstructing walls if a void is suspected to exist on the other side. Thermal imaging is also used; this is especially useful in finding warm bodies that have been coated with dust and debris, effectively camouflaging the victim. The listening devices are highly sensitive microphones that can listen for a person who may be moving or attempting to respond to rescuers' calls. This whole process can take many hours to search one building, and if a person is found, extrication can take even longer.

This paper presents the development of a modular robot system towards USAR applications, as well as the issues that would need to be addressed in order to make such a system practical.

SERPENTINE RESCUE ROBOTS: LEADING APPROACHES

Sensor-Based Online Path Planning

This section presents multisensor-based online path planning of a serpentine robot in the unstructured, changing environment of earthquake rubble during the search for living bodies. The robot presented in this section is composed of six identical segments joined together through two-way, two-degree-of-freedom (DOF) joints enabling yaw and pitch rotation (Fig.), while our prototype mechanism (to be discussed later in this article) is made of ten joints with 1 DOF each.

Configuration of each segment

This robot configuration results in 12 controllable DOF. An ultrasound sensor, used for detecting obstacles, and a thermal camera are located in the first segment (the head). The camera is in a dust-free, anti-shock casing and operates intermittently when needed.

Modified distance transform

The modified distance transform (MDT) is the original distance transform method, modified for the snake robot such that the goal cell is turned into a valley of zero values within which the serpentine robot can nest. Other modifications are also made to render the method online.

" Distance transform is first computed for the line of sight directed towards the intermediate goal, without taking into account sensorial data about obstacles and free space. This is the goal-oriented planning.
" The obstacle cells are superimposed on the cellular workspace. This modification to the original distance transform integrates IR data that represent the obstacles are assigned high values

Surge current protection using superconductors

INTRODUCTION

Damage from a short circuit is a constant threat to any electric power system. Insulation damaged by aging, an accident or a lightning strike can unleash immense fault currents, practically the only limit on their size being the impedance of the system between their location and the power sources. At their worst, faults can exceed the largest current expected under normal load (the nominal current) by a factor of 100, producing mechanical and thermal stresses in proportion to the square of the current's value.

All power system components must be designed to withstand short-circuit stresses for a certain period, determined by the time needed for circuit breakers to activate (20-300 ms). The higher the anticipated fault currents, the higher the equipment and maintenance costs. So there is obviously a big demand for devices that, under normal operating conditions, have negligible influence on the power system but, in the case of a fault, will limit the prospective fault current. A device of this kind is called a fault current limiter.

According to the accumulated intelligence of many utility experts, an ideal fault current limiter would:
(i) Have zero impedance throughout normal operation
(ii) Provide sufficiently large impedance under fault conditions
(iii) Provide rapid detection and initiation of limiting action within less than one cycle or 16ms.

(iv) Provide immediate (half cycle or 8ms) recovery of normal operation after clearing of a fault.
(v) Be capable of addressing two faults within a period of 15 seconds.
Ideal limiters would also have to be compact, lightweight, inexpensive, fully automatic and highly reliable, besides having a long life.

In the past, the customary means of limiting fault current have included artificially raising impedance in the system with air-core reactors or with the high stray impedance of transformers and generators, or splitting power grids artificially to lower the number of power sources that could feed a fault current. But such measures are inconsistent with today's demand for higher power quality, which implies increased voltage stiffness and strongly interconnected grids with low impedance.

What is needed is a device that normally would hardly affect a power system but during a fault would hold the surge current close to the nominal value: that is, a fault current limiter. Until recently, most fault current limiter concepts depended on mechanical means, on the detuning of an L-C resonance circuit, or on the use of strongly non-linear materials other than high-temperature superconductors (HTS). None is without drawbacks.

Superconductors, because of their sharp transition from zero resistance at normal currents to finite resistance at higher current densities, are tailor-made for use in fault current limiters. Equipped with the proper power-control electronics, a superconducting limiter can rapidly detect a surge and take limiting action, and can also immediately recover to normal operation after a fault is cleared.
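The effect of the quench can be illustrated with a simple circuit calculation. During normal operation the superconducting element adds essentially zero impedance; once it quenches, its resistance appears in series with the source impedance and the fault current collapses. The system voltage, source impedance and quenched resistance below are hypothetical values chosen only for illustration:

# Illustrative sketch (hypothetical network values) of resistive fault current limiting.
V_SOURCE = 11e3 / 3**0.5       # assumed 11 kV system, phase voltage, V
Z_SOURCE = 0.4                 # assumed source + line impedance, ohms
R_QUENCHED = 4.0               # assumed resistance of the quenched HTS element, ohms

prospective_fault = V_SOURCE / Z_SOURCE
limited_fault = V_SOURCE / (Z_SOURCE + R_QUENCHED)

print(f"Prospective fault current: {prospective_fault / 1e3:.1f} kA")   # ~15.9 kA
print(f"Limited fault current:     {limited_fault / 1e3:.1f} kA")       # ~1.4 kA

Holding the fault current down to a small multiple of the nominal current in this way is what allows downstream breakers and equipment to be rated far more economically.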

Superconductors lose their electrical resistance below certain critical values of temperature, magnetic field and current density. A simplified phase diagram of a superconductor defines three regions