Sunday 30 August 2009

Series batteries

PARTS AND MATERIALS
Two 6-volt batteries
One 9-volt battery

Actually, any size batteries will suffice for this experiment, but it is recommended to have at least two different voltages available to make it more interesting.
LEARNING OBJECTIVES

How to connect batteries to obtain different voltage levels

SCHEMATIC DIAGRAM




ILLUSTRATION

Numbers and symbols

The expression of numerical quantities is something we tend to take for granted. This is both a good and a bad thing in the study of electronics. It is good, in that we're accustomed to the use and manipulation of numbers for the many calculations used in analyzing electronic circuits. On the other hand, the particular system of notation we've been taught from grade school onward is not the system used internally in modern electronic computing devices, and learning any different system of notation requires some re-examination of deeply ingrained assumptions.
First, we have to distinguish the difference between numbers and the symbols we use to represent numbers. A number is a mathematical quantity, usually correlated in electronics to a physical quantity such as voltage, current, or resistance. There are many different types of numbers. Here are just a few types, for example:

WHOLE NUMBERS:
1, 2, 3, 4, 5, 6, 7, 8, 9 . . .

INTEGERS:
-4, -3, -2, -1, 0, 1, 2, 3, 4 . . .

IRRATIONAL NUMBERS:
π (approx. 3.1415927), e (approx. 2.718281828),
square root of any prime

REAL NUMBERS:
(All one-dimensional numerical values, negative and positive,
including zero, whole, integer, and irrational numbers)

COMPLEX NUMBERS:
3 - j4 , 34.5 ∠ 20°

Different types of numbers find different application in the physical world. Whole numbers work well for counting discrete objects, such as the number of resistors in a circuit. Integers are needed when negative equivalents of whole numbers are required. Irrational numbers are numbers that cannot be exactly expressed as the ratio of two integers, and the ratio of a perfect circle's circumference to its diameter (π) is a good physical example of this. The non-integer quantities of voltage, current, and resistance that we're used to dealing with in DC circuits can be expressed as real numbers, in either fractional or decimal form. For AC circuit analysis, however, real numbers fail to capture the dual essence of magnitude and phase angle, and so we turn to the use of complex numbers in either rectangular or polar form.
If we are to use numbers to understand processes in the physical world, make scientific predictions, or balance our checkbooks, we must have a way of symbolically denoting them. In other words, we may know how much money we have in our checking account, but to keep record of it we need to have some system worked out to symbolize that quantity on paper, or in some other kind of form for record-keeping and tracking. There are two basic ways we can do this: analog and digital. With analog representation, the quantity is symbolized in a way that is infinitely divisible. With digital representation, the quantity is symbolized in a way that is discretely packaged.
You're probably already familiar with an analog representation of money, and didn't realize it for what it was. Have you ever seen a fund-raising poster made with a picture of a thermometer on it, where the height of the red column indicated the amount of money collected for the cause? The more money collected, the taller the column of red ink on the poster.

Systems of numeration
The Romans devised a system that was a substantial improvement over hash marks, because it used a variety of symbols (or ciphers) to represent increasingly large quantities. The notation for 1 is the capital letter I. The notation for 5 is the capital letter V. Other ciphers possess increasing values:
X = 10
L = 50
C = 100
D = 500
M = 1000
If a cipher is accompanied by another cipher of equal or lesser value to the immediate right of it, with no ciphers greater than that other cipher to the right of that other cipher, that other cipher's value is added to the total quantity. Thus, VIII symbolizes the number 8, and CLVII symbolizes the number 157. On the other hand, if a cipher is accompanied by another cipher of lesser value to the immediate left, that other cipher's value is subtracted from the first. Therefore, IV symbolizes the number 4 (V minus I), and CM symbolizes the number 900 (M minus C). You might have noticed that ending credit sequences for most motion pictures contain a notice for the date of production, in Roman numerals. For the year 1987, it would read: MCMLXXXVII. Let's break this numeral down into its constituent parts, from left to right:
M = 1000
+
CM = 900
+
L = 50
+
XXX = 30
+
V = 5
+
II = 2
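
The add/subtract rule described above is mechanical enough to capture in a few lines of code. The following Python sketch (the function name and structure are my own, not part of the original lesson) applies it to the examples given:

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    # A cipher is added to the total unless a larger cipher follows it,
    # in which case it is subtracted (IV = 4, CM = 900).
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN[ch]
        if i + 1 < len(numeral) and ROMAN[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

print(roman_to_int("MCMLXXXVII"))  # 1987, matching the breakdown above
print(roman_to_int("CLVII"))       # 157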
Aren't you glad we don't use this system of numeration? Large numbers are very difficult to denote this way, and the left vs. right / subtraction vs. addition of values can be very confusing, too. Another major problem with this system is that there is no provision for representing the number zero or negative numbers, both very important concepts in mathematics. Roman culture, however, was more pragmatic with respect to mathematics than most, choosing only to develop their numeration system as far as it was necessary for use in daily life.
We owe one of the most important ideas in numeration to the ancient Babylonians, who were the first (as far as we know) to develop the concept of cipher position, or place value, in representing larger numbers. Instead of inventing new ciphers to represent larger numbers, as the Romans did, they re-used the same ciphers, placing them in different positions from right to left. Our own decimal numeration system uses this concept, with only ten ciphers (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) used in "weighted" positions to represent very large and very small numbers.
Each cipher represents an integer quantity, and each place from right to left in the notation represents a multiplying constant, or weight, for each integer quantity. For example, if we see the decimal notation "1206", we know that this may be broken down into its constituent weight-products as such:
1206 = 1000 + 200 + 6
1206 = (1 x 1000) + (2 x 100) + (0 x 10) + (6 x 1)

Each cipher is called a digit in the decimal numeration system, and each weight, or place value, is ten times that of the one to the immediate right. So, we have a ones place, a tens place, a hundreds place, a thousands place, and so on, working from right to left.
Right about now, you're probably wondering why I'm laboring to describe the obvious. Who needs to be told how decimal numeration works, after you've studied math as advanced as algebra and trigonometry? The reason is to better understand other numeration systems, by first knowing the how's and why's of the one you're already used to.
The decimal numeration system uses ten ciphers, and place-weights that are multiples of ten. What if we made a numeration system with the same strategy of weighted places, except with fewer or more ciphers?
The binary numeration system is such a system. Instead of ten different cipher symbols, with each weight constant being ten times the one before it, we only have two cipher symbols, and each weight constant is twice as much as the one before it. The two allowable cipher symbols for the binary system of numeration are "1" and "0," and these ciphers are arranged right-to-left in doubling values of weight. The rightmost place is the ones place, just as with decimal notation. Proceeding to the left, we have the twos place, the fours place, the eights place, the sixteens place, and so on. For example, the following binary number can be expressed, just like the decimal number 1206, as a sum of each cipher value times its respective weight constant:
11010 = 2 + 8 + 16 = 26
11010 = (1 x 16) + (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1)
This can get quite confusing, as I've written a number with binary numeration (11010), and then shown its place values and total in standard, decimal numeration form (16 + 8 + 2 = 26). In the above example, we're mixing two different kinds of numerical notation. To avoid unnecessary confusion, we have to denote which form of numeration we're using when we write (or type!). Typically, this is done in subscript form, with a "2" for binary and a "10" for decimal, so the binary number 11010₂ is equal to the decimal number 26₁₀.
The subscripts are not mathematical operation symbols like superscripts (exponents) are. All they do is indicate what system of numeration we're using when we write these symbols for other people to read. If you see "3₁₀", all this means is the number three written using decimal numeration. However, if you see "3¹⁰", this means something completely different: three to the tenth power (59,049). As usual, if no subscript is shown, the cipher(s) are assumed to be representing a decimal number.
Commonly, the number of cipher types (and therefore, the place-value multiplier) used in a numeration system is called that system's base. Binary is referred to as "base two" numeration, and decimal as "base ten." Additionally, we refer to each cipher position in binary as a bit rather than the familiar word digit used in the decimal system.
Now, why would anyone use binary numeration? The decimal system, with its ten ciphers, makes a lot of sense, being that we have ten fingers on which to count between our two hands. (It is interesting that some ancient Central American cultures used numeration systems with a base of twenty; presumably, they used both fingers and toes to count!) But the primary reason that the binary numeration system is used in modern electronic computers is because of the ease of representing two cipher states (0 and 1) electronically. With relatively simple circuitry, we can perform mathematical operations on binary numbers by representing each bit of the numbers by a circuit which is either on (current) or off (no current). Just like the abacus with each rod representing another decimal digit, we simply add more circuits to give us more bits to symbolize larger numbers. Binary numeration also lends itself well to the storage and retrieval of numerical information: on magnetic tape (spots of iron oxide on the tape either being magnetized for a binary "1" or demagnetized for a binary "0"), optical disks (a laser-burned pit in the aluminum foil representing a binary "1" and an unburned spot representing a binary "0"), or a variety of other media types.
Before we go on to learning exactly how all this is done in digital circuitry, we need to become more familiar with binary and other associated systems of numeration.

Decimal versus binary numeration
Let's count from zero to twenty using four different kinds of numeration systems: hash marks, Roman numerals, decimal, and binary:
System:     Hash Marks                 Roman   Decimal   Binary
-------     ----------                 -----   -------   ------
Zero        n/a                        n/a     0         0
One         |                          I       1         1
Two         ||                         II      2         10
Three       |||                        III     3         11
Four        ||||                       IV      4         100
Five        |||||                      V       5         101
Six         ||||| |                    VI      6         110
Seven       ||||| ||                   VII     7         111
Eight       ||||| |||                  VIII    8         1000
Nine        ||||| ||||                 IX      9         1001
Ten         ||||| |||||                X       10        1010
Eleven      ||||| ||||| |              XI      11        1011
Twelve      ||||| ||||| ||             XII     12        1100
Thirteen    ||||| ||||| |||            XIII    13        1101
Fourteen    ||||| ||||| ||||           XIV     14        1110
Fifteen     ||||| ||||| |||||          XV      15        1111
Sixteen     ||||| ||||| ||||| |        XVI     16        10000
Seventeen   ||||| ||||| ||||| ||       XVII    17        10001
Eighteen    ||||| ||||| ||||| |||      XVIII   18        10010
Nineteen    ||||| ||||| ||||| ||||     XIX     19        10011
Twenty      ||||| ||||| ||||| |||||    XX      20        10100
Neither hash marks nor the Roman system are very practical for symbolizing large numbers. Obviously, place-weighted systems such as decimal and binary are more efficient for the task. Notice, though, how much shorter decimal notation is than binary notation for the same quantities. What takes five bits in binary notation only takes two digits in decimal notation.
This raises an interesting question regarding different numeration systems: how large of a number can be represented with a limited number of cipher positions, or places? With the crude hash-mark system, the number of places IS the largest number that can be represented, since one hash mark "place" is required for every integer step. For place-weighted systems of numeration, however, the answer is found by taking the base of the numeration system (10 for decimal, 2 for binary) and raising it to the power of the number of places. For example, 5 digits in a decimal numeration system can represent 100,000 different integer number values, from 0 to 99,999 (10 to the 5th power = 100,000). 8 bits in a binary numeration system can represent 256 different integer number values, from 0 to 11111111 (binary), or 0 to 255 (decimal), because 2 to the 8th power equals 256. With each additional place position to the number field, the capacity for representing numbers increases by a factor of the base (10 for decimal, 2 for binary).
An interesting footnote to this topic is one of the first electronic digital computers, the Eniac. The designers of the Eniac chose to represent numbers in decimal form, digitally, using a series of circuits called "ring counters" instead of just going with the binary numeration system, in an effort to minimize the number of circuits required to represent and calculate very large numbers. This approach turned out to be counter-productive, and virtually all digital computers since then have been purely binary in design.
To convert a number in binary numeration to its equivalent in decimal form, all you have to do is calculate the sum of all the products of bits with their respective place-weight constants. To illustrate:
Convert 11001101₂ to decimal form:

bits   =     1     1     0     0     1     1     0     1
weight =   128    64    32    16     8     4     2     1    (weights shown in decimal notation)

The bit on the far right side is called the Least Significant Bit (LSB), because it stands in the place of the lowest weight (the one's place). The bit on the far left side is called the Most Significant Bit (MSB), because it stands in the place of the highest weight (the one hundred twenty-eight's place). Remember, a bit value of "1" means that the respective place weight gets added to the total value, and a bit value of "0" means that the respective place weight does not get added to the total value. With the above example, we have:
128₁₀ + 64₁₀ + 8₁₀ + 4₁₀ + 1₁₀ = 205₁₀

If we encounter a binary number with a dot (.), called a "binary point" instead of a decimal point, we follow the same procedure, realizing that each place weight to the right of the point is one-half the value of the one to the left of it (just as each place weight to the right of a decimal point is one-tenth the weight of the one to the left of it). For example:
Convert 101.011₂ to decimal form:

bits   =     1     0     1   .   0     1     1
weight =     4     2     1      1/2   1/4   1/8    (weights shown in decimal notation)
4₁₀ + 1₁₀ + 0.25₁₀ + 0.125₁₀ = 5.375₁₀
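
The same place-weight procedure, including the halving weights to the right of the binary point, can be expressed as a short Python sketch (an illustration of the method above, not part of the original lesson):

def binary_to_decimal(bits: str) -> float:
    # Digits left of the binary point carry weights 1, 2, 4, 8, ...;
    # digits right of it carry weights 1/2, 1/4, 1/8, ...
    whole, _, frac = bits.partition(".")
    total = 0.0
    for i, bit in enumerate(reversed(whole)):
        total += int(bit) * 2 ** i
    for i, bit in enumerate(frac, start=1):
        total += int(bit) * 2 ** -i
    return total

print(binary_to_decimal("11010"))     # 26.0
print(binary_to_decimal("11001101"))  # 205.0
print(binary_to_decimal("101.011"))   # 5.375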

Sunday 23 August 2009

Static Electricity

It was discovered centuries ago that certain types of materials would mysteriously attract one another after being rubbed together. For example: after rubbing a piece of silk against a piece of glass, the silk and glass would tend to stick together. Indeed, there was an attractive force that could be demonstrated even when the two materials were separated.
Now, this was really strange to witness. After all, none of these objects were visibly altered by the rubbing, yet they definitely behaved differently than before they were rubbed. Whatever change took place to make these materials attract or repel one another was invisible.
Some experimenters speculated that invisible "fluids" were being transferred from one object to another during the process of rubbing, and that these "fluids" were able to effect a physical force over a distance. Charles Dufay was one of the early experimenters who demonstrated that there were definitely two different types of changes wrought by rubbing certain pairs of objects together. The fact that there was more than one type of change manifested in these materials was evident by the fact that there were two types of forces produced: attraction and repulsion. The hypothetical fluid transfer became known as a charge.
One pioneering researcher, Benjamin Franklin, came to the conclusion that there was only one fluid exchanged between rubbed objects, and that the two different "charges" were nothing more than either an excess or a deficiency of that one fluid. After experimenting with wax and wool, Franklin suggested that the coarse wool removed some of this invisible fluid from the smooth wax, causing an excess of fluid on the wool and a deficiency of fluid on the wax. The resulting disparity in fluid content between the wool and wax would then cause an attractive force, as the fluid tried to regain its former balance between the two materials.
Postulating the existence of a single "fluid" that was either gained or lost through rubbing accounted best for the observed behavior: that all these materials fell neatly into one of two categories when rubbed, and most importantly, that the two active materials rubbed against each other always fell into opposing categories as evidenced by their invariable attraction to one another. In other words, there was never a time where two materials rubbed against each other both became either positive or negative.
Following Franklin's speculation of the wool rubbing something off of the wax, the type of charge that was associated with rubbed wax became known as "negative" (because it was supposed to have a deficiency of fluid) while the type of charge associated with the rubbing wool became known as "positive" (because it was supposed to have an excess of fluid). Little did he know that his innocent conjecture would cause much confusion for students of electricity in the future!
Precise measurements of electrical charge were carried out by the French physicist Charles Coulomb in the 1780's using a device called a torsional balance measuring the force generated between two electrically charged objects. The results of Coulomb's work led to the development of a unit of electrical charge named in his honor, the coulomb. If two "point" objects (hypothetical objects having no appreciable surface area) were equally charged to a measure of 1 coulomb, and placed 1 meter (approximately 1 yard) apart, they would generate a force of about 9 billion newtons (approximately 2 billion pounds), either attracting or repelling depending on the types of charges involved.
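
As a quick arithmetic check of that figure, Coulomb's law F = kq₁q₂/r² with k ≈ 8.99 × 10⁹ N·m²/C² gives roughly the force quoted above. A minimal Python sketch:

k = 8.99e9      # Coulomb constant, N·m²/C²
q1 = q2 = 1.0   # two point charges of 1 coulomb each
r = 1.0         # separated by 1 meter

force = k * q1 * q2 / r**2
print(f"{force:.2e} N")                      # ~8.99e9 N, about 9 billion newtons
print(f"{force / 4.448:.2e} pounds-force")   # roughly 2 billion pounds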
It was discovered much later that this "fluid" was actually composed of extremely small bits of matter called electrons, so named in honor of the ancient Greek word for amber: another material exhibiting charged properties when rubbed with cloth. Experimentation has since revealed that all objects are composed of extremely small "building-blocks" known as atoms, and that these atoms are in turn composed of smaller components known as particles. The three fundamental particles comprising atoms are called protons, neutrons, and electrons. Atoms are far too small to be seen, but if we could look at one, it might appear something like this:


Even though each atom in a piece of material tends to hold together as a unit, there's actually a lot of empty space between the electrons and the cluster of protons and neutrons residing in the middle.
This crude model is that of the element carbon, with six protons, six neutrons, and six electrons. In any atom, the protons and neutrons are very tightly bound together, which is an important quality. The tightly-bound clump of protons and neutrons in the center of the atom is called the nucleus, and the number of protons in an atom's nucleus determines its elemental identity: change the number of protons in an atom's nucleus, and you change the type of atom that it is. In fact, if you could remove three protons from the nucleus of an atom of lead, you would have achieved the old alchemists' dream of producing an atom of gold! The tight binding of protons in the nucleus is responsible for the stable identity of chemical elements, and the failure of alchemists to achieve their dream.
Neutrons are much less influential on the chemical character and identity of an atom than protons, although they are just as hard to add to or remove from the nucleus, being so tightly bound. If neutrons are added or removed, the atom will still retain the same chemical identity, but its mass will change slightly and it may acquire strange nuclear properties such as radioactivity.
However, electrons have significantly more freedom to move around in an atom than either protons or neutrons. In fact, they can be knocked out of their respective positions (even leaving the atom entirely!) by far less energy than what it takes to dislodge particles in the nucleus. If this happens, the atom still retains its chemical identity, but an important imbalance occurs. Electrons and protons are unique in the fact that they are attracted to one another over a distance. It is this attraction over distance which causes the attraction between rubbed objects, where electrons are moved away from their original atoms to reside around atoms of another object.
Electrons tend to repel other electrons over a distance, as do protons with other protons. The only reason protons bind together in the nucleus of an atom is because of a much stronger force called the strong nuclear force, which has effect only over very short distances. Because of this attraction/repulsion behavior between individual particles, electrons and protons are said to have opposite electric charges. That is, each electron has a negative charge, and each proton a positive charge. In equal numbers within an atom, they counteract each other's presence so that the net charge within the atom is zero. This is why the picture of a carbon atom had six electrons: to balance out the electric charge of the six protons in the nucleus. If electrons leave or extra electrons arrive, the atom's net electric charge will be imbalanced, leaving the atom "charged" as a whole, causing it to interact with charged particles and other charged atoms nearby. Neutrons are neither attracted to nor repelled by electrons, protons, or even other neutrons, and are consequently categorized as having no charge at all.
The process of electrons arriving or leaving is exactly what happens when certain combinations of materials are rubbed together: electrons from the atoms of one material are forced by the rubbing to leave their respective atoms and transfer over to the atoms of the other material. In other words, electrons comprise the "fluid" hypothesized by Benjamin Franklin. The operational definition of a coulomb as the unit of electrical charge (in terms of force generated between point charges) was found to be equal to an excess or deficiency of about 6,250,000,000,000,000,000 electrons. Or, stated in reverse terms, one electron has a charge of about 0.00000000000000000016 coulombs. Being that one electron is the smallest known carrier of electric charge, this last figure of charge for the electron is defined as the elementary charge.
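
These two figures are reciprocals of one another, which a one-line check makes clear:

elementary_charge = 1.6e-19        # approximate charge of one electron, coulombs
electrons_per_coulomb = 6.25e18    # approximate number of electrons in one coulomb

print(elementary_charge * electrons_per_coulomb)  # ~1.0 coulomb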
The result of an imbalance of this "fluid" (electrons) between objects is called static electricity. It is called "static" because the displaced electrons tend to remain stationary after being moved from one material to another. In the case of wax and wool, it was determined through further experimentation that electrons in the wool actually transferred to the atoms in the wax, which is exactly opposite of Franklin's conjecture! In honor of Franklin's designation of the wax's charge being "negative" and the wool's charge being "positive," electrons are said to have a "negative" charging influence. Thus, an object whose atoms have received a surplus of electrons is said to be negatively charged, while an object whose atoms are lacking electrons is said to be positively charged, as confusing as these designations may seem. By the time the true nature of electric "fluid" was discovered, Franklin's nomenclature of electric charge was too well established to be easily changed, and so it remains to this day.

Thursday 13 August 2009

Solar Energy

Solar energy is the cleanest, most abundant, renewable energy source available. And the U.S. has some of the richest solar resources shining across the nation. Today's technology allows us to capture this power in several ways, giving the public and commercial entities flexible options for employing both the heat and light of the sun.
The greatest challenge the U.S. solar market faces is scaling up production and distribution of solar energy technology to drive the price down to be on par with traditional fossil fuel sources.
Solar energy can be produced on a distributed basis, called distributed generation, with equipment located on rooftops or on ground-mounted fixtures close to where the energy is used. Large-scale concentrating solar power systems can also produce energy at a central power plant.
There are four ways we harness solar energy: photovoltaics (converting light to electricity), heating and cooling systems (solar thermal), concentrating solar power (utility scale), and lighting. Active solar energy systems employ devices that convert the sun's heat or light to another form of energy we use. Passive solar refers to special siting, design or building materials that take advantage of the sun's position and availability to provide direct heating or lighting. Passive solar also considers the need for shading devices to protect buildings from excessive heat from the sun.
Solar energy technologies use the sun's energy and light to provide heat, light, hot water, electricity, and even cooling, for homes, businesses, and industry.
There are a variety of technologies that have been developed to take advantage of solar energy. These include:

Sunday 9 August 2009

Thermistors

Thermistors are special solid-state temperature sensors that behave like temperature-sensitive electrical resistors. No surprise, then, that their name is a contraction of "thermal" and "resistor". There are basically two broad types: NTC (Negative Temperature Coefficient), used mostly in temperature sensing, and PTC (Positive Temperature Coefficient), used mostly in electric current control.
A thermistor is a thermally sensitive resistor that exhibits a change in electrical resistance with a change in its temperature. The resistance is measured by passing a small, measured direct current (dc) through it and measuring the voltage drop produced.
The standard reference temperature is the thermistor body temperature at which nominal zero-power resistance is specified, usually 25°C. The zero-power resistance is the dc resistance value of a thermistor measured at a specified temperature with a power dissipation by the thermistor low enough that any further decrease in power will result in not more than 0.1 percent (or 1/10 of the specified measurement tolerance, whichever is smaller) change in resistance.
The resistance ratio characteristic identifies the ratio of the zero-power resistance of a thermistor measured at 25°C to that resistance measured at 125°C.
The zero-power temperature coefficient of resistance is the ratio, at a specified temperature (T), of the rate of change of zero-power resistance with temperature to the zero-power resistance of the thermistor. An NTC thermistor is one in which the zero-power resistance decreases with an increase in temperature. A PTC thermistor is one in which the zero-power resistance increases with an increase in temperature.
The maximum operating temperature is the maximum body temperature at which the thermistor will operate for an extended period of time with acceptable stability of its characteristics. This temperature is the result of internal or external heating, or both, and should not exceed the maximum value specified.
The maximum power rating of a thermistor is the maximum power which a thermistor will dissipate for an extended period of time with acceptable stability of its characteristics.
The dissipation constant is the ratio (in milliwatts per degree C), at a specified ambient temperature, of a change in power dissipation in a thermistor to the resultant body temperature change.
The thermal time constant of a thermistor is the time required for a thermistor to change 63.2 percent of the total difference between its initial and final body temperature when subjected to a step function change in temperature.
The resistance-temperature characteristic of a thermistor is the relationship between the zero-power resistance of a thermistor and its body temperature.
The temperature-wattage characteristic of a thermistor is the relationship, at a specified ambient temperature, between the thermistor temperature and the applied steady-state wattage.
The current-time characteristic of a thermistor is the relationship, at a specified ambient temperature, between the current through a thermistor and time, upon application or interruption of voltage to it.
The stability of a thermistor is the ability of a thermistor to retain specified characteristics after being subjected to designated environmental or electrical test conditions.
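
To see how the resistance ratio and zero-power temperature coefficient relate for a typical NTC part, here is a small Python sketch using the common beta-parameter model R(T) = R25·exp(B·(1/T − 1/T25)). The model and the example values (R25 = 10 kΩ, B = 3950 K) are illustrative assumptions, not taken from the definitions above:

import math

R25 = 10_000.0   # assumed zero-power resistance at the 25 °C reference, ohms
B   = 3950.0     # assumed beta constant, kelvin
T25 = 298.15     # 25 °C expressed in kelvin

def resistance(temp_c: float) -> float:
    # Beta-parameter approximation of an NTC resistance-temperature characteristic.
    t = temp_c + 273.15
    return R25 * math.exp(B * (1.0 / t - 1.0 / T25))

ratio = resistance(25.0) / resistance(125.0)   # resistance ratio characteristic
alpha_25 = -B / T25**2                         # zero-power temperature coefficient, per kelvin

print(f"R25/R125 ratio : {ratio:.1f}")
print(f"alpha at 25 °C : {alpha_25 * 100:.2f} %/°C")   # negative, as expected for an NTC device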

Friday 7 August 2009

Coming Soon

Green Energy





What do you think about this?

Wednesday 5 August 2009

Liquid Crystal Phases

The liquid crystal state is a distinct phase of matter observed between the crystalline (solid) and isotropic (liquid) states. There are many types of liquid crystal states, depending upon the amount of order in the material. This section will explain the phase behavior of liquid crystal materials.

Nematic Phases
The nematic liquid crystal phase is characterized by molecules that have no positional order but tend to point in the same direction (along the director). In the following diagram, notice that the molecules point vertically but are arranged with no particular order.

Smectic Phases
The word "smectic" is derived from the Greek word for soap. This seemingly ambiguous origin is explained by the fact that the thick, slippery substance often found at the bottom of a soap dish is actually a type of smectic liquid crystal.
Many compounds are observed to form more than one type of smectic phase. As many as 12 of these variations have been identified, however only the most distinct phases are discussed here.
In the smectic-A mesophase, the director is perpendicular to the smectic plane, and there is no particular positional order in the layer. Similarly, the smectic-B mesophase orients with the director perpendicular to the smectic plane, but the molecules are arranged into a network of hexagons within the layer. In the smectic-C mesophase, molecules are arranged as in the smectic-A mesophase, but the director is at a constant tilt angle measured normally to the smectic plane.

The cholesteric (or chiral nematic) liquid crystal phase is typically composed of nematic mesogenic molecules containing a chiral center which produces intermolecular forces that favor alignment between molecules at a slight angle to one another. This leads to the formation of a structure which can be visualized as a stack of very thin 2-D nematic-like layers with the director in each layer twisted with respect to those above and below. In this structure, the directors actually form in a continuous helical pattern about the layer normal as illustrated by the black arrow in the following figure and animation. The black arrow in the animation represents director orientation in the succession of layers along the stack.



Broadband

The commercial promise in the 1990's was that broadband would take over, and the sellers would make a fortune. It hasn't happened so far. I don't think it will until the price changes.
Yes, the speed of broadband is attractive. However, for the phone company to make more money than a plain phone line, it needs to charge more. Basically, for most people, broadband isn't worth three or four times the price of a phone line. What can they get over broadband that is worth the extra cost? Films? Broadband costs make that pointless. The bandwidth of a single video tape exceeds the allocation for a month of home broadband.
Broadband often isn't worth having in Australia because of download limits and pricing. Most low end DSL pricing allows only 500 MB a month downloaded, at a price approximating a phone connection and dialup internet service with a similar download capacity.
Cable (for TV) rollout stalled many years ago, with a very limited number of connections. No real increase in subscriber numbers appears likely, while the cable companies lose money. So for new users, the most likely access is DSL rather than cable. For many years, cable companies were uninterested in providing internet hookups, and offered only TV.
The big win for broadband is speed. It is quicker than dialup, provided the other end is working quickly. However, half the sites you connect to are slow from their end, not at your end.
The always on nature (at least when it isn't having technical hitches) is also of use. I can see internet cafes, small business offices and similar sized enterprises finding it of use. Pricing plans for large quantities of data reduce the cost per byte, so sharing the line makes sense (of course, then the speed may drop).
Always on connections are just a way of serving up viruses, and being attacked by crackers. You need to weigh the increased risk against the advantages.
ADSL here is often reported as flakey and unreliable, with two hour outages reported. I have no idea whether this is accurate or typical, but I'd like to hear good things about it prior to paying for it myself.
Single use DSL connections run something over A$50 for 500MB a month. That is less than an hour of downloading. Less than a CD worth of data. From the takeup rates, it looks like many Australians decided they didn't download that much very often (email and news feeds will not need very much). When you start getting 3GB or more, prices more than double, which seems to be worthwhile only to people with an interest in multimedia downloads. In 2002, 70,000 Australian businesses ran broadband (maybe 10%), as did 233,700 homes, while another source says 363,500 subscribers. Yet another says 173,200 cable and 139,900 DSL subscribers. These figures are increasing faster than economic growth, and it will be interesting to see when they stabilise.
It seems to me that at present there are few compelling applications or content to justify paying a premium for broadband.
Another problem with DSL is that it simply isn't universally available, and never will be. If you are distant from the phone exchange it doesn't work. It also doesn't work with phone services connected via RIM (remote integrated multiplexor, a little curbside mini-exchange used in congested areas). It doesn't work with pair gain wiring, where an existing line has been split between two subscribers, as is also common in areas short of connections.
DSL isn't portable. If you work from two locations, like an office plus your home, you can't transfer your DSL account between them the way you can a dialup connection. DSL is also no use when you travel with a computer, so it is no use when you go on holidays.

Tuesday 4 August 2009

Introduction To IC

The processor (CPU, for Central Processing Unit) is the computer's brain. It allows the processing of numeric data, meaning information entered in binary form, and the execution of instructions stored in memory.
The first microprocessor (Intel 4004) was invented in 1971. It was a 4-bit calculation device with a speed of 108 kHz. Since then, microprocessor power has grown exponentially. So what exactly are these little pieces of silicon that run our computers?

Operation
The processor (called the CPU, for Central Processing Unit) is an electronic circuit that operates at the speed of an internal clock, thanks to a quartz crystal that, when subjected to an electric current, sends pulses called "peaks". The clock speed (also called the cycle rate) corresponds to the number of pulses per second, expressed in hertz (Hz). Thus, a 200 MHz computer has a clock that sends 200,000,000 pulses per second. The clock frequency is generally a multiple of the system frequency (FSB, Front-Side Bus), meaning a multiple of the motherboard frequency.
With each clock peak, the processor performs an action that corresponds to an instruction or a part thereof. A measure called CPI (Cycles Per Instruction) gives a representation of the average number of clock cycles required for a microprocessor to execute an instruction. A microprocessor's power can thus be characterized by the number of instructions per second that it is capable of processing. MIPS (millions of instructions per second) is the unit used and corresponds to the processor frequency divided by the CPI.
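A small Python sketch of that relationship (the clock speed and CPI values are illustrative, not from any particular processor):

clock_hz = 200_000_000   # a 200 MHz clock sends 200,000,000 pulses per second
cpi = 2.0                # assumed average number of clock cycles per instruction

mips = (clock_hz / cpi) / 1_000_000   # millions of instructions per second
print(f"{mips:.0f} MIPS")             # 100 MIPS for this example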

Instructions
An instruction is an elementary operation that the processor can accomplish. Instructions are stored in the main memory, waiting to be processed by the processor. An instruction has two fields:
the operation code, which represents the action that the processor must execute;
the operand code, which defines the parameters of the action. The operand code depends on the operation. It can be data or a memory address.
The number of bits in an instruction varies according to the type of data (between 1 and 4 8-bit bytes).
Instructions can be grouped by category, of which the main ones are:
Memory Access: accessing the memory or transferring data between registers.
Arithmetic Operations: operations such as addition, subtraction, division or multiplication.
Logic Operations: operations such as AND, OR, NOT, EXCLUSIVE OR (XOR), etc.
Control: sequence controls, conditional connections, etc.

Registers
When the processor executes instructions, data is temporarily stored in small, local memory locations of 8, 16, 32 or 64 bits called registers. Depending on the type of processor, the overall number of registers can vary from about ten to many hundreds.
The main registers are:
the accumulator register (ACC), which stores the results of arithmetic and logical operations;
the status register (PSW, Processor Status Word), which holds system status indicators (carry digits, overflow, etc.);
the instruction register (RI), which contains the current instruction being processed;
the ordinal counter (OC or PC for Program Counter), which contains the address of the next instruction to process;
the buffer register, which temporarily stores data from the memory.

Cache Memory
Cache memory (also called buffer memory) is local memory that reduces waiting times for information stored in the RAM (Random Access Memory). In effect, the computer's main memory is slower than that of the processor. There are, however, types of memory that are much faster, but which have a greatly increased cost. The solution is therefore to include this type of local memory close to the processor and to temporarily store the primary data to be processed in it. Recent model computers have many different levels of cache memory:
Level one cache memory (called L1 Cache, for Level 1 Cache) is directly integrated into the processor. It is subdivided into two parts:
the first part is the instruction cache, which contains instructions from the RAM that have been decoded as they came across the pipelines.
the second part is the data cache, which contains data from the RAM and data recently used during processor operations. Level 1 caches can be accessed very rapidly. Access waiting time approaches that of internal processor registers.
Level two cache memory (called L2 Cache, for Level 2 Cache) is located in the case along with the processor (in the chip). The level two cache is an intermediary between the processor, with its internal cache, and the RAM. It can be accessed more rapidly than the RAM, but less rapidly than the level one cache.
Level three cache memory (called L3 Cache, for Level 3 Cache) is located on the motherboard. All these levels of cache reduce the latency time of various memory types when processing or transferring information. While the processor works, the level one cache controller can interface with the level two controller to transfer information without impeding the processor. As well, the level two cache interfaces with the RAM (level three cache) to allow transfers without impeding normal processor operation.

Thought Processor INTEL

The Intel CE 2110 Media Processor combines a 1 GHz Intel XScale processing core with powerful audio-video processing, graphics, and I/O components. A single-chip solution is important, as consumer electronics manufacturers need to accelerate the development and production process.
The MPEG-2 and H.264 video codecs maximize system-level performance by enabling the Intel XScale processor core to be used exclusively for applications. In addition to the Intel XScale processor core, this highly integrated consumer electronics platform building block includes an Intel Micro Signal Architecture DSP core for audio codecs, a 2D/3D graphics accelerator, hardware accelerators for encryption and decryption, comprehensive peripheral interfaces, analog and digital input/output, and a transport interface for ATSC/DVB input.



Laser wafer marking tracks IC production

The making of semiconductor integrated-circuit (IC) chips--at one time a labor-intensive operation in which silicon wafers were hand-carried from machine to machine, aligned by eye through a microscope, and tracked by careful technicians--has become a highly automated process where milliseconds count and glitches cannot be tolerated. If a wafer goes astray, or information on the number of manufacturing steps it has gone through disappears, an entire production line may have to be idled while troubleshooters are called in to figure out what went wrong.
To prevent such problems and to keep tabs on the manufacturing process itself, most modern IC fabrication facilities ("fabs") require that each wafer be labeled with its own identification (ID) mark in the form of a string of characters, a barcode, or a two-dimensional (2-D) matrix of pixels. Fab equipment then automatically tracks the wafer through its manufacturing stages to the point at which it is diced into individual IC chips. Any inspection data accumulated along the way can be unambiguously tied to the proper wafer.
Laser marking, with its combination of speed, permanence, and reliability, has become the standard means of marking wafers. Although the technology has been around since the 1970s, it has, through steady improvement and the advent of new applications, continued to serve the semiconductor industry.

IC makers weigh improvements

Because silicon has a higher absorption for green light than for near-IR, most manufacturers of laser wafer markers now offer frequency-doubled solid-state lasers as an option--or, as in the case of NEC Corp., as standard equipment. The disadvantages of a frequency-doubled laser--lower power and higher cost--can be offset by improved marking performance resulting from the fact that energy absorption of the doubled light occurs closer to the wafer surface. But the choice between green and near-IR is not clear-cut. Because each IC chip maker has developed its own proprietary methods, what works well at one fab may not pass muster at another.
In the case of backside die marking, the consequence of silicon's lower near-IR absorption is more obvious. Although opaque to the eye, a silicon wafer transmits enough of the Nd:YAG laser's 1064-nm fundamental wavelength that a small amount of light can reach all the way to the underside of the die itself, potentially causing damage. But damage of this sort "is uncommon," says Downes of General Scanning. He notes that of all the chips being manufactured at fabs where backside marking is used, only one type of chip at one fab suffered performance degradation due to underside irradiation. Even so, General Scanning offers optional frequency doubling of its lasers, he says.
When operating at high power and slow scan speeds, laser wafer-marking systems are capable of digging pits and trenches in silicon with depths from a few to more than 100 µm, called "hard" marks. But this sort of marking creates particles that contaminate and ruin chips. In addition, when used for backside die ID, hard marking can produce raised kerfs up to 30 µm high that prevent a finished chip from adequately contacting its heat sink.



Solderless Flip Chip Using Polymer Flip Chip Processes

A reliable and manufacturable flip chip infrastructure continues to develop worldwide. Significant advances in equipment, processes for flip chip interconnect, and long term reliability of the flip chip assemblies are causing a shift from chip and wire interconnect to non-packaged direct chip attach.
Miniaturized packages, higher density electronics and higher speed are the motivating forces for the true chip size, low inductance electrical interconnection that flip chip offers. As shown in Table 1, the ability to form a high input-output (I/O) packaging concept with low contact resistance, low capacitance, and low lead inductance will drive the microelectronics industry conversion from chip and wire to flip chip.
Flip chip interconnect technology will become the ultimate surface mount (SMT) technique in the 21st century, replacing BGA, μBGA, and CSP, which are best categorized as transition packages. All of these will use flip chip for electrically attaching the integrated circuit (IC) to the package substrate, until cost and space needs require eliminating the package altogether.

The three basic technologies underlying most of the hundreds of flip chip interconnect techniques are anisotropic materials, metallic bump technology, and isotropic conductive polymers. The process and reliability information which follows here focuses on the isotropic conductive polymer approach, or PFC® process. This process uses silver (Ag) filled thermoset and thermoplastic polymers, in combination with stencil printing processes, to form polymer bump interconnects for flip chip integrated circuit (IC) devices.
The following discussion of under-bump metallization (UBM) over aluminum, bump formation processes, and overall reliability of flip chip devices compares the relative performance of the thermoset and thermoplastic polymers which form the primary electrical interconnection.

Sputtered UBM/Electroplated Solder

Electroplating of solder was developed as a less costly and more flexible method than evaporation. The UBM is typically an adhesion layer of titanium tungsten (TiW), a copper wetting layer, and a gold protective layer. The UBM is sputtered or evaporated over the entire surface of the wafer, providing a good conduction path for the electroplating currents.
Bumping begins with photopatterning and plating a copper minibump on the bump sites. This thick copper allows the use of high-tin eutectic solders without consuming the thin copper UBM layer. A second photopatterning and plating of the solder alloy over the minibump forms the solder bump. The photoresist is then removed from the wafer and the bump is reflowed to form a sphere.
Electroplated bumping processes generally are less costly than evaporated bumping. Electroplating in general has a long history and processes are well characterized. The UBM adheres well to the bond pads and passivation, protecting the aluminum pads. Plating can allow closer bump spacing (35 to 50 microns) than other methods of bump formation. Electroplating has become more popular for high bump count (>3,000) chips because of its small feature size and precision.
Plating bath solutions and current densities must be carefully controlled to avoid variations in alloy composition and bump height across the wafer. Plating generally is limited to binary alloys.

Solder Bump Flip Chip

This is the second in a series of flip chip tutorials intended for flip chip users and potential users. Tutorial #2 presents an overview of solder bump flip chip bumping and assembly processes. Concurrently, FlipChips Dot Com’s Technology News Updates present industry experts describing the newest developments in their fields; our Literature and Photo pages give supplemental material.

GENERAL
Flip chip assembly by means of solder connection to the bond pads was the first commercial use of flip chip, dating to IBM's introduction of flip chip in the 1960's. Solder bump has the longest production history, the highest current and cumulative production volumes, and the most extensive reliability data of any flip chip technology. Delco developed their solder bump processes in the 1970's; Delco Delphi now assembles over 300,000 solder bumped die per day for automotive electronics.
More recent solder bump flip chip process variations have lowered the manufacturing cost, widened flip chip applicability, and made solder bumped die and wafers available from several suppliers to the commercial market. This introductory survey discusses the operations performed in solder bumping and assembly, and describes several of the solder bump processes now commercially available. The references listed at the end of the tutorial provide details.

PROCESS OVERVIEW
The solder bump flip chip process may be considered as four sequential steps: preparing the wafer for solder bumping, forming or placing the solder bumps, attaching the bumped die to the board, substrate, or carrier, and completing the assembly with an adhesive underfill.

Under-Bump Metallization
The first step in solder bumping is to prepare the semiconductor wafer bumping sites on the bond pads of the IC's. This preparation may include cleaning, removing insulating oxides, and providing a pad metallurgy that will protect the IC while making a good mechanical and electrical connection to the solder bump and the board.
This under-bump metallization (UBM) generally consists of successive layers of metal with functions described by their names. The "adhesion layer" must adhere well to both the bond pad metal and the surrounding passivation, providing a strong, low-stress mechanical and electrical connection. The "diffusion barrier" layer limits the diffusion of solder into the underlying material.

The Coming of Copper UBM

Several developments in the past few years have solved the numerous problems associated with using copper metal in place of aluminum as the IC interconnect metal. Copper is about three times more conductive than aluminum, and allows higher frequencies to be used with smaller line widths. Many fabricators are converting to copper not just for speed, but also for cost reduction. Thinner conductors allow closer spacing and smaller chips. The switch to copper allows many times more dies per wafer, and this is where the savings come from.
Since copper is much more compatible with bump metals than aluminum, the transition to copper is expected to boost flip chip technology, possibly by eliminating the UBM step. It remains to be seen what the final finish will be for copper ICs. The industry will continue to use wire bonding as the main interconnect method, so the final pad must be compatible with gold wire bonding.
IBM has indicated that its copper chips will have aluminum as the final pad layer to accommodate wire bonding, and this may become standard practice. However, aluminum can be removed easily, without affecting the copper underneath. Aluminum is an amphoteric metal that can be dissolved in both acid and base. Dilute caustic (sodium hydroxide) quickly removes aluminum. Many other reagents also can be used.
Thus, even if new copper chips come with an aluminum finish, a simple aqueous washing step will unveil the desired copper layer. In fact, the aluminum over copper would serve to protect the copper from oxidation. Gold over nickel could also be used on copper pads similar to PWB common finishes. This too would be a very good surface for most bumps.
Conductive adhesives would receive a real boost in the switch to copper because none are compatible with aluminum. Some of the conductive adhesives form reasonably stable junctions with bare copper, especially those using an oxide-penetrating mechanism. Even for those adhesives which are not suitable for bare copper, simple UBM methods could be used. Silver and other finishes can be applied to copper by electroless, maskless plating.
The advent of copper-based chip interconnection metallurgy undoubtedly will simplify FC fabrication in the near future. It will probably be possible to directly bond the copper pads with conductive adhesives. This simple processing ability would have a great impact on cost and infrastructure issues by eliminating UBM and maybe the bumping step. Assemblers could run the entire FC preconditioning and bonding process.

Critical Issues of Wafer Level Chip Scale Package (WLCSP)

ABSTRACT: Some of the critical issues of wafer level chip scale package (WLCSP) are mentioned and discussed in this investigation. Emphasis is placed on the cost analysis of WLCSP through the important parameters such as wafer-level redistribution, wafer bumping, and wafer-level underfilling. Useful and simple equations in terms of these parameters are also provided. Only solder-bumped WLCSPs with pad redistribution are considered in this study.

INTRODUCTION: There are at least two major reasons why directly attaching the solder bumped flip chip on organic substrates is not popular yet [1, 2]. Because of the thermal expansion mismatch between the silicon chip and the epoxy PCB, underfill encapsulant is usually needed for solder joint reliability. However, due to the underfill operation, the manufacturing cost is increased and the manufacturing throughput is reduced. In addition, the rework of an underfilled flip chip on PCB is very difficult, if not impossible.
The other reason is because the pitch and size of the pads on the peripheral-arrayed chips are very small and pose great demands on the supporting PCB. The high-density PCBs with sequential build-up circuits connected through microvias are not commonly available at reasonable cost yet.
Meantime, a new class of packaging called wafer level chip scale package (WLCSP) provides a solution to these problems [1 – 15]. There are many different kinds of WLCSP; for example, eight different companies' WLCSPs (ChipScale, EPIC, FCT, Fujitsu, Mitsubishi, National Semiconductor, Sandia National Laboratories, and ShellCase) are reported in [2] and six different companies' WLCSPs (EPS/APTOS, Amkor/Anam, Hyundai, FormFactor, Tessera, and Oxford) are reported in [1]. Just like many other new technologies, WLCSP faces a number of critical issues, including:

The infrastructure of WLCSP is not well established
The standard of WLCSP is not well established
WLCSP expertise is not commonly available
Bare wafer is not commonly available
Bare wafer handling is delicate
High cost for poor-yield IC wafers
Wafer bumping is still too costly
High cost for low wafer-bumping yield, especially for high-cost dies
Wafer-level redistribution is still too costly
High cost for low wafer-level redistribution yield, especially for high-cost dies
Troubles with System Makers if the die shrinks
Test at speed and burn-in at high temperature on a wafer are difficult
Single-point touch-up on the wafer is difficult
PCB assembly of WLCSP is more difficult
Solder joint reliability is more critical
Alpha particles produce soft errors by penetrating through the lead-bearing solder on WLCSP
Impact of lead-free solder regulations on WLCSP
Who should do the WLCSP? IC Foundries or Bump Houses?
What are the cost-effective and reliable WLCSPs and for what IC devices?
How large is the WLCSP market?
What is the life cycle of WLCSP?

WLCSP COSTS

Since 100% perfect wafers cannot be made at high volume today, the true IC chip yield (YT) plays the most important role in cost analysis. Also, the physically possible number of undamaged chips (Nc) stepped from a wafer is needed for cost analysis, since (YTNc) is the number of truly good die on a wafer. Nc is given by [1, 2, 16]
where
A = xy (2)
and
In Equations (1) – (3), x and y are the dimensions of a rectangular chip (in millimeters, mm) with x no less than y; q is the ratio between x and y; f is the wafer diameter (mm); and A is the area of the chip (in square millimeters, mm²). For example, for a 200 mm wafer with A = 10 x 10 = 100 mm², Nc ≈ 255.

Wafer Redistribution Costs

Wafer-level redistribution is the heart of the WLCSPs. The cost of wafer-level redistribution is affected by the true yield (YT) of the IC chip, the wafer-level redistribution yield (YR), and the good die cost (CD). The actual wafer-level redistribution cost per wafer (CR) is:
CR = CWR + (1 – YR) × YT × Nc × CD     (4)
where CWR is the wafer-level redistribution cost per wafer (ranging from $50 to $200), YR is the wafer-level redistribution yield per wafer, CD is the good die cost (not the cost of an individual die on the wafer), Nc is given in Equation (1), and YT is the true IC chip yield after at-speed/burn-in system tests (or individual die yield). Again, it can be seen that the actual wafer-level redistribution cost per wafer depends not only on the wafer-level redistribution cost per wafer but also on the true IC chip yield per wafer, wafer-level redistribution yield per wafer, and good die cost.
Wafer-level redistribution yield (YR) plays a very important role in WLCSP. The wafer-level redistribution yield loss (1 – YR) could be due to: (1) more process steps; (2) wafer breakage; (3) wafer warping; (4) process defects such as spots of contamination or irregularities on the wafer surface; (5) mask defects such as spots, holes, inclusions, protrusions, breaks, and bridges; (6) feature-size distortions; (7) pattern mis-registration; (8) lack of resist adhesion; (9) over-etch; (10) undercutting; (11) incomplete etch; and (12) wrong materials. It should be noted that wafer-level redistribution is not reworkable. It has to be right the first time; otherwise, someone has to pay for it!
The uses of Equations (1) and (4) are shown in the following examples. If the die size on a 200 mm wafer is 100 mm2, the true IC chip yield per wafer is 80% (since the importance of YT has been shown in [16, 17], only one value of YT will be considered in this study), the wafer-level redistribution yield per wafer is 90%, the wafer-level redistribution cost per wafer is $100, and the good die cost is $100 (e.g., microprocessors), then from Equation (1), Nc = 255, and from Equation (4), the actual wafer-level redistribution cost per wafer is $2140. For the same size of wafer, if the die cost is $5 (e.g., memory devices), then the actual wafer-level redistribution cost per wafer is $202. It is noted that in both cases the actual wafer-level redistribution cost per wafer is much higher than the wafer-level redistribution cost (CWR = $100)!
On the other hand, if the wafer-level redistribution yield is increased from 90% to 99%, then the actual cost of redistributing the microprocessor wafer is reduced from $2140 to $304 and that of redistributing the memory wafer is reduced from $202 to $110.20. Thus, wafer-level redistribution yield plays an important role in the cost of wafer-level redistribution, and the wafer-level redistribution houses should strive to make YR > 99%, especially for expensive good dies.
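As a quick sanity check on Equation (4), the following few lines of Python (a minimal sketch; the function and variable names are mine, not from the cited references) reproduce the numbers above:

# Sketch of Equation (4): actual wafer-level redistribution cost per wafer.
# CR = CWR + (1 - YR) * YT * Nc * CD
def redistribution_cost(cwr, yr, yt, nc, cd):
    return cwr + (1.0 - yr) * yt * nc * cd

print(redistribution_cost(100, 0.90, 0.80, 255, 100))  # ~ 2140 (microprocessor, YR = 90%)
print(redistribution_cost(100, 0.99, 0.80, 255, 100))  # ~ 304  (microprocessor, YR = 99%)
print(redistribution_cost(100, 0.90, 0.80, 255, 5))    # ~ 202  (memory, YR = 90%)
print(redistribution_cost(100, 0.99, 0.80, 255, 5))    # ~ 110.20 (memory, YR = 99%)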

Wafer Bumping Costs

Wafer bumping is the heart of solder-bumped WLCSPs. The cost of wafer bumping is affected by YT, CD, YR and the wafer-bumping yield (YB). The actual wafer bumping cost per wafer (CB) is:
CB = CWB + (1 – YB)YRYTNcCD    (5)
where CWB is the wafer bumping cost per wafer (ranging from $25 to $250), YB is the wafer-bumping yield per wafer, YR is the wafer-level redistribution yield per wafer, CD is the good die cost, Nc is given in Equation (1), and YT is the true IC chip yield after at-speed/burn-in system tests (or individual die yield). Again, it can be seen that the actual wafer bumping cost per wafer depends not only on the wafer-bumping cost per wafer but also on the true IC chip yield per wafer, wafer-bumping yield per wafer, good die cost, and wafer-level redistribution yield per wafer.
Just like YR, the wafer-bumping yield (YB) plays a very important role in wafer bumping. The wafer-bumping yield loss (1 – YB) could be due to: (1) wrong process; (2) different materials; (3) bump height too tall or too short; (4) insufficient shear strength; (5) uneven shear strength; (6) broken wafers or dies; (7) solder bridging; (8) damaged bumps; (9) missing bumps; and (10) scratches on the wafer.
For the previous example, if the wafer-bumping yield per wafer is 90% and the wafer-bumping cost per wafer is $120, then the actual wafer-bumping costs per (microprocessor) wafer are, respectively, $1956 if YR = 90% and $2139.60 if YR = 99%, and the actual wafer-bumping costs per (memory) wafer are, respectively, $211.80 if YR = 90% and $220.98 if YR = 99%. Again, it should be noted that the actual wafer-bumping cost per wafer is much higher than the wafer-bumping cost (CWB = $120).
On the other hand, if the wafer-bumping yield is increased from 90% to 99%, then the actual costs of bumping the microprocessor wafer are, respectively, $303.60 if YR = 90% and $321.96 if YR = 99%, and the actual costs of bumping the memory wafer are, respectively, $129.18 if YR = 90% and $130.10 if YR = 99%. Thus, wafer-bumping yield plays an important role in the cost of wafer bumping, and the wafer-bumping houses should strive to make YBYR > 99%, especially for expensive good dies. If there is no wafer-level redistribution, then there is no wafer-level redistribution yield loss, i.e., YR = 1, and Equation (5) reduces to CB = CWB + (1 – YB)YTNcCD.
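Equation (5) can be checked the same way. The short sketch below (again, the names are mine) reproduces the bumping-cost figures, and setting yr = 1.0 corresponds to the no-redistribution case just mentioned:

# Sketch of Equation (5): actual wafer-bumping cost per wafer.
# CB = CWB + (1 - YB) * YR * YT * Nc * CD
def bumping_cost(cwb, yb, yr, yt, nc, cd):
    return cwb + (1.0 - yb) * yr * yt * nc * cd

print(bumping_cost(120, 0.90, 0.90, 0.80, 255, 100))  # ~ 1956 (microprocessor, YB = 90%, YR = 90%)
print(bumping_cost(120, 0.99, 0.99, 0.80, 255, 100))  # ~ 321.96 (microprocessor, YB = 99%, YR = 99%)
print(bumping_cost(120, 0.90, 1.00, 0.80, 255, 100))  # ~ 2160 (no redistribution step: YR = 1)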

SUMMARY

More than 20 different critical issues of WLCSP have been mentioned. The most important issue (cost) of WLCSP has been analyzed in terms of the true IC chip yield, wafer-level redistribution yield, wafer-bumping yield, wafer-level underfill yield, and die size and cost. Also, useful equations in terms of these parameters have been presented and demonstrated through examples. Some important results are summarized as follows.
IC chip yield (YT) plays the most important role in WLCSP. If YT is low for a particular IC device, then it is not cost-effective to house the IC with WLCSP, unless it is compensated for by performance, density, and form factor.
Wafer-level redistribution yield (YR) plays the second most important role in WLCSP. Since this is the first post-fab wafer processing step after the IC FAB, the wafer-level redistribution houses should strive to make YR > 99% (99.9% is preferred). Otherwise, it will make the subsequent steps very expensive by wasting material and processing on the damaged dies.
Wafer-bumping yield (YB) plays the third most important role in WLCSP. The wafer bumping house should strive to make YRYB > 99% (99.9% is preferred) to minimize the hidden cost, since they cannot afford to damage the already redistributed good dies.
From cost and process points of view, wafer-level underfill is not a good idea for solder-bumped flip chip on low-cost substrates.

Sunday 2 August 2009

The basis of Faraday's law

Faraday's greatest contribution to physics was to show that a voltage, E, is generated by a coil of wire when the magnetic flux, Φ, enclosed by it changes

E = N×dΦ/dt volts
where N is the number of turns.
(Vector quantities express this more rigorously, but Faraday was even less of a maths fan than me.) The flux may change because

A nearby permanent magnet is moving about.
The coil rotates with respect to the magnetic field.
The coil is wound on a core whose effective permeability changes.
The coil is the secondary winding on a transformer where the primary current is changing.
In electric motors and generators you will usually have more than one of these causes at the same time. It doesn't matter what causes the change; the result is an induced voltage, and the faster the flux changes the greater the voltage.

The effect of coil current
OK, that's all clear enough, but there is one other reason for alteration to the flux: current flow in the coil. Hans Christian Oersted discovered that an electric current can produce a magnetic field. The more current you have the more flux you generate. That, too, is easy enough to grasp. What needs a firm intellectual grip is to appreciate that Faraday's Law does not stop operating just because you have current flowing in the coil. When the coil current varies then that will alter the flux and, says Faraday, if the flux changes then you get an induced voltage. This merry-go-round between current, flux and voltage lies at the heart of electromagnetism. Calculating the sequence of operation just described is quite easy.

The 'flip side to Faraday'
Now let's spin our fairground ride in the other direction. Instead of getting an induced voltage by putting in a current we'll put a voltage across the coil and see what happens. Normally, if you put several volts across any randomly arranged bit of wire then what will happen will be a flash and a bang; the current will follow Ohm's law and (unless the wire is very long and thin) there won't be enough resistance to prevent fireworks. It's a different story when the wire is wound into a coil. If the current increases then we get flux build up which induces a voltage of its own. The sign of this induced voltage is always such that the voltage will be positive if the current into the coil increases. We say that the induced voltage will oppose the externally applied voltage which made the current change (Lenz's law). This creates a limit to the rate of rise of the current and prevents (at least temporarily) the melt-down we get without coiling.
Calculating this 'flip side to Faraday' is also easy; we take his law in its differential form above and integrate:

Φ = ( ∫E.dt ) / N webers
where E is the externally applied voltage and N is the number of turns.
We'll call this the integral form of Faraday's law.
Inductor with AC applied
Let's apply a sinusoidal voltage, frequency f, RMS amplitude a -
E = (√2)a.sin(2π.f.t)
Substituting this into the integral form of Faraday:
Φ = ( ∫(√2)a.sin(2π.f.t) .dt ) / N
Φ = ((√2)a/N) ∫sin(2π.f.t) .dt
Φ = (-(√2)a/(2π.f.N)) [cos(2π.f.t)] evaluated from 0 to t
The expression with the limits of integration will always be between -1 and +1 so that the peak value of flux is given by
Φpk = (√2)a/(2π.f.N)
Φpk = a / (4.44 f.N) Wb
Example: If 230 volts at 50 Hz is applied to an inductor having 200 turns then what is the peak value of magnetic flux?
Φpk = 230 / (4.44 × 50 × 200)
Φpk = 5.18 mWb
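If you prefer to let a few lines of Python do the arithmetic, here is a minimal sketch of the peak-flux formula above (the function name is mine):

import math

def peak_flux(v_rms, freq, turns):
    # Phi_pk = sqrt(2) * V / (2 * pi * f * N), i.e. roughly V / (4.44 * f * N)
    return math.sqrt(2) * v_rms / (2 * math.pi * freq * turns)

print(peak_flux(230, 50, 200))  # ~ 0.00518 Wb, i.e. 5.18 mWb

Notice how the result scales: halving the frequency doubles the peak flux, which is the general point made next.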
One important general point: if your winding has to cope with a given signal amplitude then the core flux is proportional to the inverse of the frequency. This means, for example, that mains transformers operating at 50 or 60 Hz are larger than transformers in switching supplies (capable of handling the same power) working at 50 kHz.
Using inductance
If the material permeability is constant then the relation between flux and current is linear and, by the definition of inductance:
Φ = L×I/N webers
where L is the inductance of the coil.
Substituting into the integral form of the law:
L×I/N = ( ∫E.dt ) / N
I = ( ∫E.dt ) / L amps
If E is a constant then this formula for the current simplifies to:
I = E×t/L amps
Example: If an 820 mH coil has 2 volts applied then find the current at the end of three seconds.
I = 2×3/0.82 = 7.32 amps
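The same answer falls out of a two-line numerical check; this sketch (names are mine) simply applies I = E×t/L for a constant applied voltage:

def coil_current(voltage, seconds, inductance):
    # I = E * t / L, valid for an ideal coil with a constant voltage E applied
    return voltage * seconds / inductance

print(coil_current(2, 3, 0.82))  # ~ 7.32 A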

Conclusion:
Satisfy yourself that this result is consistent with our original formulation of Faraday's law: voltage is proportional to the rate of change of flux. Consider also how useful this integration method is in practical inductor design; if you know the number of turns on a winding and the voltage waveform across it then you integrate with respect to time and voilà, you have found the amount of flux. What's more, you found it without knowing about the inductance or the core: its permeability, size, shape, or even whether there was a core at all.
Caveat (there's always one): The coils described here are idealised.

H-Bridges: Theory and Practice

A number of web sites talk about H-bridges, they are a topic of great discussion in robotics clubs, and they are the bane of many robotics hobbyists. I periodically chime in on discussions about them, and while not an expert by a long shot, I've built a few over the years. Further, they were one of my personal stumbling blocks when I was first getting into robotics. This section of the notebook is devoted to the theory and practice of building H-bridges for controlling brushed DC motors (the most common kind you will find in hobby robotics). I've got an image of one below, shown both as a single unit and "expanded" in an exploded view.

Basic Theory

Let's start with the name, H-bridge. Sometimes called a "full bridge", the H-bridge is so named because it has four switching elements at the "corners" of the H and the motor forms the cross bar. The basic bridge is shown in the figure to the right. Of course the letter H doesn't have the top and bottom joined together, but hopefully the picture is clear. This is also something of a theme of this tutorial where I will state something, and then tell you it isn't really true :-).
The key fact to note is that there are, in theory, four switching elements within the bridge. These four elements are often called, high side left, high side right, low side right, and low side left (when traversing in clockwise order).
The switches are turned on in pairs, either high side left and low side right, or low side left and high side right, but never both switches on the same "side" of the bridge. If both switches on one side of the bridge are turned on it creates a short circuit between the battery plus and battery minus terminals. This phenomenon is called shoot-through in the Switch-Mode Power Supply (SMPS) literature. If the bridge is sufficiently powerful it will absorb that load and your batteries will simply drain quickly. Usually, however, the switches in question melt.




To power the motor, you turn on two switches that are diagonally opposed. In the picture to the right, imagine that the high side left and low side right switches are turned on. The current flow is shown in green.
The current flows and the motor begins to turn in a "positive" direction. What happens if you turn on the high side right and low side left switches? You guessed it, current flows the other direction through the motor and the motor turns in the opposite direction.
Pretty simple stuff right? Actually it is just that simple, the tricky part comes in when you decide what to use for switches. Anything that can carry a current will work, from four SPST switches, one DPDT switch, relays, transistors, to enhancement mode power MOSFETs.
One more topic in the basic theory section: quadrants. If each switch can be controlled independently then you can do some interesting things with the bridge; some folks call such a bridge a "four quadrant device" (4QD, get it?). If you build it out of a single DPDT relay, you can really only control forward or reverse. You can build a small truth table that tells you, for each combination of switch states, what the bridge will do. As each switch has one of two states, and there are four switches, there are 16 possible states. However, since any state that turns both switches on one side on is "bad" (smoke issues forth), there are in fact only four useful states (the four quadrants) where transistors are turned on.
The last two rows describe a maneuver where you "short circuit" the motor which causes the motors generator effect to work against itself. The turning motor generates a voltage which tries to force the motor to turn the opposite direction. This causes the motor to rapidly stop spinning and is called "braking" on a lot of H-bridge designs.
Of course there is also the state where all the transistors are turned off. In this case the motor coasts if it was spinning and does nothing if it was doing nothing.
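If you want the complete truth table without writing it out by hand, here is a hedged Python sketch that enumerates all 16 switch combinations and labels them using the rules above (the function and labels are mine, not from any particular H-bridge design):

from itertools import product

def classify(hl, hr, ll, lr):
    # hl, hr, ll, lr = high-left, high-right, low-left, low-right (True = switch closed)
    if (hl and ll) or (hr and lr):
        return "shoot-through: shorts the battery, smoke issues forth"
    if hl and lr:
        return "forward"
    if hr and ll:
        return "reverse"
    if (hl and hr) or (ll and lr):
        return "brake: motor terminals shorted together"
    return "coast"

for hl, hr, ll, lr in product([False, True], repeat=4):
    print(hl, hr, ll, lr, "->", classify(hl, hr, ll, lr))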

Bipolar Junction (BJT) H-Bridges

The simplest type of H-bridge you can build uses Bipolar Junction Transistors (BJTs), just called transistors from here on out. If you've never built any sort of power controller then the circuits in this section are a good introduction. The circuits can be built cheaply, control a number of easily obtained motors, and even if you burn them up you will learn something!
The tutorial is quite long and I have broken it up into several parts. If you are familiar with transistors then you can skip the Transistor Theory part

Transistor Theory
The first part of this sub-section talks a bit about the theory of operation for Bipolar Junction Transistors

Selecting the Right Transistors
Now that you understand what the transistors do for us, lets use them. This section jumps into the details of selecting some transistors to build into an H-bridge

Implementing H-Bridge Elements with BJT Transistors
The transistors in hand, now it is just a matter of implementing the four corners of the "H" and adding some way to control it from a computer port

The Complete BJT based H-Bridge
Putting the pieces together to form a single unit. A little cleverness in our shopping and we've got a $5 H-bridge

Circuit Analysis and Bring-up
Designing a circuit is only half the fun, understanding how it works and why is the real prize. This section builds up a test harness that analyses the H-bridge we're building

Using this H-Bridge design in a Robot
This section discusses laying out a printed circuit board for use in mobile robots. Ergonomics, economics, and physics all play a role

Going Further
Kits of this design are available for a modest fee; contact waqas saleem via this server for details. The kinds of motors that a power transistor H-bridge will control are generally DC gearhead motors and model motors in the 3 – 12 V range. Motors that are compatible with this H-bridge can be purchased from the following vendors:

Microprocessor Control

To use this h-bridge with a microprocessor, you must connect the three control lines to output pins on the microprocessor. Using the BasicStamp II as an example, consider the following hookup diagram.
As you can see, three pins from the Basic Stamp are connected to each H-bridge board. In this example they are P0, P1, and P2 to the board controlling the left motor and P4, P5, and P6 to the board controlling the right motor. One of the advantages of using three pins that are both right next to each other and in the same group of four bits (called a nybble) is that you can use a single variable (one of OUTA, OUTB, OUTC, or OUTD) to write to four pins at once.
This is really only important on chips like the BASIC Stamp, where there can be a millisecond or more between the execution of one instruction and the next. By connecting them this way you can cause both motors to start turning with a single instruction such as this assignment:

OUTL = $33
Whereas if you did two instructions :
OUTA = $03 OUTB = $03

You would find that the left motor started turning first, then the right motor. So on a robot that steered with two motors, the robot would make a slight turn to the right, then go straight. If you turned them off in the same sequence you would find that the robot corrected its heading back to the original heading but would not have traveled "straight" ahead. For systems that use gear motors such as the 12V Brevel motors or the Globe motors, this won't be a noticeable problem, but higher performance motors will definitely suffer.
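As a hedged illustration of the nybble trick (the helper below is hypothetical, and the meaning of each control bit depends on your particular board), the single value written to OUTL can be thought of as two motor commands packed into one byte:

def outl_value(left_motor, right_motor):
    # Left motor command sits on P0..P3, right motor command on P4..P7,
    # so a single write updates both motors in the same instruction.
    return ((right_motor & 0x0F) << 4) | (left_motor & 0x0F)

print(hex(outl_value(0x3, 0x3)))  # 0x33, matching the OUTL = $33 assignment above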
Alternatively you could use something like my ServoGizmo project to drive one or two of these boards. The AntWeight ESC code could be easily modified to drive this bridge circuit rather than the 754410; with some additional code you could even drive two of these at the same time. When the 754410 is not mounted on the Gizmo board you get 6 outputs from the PIC. If I have time I'll write a dual motor control with serial input so that you could connect the Gizmo to just one pin of the BASIC Stamp and send it serial commands to control two motors.
The easiest way to use PWM on the motor is to start with the direction and enable bits "high" or at a logic 1 value. This turns on the high side (source) transistor and leaves the sink side transistor off. You can then send "low" pulses out the ENA* line to turn the motor on and off. This would allow you to use a single 'PWM' output, such as the one that is available on the PIC16F628, to control the PWM duty cycle in hardware while the PIC managed other aspects of controlling the motor. The most common use would be to provide encoder feedback into the PIC that would allow a simple PID algorithm to be implemented. With two bits of encoder input, three bits of motor control, and two bits for serial I/O the 16F628 would be well engaged.

Summary
The previous pages have gone through the design of simple H-bridge using bipolar junction transistors. If you read through this tutorial and build the H-bridge, you will be able to use this information in many future robots. The H-bridge that is presented is well suited to a wide variety of hobby motors and because you should understand it completely, it should be easily repaired should something fail. The next step in building H-bridges is to build them out of MOSFETs.

Layout Considerations

Generally this circuit is fairly free of layout restrictions; however, there are some things that you can do to make your life easier. A sample layout is shown below. One of the things to note is that the transistors are arranged "back to back" with their tabs facing each other. In my layout I have spaced them 3/8" apart, which allows me to put a piece of 1" x 3/8" copper bar stock down the middle and secure it with #4-40 machine bolts. A 1-1/2" piece of this stock weighs about 3 oz. This basically doubles the current capacity of the bridge, and if you then bolt the copper bar to a metal enclosure you can triple the capacity to a full 6 amps continuous duty. Further, the two left transistors are the "upper" source transistors and the two right transistors are the lower "sink" transistors. That means that any thermal solution will have heat being injected from diagonal corners, which further maximizes the benefit by spreading out the heat injection. The point here is to think about whether or not you are going to put heat sinks on the transistors and lay them out accordingly.
The layout in the zip file is very slightly different from the first run; I added more room for the over-voltage snubber and added a place to put a 0.01 µF capacitor across the motor leads (cuts down on brush noise).
Alternatively you can build this bridge on a piece of perfboard and just solder it together. Be sure to use at least 18 ga wire on the legs of the transistors.

A Touch of Physics

In order to understand how diodes can have very different properties according to how they are manufactured, it is necessary to first delve a bit into the physics involved. Don't worry; we won't get any deeper in than we have to, but we have to use this method to understand the differences between different kinds of diodes, and why they can do the odd (and useful) things they do.
Consider the diagram to the right. This figure shows the important range of electron energy in four different kinds of materials. To understand this diagram, let's define a few terms.

Conduction Band
That range of electron energy where electrical conduction is possible. Electrons with this much energy are free of their parent atoms, and can move through the medium in which they exist.

Valence Band

That range of electron energy where electrical conduction is not possible. Electrons with this much energy are bound into the atomic structure of the material, and are unavailable to conduct an electrical current.

Forbidden Zone
That energy range between the valence band and the conduction band. Electrons cannot remain within this range of energy; they must either gain or lose energy so as to attain either the conduction band or the valence band.

Fermi Level
The highest energy level in the crystal that can remain populated by electrons at a temperature of Absolute Zero. Electrons with greater energy than this may be available for conduction; electrons with less energy are bound to the crystal structure.
Diagram A above represents a good conductor, such as copper or silver. Here, at temperatures above Absolute Zero, electrons are always available to conduct electrical current, even with no applied energy. In metals, the valence and conduction bands actually overlap.
Diagram B shows a typical insulator, such as glass. All electrons are pretty much locked into the atomic structure, and are unavailable as current carriers. It will take a lot of energy to break any electrons loose for conduction. It's not impossible (a lightning bolt can go through almost anything), but it takes a lot of applied energy.
Diagram C represents a crystal of N-type silicon (or germanium). The forbidden zone is still present, but much smaller than for an insulator. That's why this type of material is called a "semiconductor." With the crystal doped with N-type impurities, there are lots of electrons around with almost enough energy to roam freely, so the Fermi level gets pushed up close to the conduction band. If the doping level is heavy enough (large dosage of impurities), the Fermi level can actually enter the conduction band.
Diagram D represents a P-type semiconductor crystal. Here, the p-type impurities have left holes in the atomic structure, which tend to attract and hold free electrons. This pulls the Fermi level down until it gets close to the valence band. Similar to the highly-doped N-type crystal, a highly-doped P-type crystal will have its Fermi level within the valence band instead of just above it.
There are two important factors regarding the Fermi level in semiconductors. First, since the Fermi level is close to one of the working energy levels, it requires very little energy to push an electron over the edge and make it available for conduction. In an N-type crystal, only a very small applied voltage will kick the free electrons up into the conduction band to carry a current. In a P-type crystal, a small amount of energy will kick a bound electron just over the top of the valence band into the forbidden zone. This doesn't make the electron available for bulk conduction, but does allow the applied voltage to push the electron over into a hole, causing it to leave another hole behind it. In this way, a series of electrons can "hop" from bound state to bound state in a new location, allowing the hole to appear to move in the opposite direction. This is another way to think of hole conduction in semiconductor crystals.
The second factor to remember is that when a PN junction is formed in a single silicon or germanium crystal, the entire crystal as a whole has one Fermi level (see Diagram E above). The conduction and valence bands have differing energy levels across the crystal. As a result, the N-type conduction band is very close in energy to the P-type valence band. The transition region corresponds to the depletion region within the crystal. This is a major factor in the operation of all semiconductor devices, and helps to explain how we can get specific properties from a given device, according to just how we manufacture the semiconductor crystal.

Specialized Diodes

By adjusting the doping levels and gradients as well as the geometry of a semiconductor crystal, we can modify the behavior of the device. This page lists a wide range of diodes whose properties have been deliberately controlled to produce specific capabilities.
Each of these specialized diodes has its own schematic symbol, shown to the right of its description below. The symbols are all specific variations on the basic diode symbol, so that the nature and function of the device is clear on a schematic diagram.


Light Emitting Diode (LED)

One of the questions semiconductor manufacturers asked themselves was, "What happens if we increase the doping levels in the silicon crystal?" Trying this gave rise, among other things, to the tunnel diode. Then they took the process even further, to the point where they skipped the silicon completely, and produced what is called a "III-V" device, named after the fact that P-type dopants are from column III of the Periodic Table (aluminum, gallium, indium) and N-type dopants are from column V (phosphorus, arsenic).
The resulting Gallium Arsenide (GaAs) crystal had the interesting property of radiating significant amounts of infrared radiation from the junction. By adding Phosphorus to the equation, they shortened the wavelength of the emitted radiation until it became visible red light. Further refinements have given us yellow and green LEDs. More recently, blue LEDs have been produced, by putting nitrogen into the crystal structure. This makes full-color flat-screen LED displays possible.
The mechanism of emitting light is interesting. The atomic structure of the LED is carefully designed so that as free electrons cross the junction from the N-type side to the P-type side, the amount of energy each electron releases as it drops into a nearby hole corresponds to the energy of a photon of some particular color. Therefore, that photon is released as a visible photon of that color.

P-I-N Diode
The p-i-n diode doesn't actually have a junction at all. Rather, the middle part of the silicon crystal is left undoped. Hence the name for this device: p-intrinsic-n, or p-i-n. Because this device has an intrinsic middle section, it has a wide forbidden zone when unbiased. However, when a forward bias is applied, current carriers from the p- and n-type ends become available and conduct current even through the intrinsic center region. The end regions are heavily doped to provide more current carriers.
The p-i-n diode is highly useful as a switch for very high frequencies. They are commonly used as microwave switches and limiters.

Tunnel Diode
As we mentioned in our discussion of semiconductor physics, the addition of either P-type or N-type impurities causes the Fermi level in the silicon crystal to shift towards the valence band (P-type impurities) or the conduction band (N-type impurities). The higher the doping level, the greater the shift. In the tunnel diode, the doping levels are so high that the Fermi levels in both halves of the crystal have been pushed completely out of the forbidden zone and into the valence and conduction bands.
As a result, at very low forward voltages, electrons don't have to gain energy to get over the Fermi level or into the conduction band; they can simply "tunnel through" the junction and appear at the other side. Furthermore, as the forward bias increases, the applied voltage shifts the levels apart, and gradually back to the more usual diode energy pattern. Over this applied forward voltage range, diode current actually decreases as applied voltage increases. Thus, over part of its operating range, the tunnel diode exhibits a negative resistance effect. This makes it useful in very high frequency oscillators and related circuitry.

Varactor Diode
One characteristic of any PN junction is an inherent capacitance. When the junction is reverse biased, increasing the applied voltage will cause the depletion region to widen, thus increasing the effective distance between the two "plates" of the capacitor and decreasing the effective capacitance.
By adjusting the doping gradient and junction width, we can control the capacitance range and the way capacitance changes with applied reverse voltage. A four-to-one capacitance range is no problem; a typical varactor diode (sometimes called a "varicap diode") might vary from 60 picofarads (pf) at zero bias down to 15 pf at 20 volts. Very careful manufacturing can get a capacitance range of up to ten-to-one, although this seems at present to be a practical limit.
Varactor diodes are used in electronic tuning systems, to eliminate the use of and need for moving parts.

Zener Diode
When the reverse voltage applied to a diode exceeds the capability of the diode to withstand it, one of two things will happen, yielding essentially the same result in either case. If the junction is wide, a process called avalanche breakdown occurs, whereby the current through the diode increases as much as the external circuit will permit. A narrow junction will experience Zener breakdown, which is a different mechanism but has the same effect.
The useful feature here is that the voltage across the diode remains nearly constant even with large changes in current through the diode. In addition, manufacturing techniques allow diodes to be accurately manufactured with breakdown voltages ranging from a few volts up to several hundred volts. Such diodes find wide use in electronic circuits as voltage regulators.

Schottky Barrier Diode
When we get into high-speed applications for electronic circuits, one of the problems exhibited by semiconductor devices is a phenomenon called charge storage. This term refers to the fact that both free electrons and holes tend to accumulate inside a semiconductor crystal while it is conducting, and must be removed before the semiconductor device will turn off. This is not a major problem with free electrons, as they have high mobility and will rapidly leave the semiconductor device. However, holes are another story. They must be filled more gradually by electrons jumping from bond to bond. Thus, it takes time for a semiconductor device to completely stop conducting. This problem is even worse for a transistor in saturation, since then by definition the base region has an excess of minority carriers, which tend to promote conduction even when the external drive is removed.
The solution is to design a semiconductor diode with no P-type semiconductor region, and therefore no holes as current carriers. Such a diode, known as a Schottky Barrier Diode, places a rectifying metal contact on one side of an N-type semiconductor block. For example, an aluminum contact will act as the P-type connection, without requiring a significant P-type semiconductor region.
This diode construction has two advantages in certain types of circuits. First, they can operate at very high frequencies, because they can turn off as fast as they can turn on. Second, they have a very low forward voltage drop. This is used to advantage in a number of ways, including as an addition to TTL ICs. When a Schottky diode is placed across the collector-base junction of a transistor as shown to the right, it prevents the transistor from becoming saturated, by bypassing the excess base current around the transistor. Therefore, the transistor can turn off faster, thus increasing the switching speed of the IC. The full power versions of these TTL ICs are the 74S00 series, and have switching speeds similar to ECL, and similar power requirements. The low power versions, the 74LS00 series, have switching times comparable to standard TTL, but with a much lower power requirement.
Experimentation is always in progress, and new applications are invented regularly. As new diode types come to my attention, I will add them to the list above. If you should hear of a diode type not yet in the list, please contact me and let me know. I will research the device and add it as quickly as possible. Thanks.