Wednesday, May 6, 2009

Nondestructive Testing
The field of Nondestructive Testing (NDT) is a very broad, interdisciplinary field that plays a critical role in assuring that structural components and systems perform their function in a reliable and cost-effective fashion. NDT technicians and engineers define and implement tests that locate and characterize material conditions and flaws that might otherwise cause planes to crash, reactors to fail, trains to derail, pipelines to burst, and a variety of less visible, but equally troubling events. These tests are performed in a manner that does not affect the future usefulness of the object or material. In other words, NDT allows parts and materials to be inspected and measured without damaging them. Because it allows inspection without interfering with a product's final use, NDT provides an excellent balance between quality control and cost-effectiveness. Generally speaking, NDT applies to industrial inspections. While technologies are used in NDT that are similar to those used in the medical industry, typically nonliving objects are the subjects of the inspections.
Nondestructive Evaluation
Nondestructive Evaluation (NDE) is a term that is often used interchangeably with NDT. However, technically, NDE is used to describe measurements that are more quantitative in nature. For example, an NDE method would not only locate a defect, but it would also be used to measure something about that defect such as its size, shape, and orientation. NDE may be used to determine material properties such as fracture toughness, formability, and other physical characteristics.
NDT/NDE Methods
The number of NDT methods that can be used to inspect components and make measurements is large and continues to grow. Researchers continue to find new ways of applying physics and other scientific disciplines to develop better NDT methods. However, there are six NDT methods that are used most often. These methods are visual inspection, penetrant testing, magnetic particle testing, electromagnetic or eddy current testing, radiography, and ultrasonic testing. These methods and a few others are briefly described below.
Visual and Optical Testing (VT)
Visual inspection involves using an inspector's eyes to look for defects. The inspector may also use special tools such as magnifying glasses, mirrors, or borescopes to gain access and more closely inspect the subject area. Visual examiners follow procedures that range from simple to very complex.
Penetrant Testing (PT)
Test objects are coated with visible or fluorescent dye solution. Excess dye is then removed from the surface, and a developer is applied. The developer acts as a blotter, drawing trapped penetrant out of imperfections open to the surface. With visible dyes, vivid color contrasts between the penetrant and developer make "bleedout" easy to see. With fluorescent dyes, ultraviolet light is used to make the bleedout fluoresce brightly, thus allowing imperfections to be readily seen.
Magnetic Particle Testing (MT)
This NDE method is accomplished by inducing a magnetic field in a ferromagnetic material and then dusting the surface with iron particles (either dry or suspended in liquid). Surface and near-surface imperfections distort the magnetic field and concentrate iron particles near imperfections, producing a visible indication of the flaw.
Electromagnetic Testing (ET) or Eddy Current Testing
Electrical currents are generated in a conductive material by an induced alternating magnetic field. The electrical currents are called eddy currents because they flow in circles at and just below the surface of the material. Interruptions in the flow of eddy currents, caused by imperfections, dimensional changes, or changes in the material's conductivity and permeability, can be detected with the proper equipment.
Radiography (RT)
Radiography involves the use of penetrating gamma or X-radiation to examine parts and products for imperfections. An X-ray generator or radioactive isotope is used as a source of radiation. Radiation is directed through a part and onto film or other imaging media. The resulting shadowgraph shows the dimensional features of the part. Possible imperfections are indicated as density changes on the film in the same manner as a medical X-ray shows broken bones.
Ultrasonic Testing (UT)
Ultrasonic testing uses the transmission of high-frequency sound waves into a material to detect imperfections or to locate changes in material properties. The most commonly used ultrasonic testing technique is pulse echo, wherein sound is introduced into a test object and reflections (echoes) are returned to a receiver from internal imperfections or from the part's geometrical surfaces.
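A minimal sketch of the pulse-echo principle (not part of the original text): the echo's round-trip time and the material's sound velocity give the reflector depth. The steel velocity below is a typical handbook value, used only as an illustration.

```python
# Pulse-echo depth: sound travels to the reflector and back,
# so the depth is half the velocity-time product.

def pulse_echo_depth(velocity_m_per_s: float, round_trip_time_s: float) -> float:
    """Depth of a reflector from the measured round-trip echo time."""
    return velocity_m_per_s * round_trip_time_s / 2.0

# Example: steel (~5900 m/s longitudinal), echo received after 10 microseconds
print(pulse_echo_depth(5900.0, 10e-6))  # ~0.0295 m, i.e. 29.5 mm deep
```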
Acoustic Emission Testing (AE)
When a solid material is stressed, imperfections within the material emit short bursts of acoustic energy called "emissions." As in ultrasonic testing, acoustic emissions can be detected by special receivers. Emission sources can be evaluated through the study of their intensity, rate, and location.
Leak Testing (LT)
Several techniques are used to detect and locate leaks in pressure containment parts, pressure vessels, and structures. Leaks can be detected by using electronic listening devices, pressure gauge measurements, liquid and gas penetrant techniques, and/or a simple soap-bubble test.


NDT Method Summary
No single NDT method will work for all flaw detection or measurement applications. Each of the methods has advantages and disadvantages when compared to other methods. The summary below covers the scientific principles, common uses, and the advantages and disadvantages for some of the most often used NDT methods.
Scientific Principles

Penetrant Testing: Penetrant solution is applied to the surface of a precleaned component. The liquid is pulled into surface-breaking defects by capillary action. Excess penetrant material is carefully cleaned from the surface. A developer is applied to pull the trapped penetrant back to the surface where it is spread out and forms an indication. The indication is much easier to see than the actual defect.

Magnetic Particle Testing: A magnetic field is established in a component made from ferromagnetic material. The magnetic lines of force travel through the material, and exit and reenter the material at the poles. Defects such as cracks or voids cannot support as much flux, and force some of the flux outside of the part. Magnetic particles distributed over the component will be attracted to areas of flux leakage and produce a visible indication.

Ultrasonic Testing: High frequency sound waves are sent into a material by use of a transducer. The sound waves travel through the material and are received by the same transducer or a second transducer. The amount of energy transmitted or received and the time the energy is received are analyzed to determine the presence of flaws. Changes in material thickness and changes in material properties can also be measured.

Eddy Current Testing: Alternating electrical current is passed through a coil, producing a magnetic field. When the coil is placed near a conductive material, the changing magnetic field induces current flow in the material. These currents travel in closed loops and are called eddy currents. Eddy currents produce their own magnetic field that can be measured and used to find flaws and characterize conductivity, permeability, and dimensional features.

Radiographic Testing: X-rays are used to produce images of objects using film or another detector that is sensitive to radiation. The test object is placed between the radiation source and detector. The thickness and the density of the material that X-rays must penetrate affect the amount of radiation reaching the detector. This variation in radiation produces an image on the detector that often shows internal features of the test object.
Main Uses

Penetrant Testing: Used to locate cracks, porosity, and other defects that break the surface of a material and have enough volume to trap and hold the penetrant material. Liquid penetrant testing is used to inspect large areas very efficiently and will work on most nonporous materials.

Magnetic Particle Testing: Used to inspect ferromagnetic materials (those that can be magnetized) for defects that result in a transition in the magnetic permeability of a material. Magnetic particle inspection can detect surface and near-surface defects.

Ultrasonic Testing: Used to locate surface and subsurface defects in many materials including metals, plastics, and wood. Ultrasonic inspection is also used to measure the thickness of materials and otherwise characterize properties of material based on sound velocity and attenuation measurements.

Eddy Current Testing: Used to detect surface and near-surface flaws in conductive materials, such as metals. Eddy current inspection is also used to sort materials based on electrical conductivity and magnetic permeability, and to measure the thickness of thin sheets of metal and nonconductive coatings such as paint.

Radiographic Testing: Used to inspect almost any material for surface and subsurface defects. X-rays can also be used to locate and measure internal features, confirm the location of hidden parts in an assembly, and measure the thickness of materials.
Main Advantages

Penetrant Testing:
• Large surface areas or large volumes of parts/materials can be inspected rapidly and at low cost.
• Parts with complex geometry are routinely inspected.
• Indications are produced directly on the surface of the part, providing a visual image of the discontinuity.
• Equipment investment is minimal.

Magnetic Particle Testing:
• Large surface areas of complex parts can be inspected rapidly.
• Can detect surface and subsurface flaws.
• Surface preparation is less critical than it is in penetrant inspection.
• Magnetic particle indications are produced directly on the surface of the part and form an image of the discontinuity.
• Equipment costs are relatively low.

Ultrasonic Testing:
• Depth of penetration for flaw detection or measurement is superior to other methods.
• Only single-sided access is required.
• Provides distance information.
• Minimum part preparation is required.
• Method can be used for much more than just flaw detection.

Eddy Current Testing:
• Detects surface and near-surface defects.
• Test probe does not need to contact the part.
• Method can be used for more than flaw detection.
• Minimum part preparation is required.

Radiographic Testing:
• Can be used to inspect virtually all materials.
• Detects surface and subsurface defects.
• Ability to inspect complex shapes and multi-layered structures without disassembly.
• Minimum part preparation is required.
Disadvantages

Penetrant Testing:
• Detects only surface-breaking defects.
• Surface preparation is critical, as contaminants can mask defects.
• Requires a relatively smooth and nonporous surface.
• Post cleaning is necessary to remove chemicals.
• Requires multiple operations under controlled conditions.
• Chemical handling precautions are necessary (toxicity, fire, waste).

Magnetic Particle Testing:
• Only ferromagnetic materials can be inspected.
• Proper alignment of magnetic field and defect is critical.
• Large currents are needed for very large parts.
• Requires a relatively smooth surface.
• Paint or other nonmagnetic coverings adversely affect sensitivity.
• Demagnetization and post cleaning are usually necessary.

Ultrasonic Testing:
• Surface must be accessible to probe and couplant.
• Skill and training required is more extensive than for other techniques.
• Surface finish and roughness can interfere with inspection.
• Thin parts may be difficult to inspect.
• Linear defects oriented parallel to the sound beam can go undetected.
• Reference standards are often needed.

Eddy Current Testing:
• Only conductive materials can be inspected.
• Ferromagnetic materials require special treatment to address magnetic permeability.
• Depth of penetration is limited.
• Flaws that lie parallel to the inspection probe coil winding direction can go undetected.
• Skill and training required is more extensive than for other techniques.
• Surface finish and roughness may interfere.
• Reference standards are needed for setup.

Radiographic Testing:
• Extensive operator training and skill are required.
• Access to both sides of the structure is usually required.
• Orientation of the radiation beam to non-volumetric defects is critical.
• Field inspection of thick sections can be time consuming.
• Relatively expensive equipment investment is required.
• Possible radiation hazard for personnel.

Standards and Specifications
A standard is something that is established for use as a basis of comparison. There are standards for practically everything that can be measured or evaluated ... from time to materials to processes. Congress created the National Institute of Standards and Technology (NIST) in 1901 at the start of the industrial revolution to provide the measurements and standards needed to resolve and prevent disputes over trade and to encourage standardization. NIST develops technologies, measurement methods and standards that help US companies compete in the global marketplace. NDT personnel are sometimes required to use calibration standards that are traceable back to a standard held by NIST. This might be a conductivity standard, which can be shown to have the same electrical conductivity as a NIST standard; or it could be a setup standard that was measured with a micrometer that was calibrated using a NIST standard.
A notable development of the twentieth century is the preparation and use of standard specifications to improve the consistency of manufacturing materials and processes, and the resulting products. A specification is a detailed description of how to produce something or how to perform a particular task. Anytime a product is marked as meeting a specification, or a contract requires use of a specification, the product or service must meet the requirements of the document. A standard specification is the result of agreement among the involved parties and usually involves acceptance for use by some organization. Standard specifications do not, however, necessarily imply a degree of permanence (like dimensional or volumetric standards), because technical advances in a given field usually call for periodic revisions to the requirements.
Properly prepared, standards can be of great value to industry. Some of the advantages are:
• They usually represent the combined knowledge of a large group of individuals including producers, consumers and other interested parties, and, thus, reduce the possibility of misinterpretation.
• They give the manufacturer a standard of production and, therefore, tend to result in a more uniform process or product.
• They lower unit cost by making standard processes and mass production possible.
• They permit the consumer to use a specification that has been tried and is enforceable.
• They set standards of testing and measurement and hence permit the comparison of results.
The disadvantage of standard specifications is that they tend to "freeze" practices, sometimes based on little data or knowledge, and slow the development of better practices.
Standards always represent an effort by some organized group of people. Any such organization, be it public or private, becomes the standardizing agency. Various levels of these agencies exist, ranging from a single business to local government to national groups to international organizations. The professional and industrial organizations in the United States that lead the development of standards relative to the field of NDT include ASTM International, the Society of Automotive Engineers (SAE), the American Iron and Steel Institute (AISI), the American Welding Society (AWS) and ASME International. Many specifications have also been developed by US government agencies such as the Department of Defense (DOD). However, the US government is downscaling its specification efforts and many military specifications are being converted to specifications controlled by industry groups. For example, MIL-I-25135 has historically been the controlling document for both military and civilian penetrant material uses. The recent change in military specification management has led to the requirement that the Mil specification be incorporated into SAE's AMS 2644, and industry is transitioning toward the use of this specification.
Generally, the desired tendency is for a given standard to become more uniformly used and accepted. One method of increasing standardization is for a large agency to adopt a standard developed by a smaller one. In the US, thousands of standard specifications are recognized by the American National Standards Institute (ANSI), which is a national, yet private, coordinating agency. At the international level, the International Organization for Standardization (ISO) performs this function. The ISO was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The United States is represented within the ISO by ANSI.
Additional information and links to the standards and specification organizations previously mentioned are provided below.
ASTM International
Partial list of ASTM standards relative to NDT

Founded in 1898, ASTM International is a not-for-profit organization that provides a global forum for the development and publication of voluntary consensus standards for materials, products, systems, and services. Formerly known as the American Society for Testing and Materials, ASTM International provides standards that are accepted and used in research and development, product testing, quality systems, and commercial transactions around the globe. Its more than 30,000 members, drawn from over 100 nations, are producers, users, consumers, and representatives of government and academia. In over 130 varied industry areas, ASTM standards serve as the basis for manufacturing, procurement, and regulatory activities.
Each year, ASTM publishes the Annual Book of ASTM Standards, which consists of approximately 70 volumes. Most of the NDT-related documents can be found in Volume 03.03, Nondestructive Testing, which is under the jurisdiction of ASTM Committee E-7. Each standard practice or guide is the direct responsibility of a subcommittee. For example, document E-94 is the responsibility of Subcommittee E07.01 on Radiology (X and Gamma) Methods. This subcommittee, composed of technical experts from many different industries, must review the document every five years; if the document is not revised, it must be reapproved or withdrawn.
The Society of Automotive Engineers (SAE)
Partial list of SAE standards relative to NDT

The Society of Automotive Engineers is a professional society that serves as a resource for technical information and expertise used in designing, building, maintaining, and operating self-propelled vehicles for use on land or sea, in air or space. Over 83,000 engineers, business executives, educators, and students from more than 97 countries make up the membership, sharing information and exchanging ideas for advancing the engineering of mobility systems. SAE is responsible for developing several different documents for the aerospace community. These documents include: Aerospace Standards (AS), Aerospace Material Specifications (AMS), Aerospace Recommended Practices (ARP), Aerospace Information Reports (AIR) and Ground Vehicle Standards (J-Standards). The documents are developed by SAE Committee K members, who are technical experts from the aerospace community.

ASME International
More information on the Boiler & Pressure Vessel Code

ASME International was founded in 1880 as the American Society of Mechanical Engineers. It is a nonprofit educational and technical organization serving a worldwide membership of 125,000. ASME maintains and distributes 600 codes and standards used around the world for the design, manufacturing and installation of mechanical devices. One of these codes is the Boiler and Pressure Vessel Code, which controls the design, inspection, and repair of pressure vessels. Inspection plays a big part in keeping these components operating safely. More information about the B&PV Code can be found via the link above.

The American Welding Society
Partial list of AWS Standards and Documents relative to NDT

The American Welding Society (AWS) was founded in 1919 as a multifaceted, nonprofit organization with a goal to advance the science, technology and application of welding and related joining disciplines. AWS serves 50,000 members worldwide. Membership consists of engineers, scientists, educators, researchers, welders, inspectors, welding foremen, company executives and officers, and sales associates.

The International Organization for Standardization (ISO)
Partial list of ISO standards and documents relative to NDT

The International Organization for Standardization (ISO) was formed in 1947 as a non-governmental federation of standardization bodies from over 60 countries. The ISO is headquartered in Geneva, Switzerland. The United States is represented by ANSI.
The Air Transport Association (ATA)

Founded by a group of 14 airlines in 1936, the ATA was the first, and today remains, the only trade organization for the principal US airlines. The purpose of the ATA is to support and assist its members by promoting the air transport industry and the safety, cost effectiveness, and technological advancement of its operations; advocating common industry positions before state and local governments; conducting designated industry-wide programs; and assuring governmental and public understanding of all aspects of air transport. There are two ATA documents that serve as guidelines for the training of inspection personnel.
• ATA Specification 105, Guidelines for Training and Qualifying Personnel in Non-Destructive Testing
Methods. This document serves as a guideline for the development of a training program for personnel who accomplish nondestructive testing tasks. While partially derived from more universal training standards such as ASNT SNT-TC-1A and NAS 410, this document is dedicated to preparing a curriculum for an airline's maintenance training program and qualifying individuals to conduct aircraft inspections.
• ATA Specification 107, Visual Inspection Personnel Training and Qualification Guide for FAR Part 121
Air Carriers. This document addresses training and qualification needs of the aircraft inspection technician and recommends a minimum list of required inspection items.

The Aerospace Industries Association

The Aerospace Industries Association represents the nation's major manufacturers of commercial, military and business aircraft, helicopters, aircraft engines, missiles, spacecraft, materials, and related components and equipment. The AIA has been an aerospace industry trade association since 1919. It was originally known as the Aeronautical Chamber of Commerce (ACCA). The AIA is responsible for two NDT related documents, which are:
• NAS 410, Certification & Qualification of Nondestructive Test Personnel. This widely used aerospace industry document replaces MIL-STD-410E, Military Standard, Nondestructive Testing Personnel Qualification and Certification.
• NAS 999, Nondestructive Inspection of Advanced Composite Structure.

The American National Standards Institute (ANSI)

ANSI is a private, nonprofit organization that administers and coordinates the US voluntary standardization and conformity assessment system. The Institute's mission is to enhance both the global competitiveness of US business and the US quality of life by promoting and facilitating voluntary consensus standards and conformity assessment systems, and safeguarding their integrity.
US Department of Defense Specifications - A list of DOD specifications (Mil Specs, NAV, etc.) was not prepared, since the trend is to move away from their use and more documents are being canceled or made inactive every day. Information on DOD specifications can be found at the following web site.
The Department of Defense Single Stock Point for Military Specifications, Standards and Related Publications



NDT Method Selection
Each NDT method has its own set of advantages and disadvantages and, therefore, some are better suited than others for a particular application. The NDT technician or engineer must select the method that will detect the defect or make the measurement with the highest sensitivity and reliability. The cost effectiveness of the technique must also be taken into consideration. The following table provides some guidance in the selection of NDT methods for common flaw detection and measurement applications.
Introduction to Materials
This section will provide a basic introduction to materials and material fabrication processing. It is important that NDT personnel have some background in material science for a couple of reasons. First, nondestructive testing almost always involves the interaction of energy of some type (mechanics, sound, electricity, magnetism or radiation) with a material. To understand how energy interacts with a material, it is necessary to know a little about the material. Second, NDT often involves detecting manufacturing defects and service-induced damage and, therefore, it is necessary to understand how defects and damage occur.
This section will begin with an introduction to the four common types of engineering materials. The structure of materials at the atomic level will then be considered, along with some atomic level features that give materials their characteristic properties. Some of the properties that are important for the structural performance of a material and methods for modifying these properties will also be covered.
In the second half of this text, methods used to shape and form materials into useful shapes will be discussed. Some of the defects that can occur during the manufacturing process, as well as service induced damage will be highlighted. This section will conclude with a summary of the role that NDT plays in ensuring the structural integrity of a component.

Ionic Bonds
Ionic bonding occurs between charged particles. These may be atoms or groups of atoms, but this discussion will be conducted in terms of single atoms. Ionic bonding occurs between metal atoms and nonmetal atoms. Metals usually have 1, 2, or 3 electrons in their outermost shell. Nonmetals have 5, 6, or 7 electrons in their outer shell. Atoms with outer shells that are only partially filled are unstable. To become stable, the metal atom wants to get rid of one or more electrons in its outer shell. Losing electrons will either result in an empty outer shell or get it closer to having an empty outer shell. It would like to have an empty outer shell because the next lower energy shell is a stable shell with eight electrons.


Since electrons have a negative charge, an atom that gains electrons becomes a negatively charged ion (an anion) because it now has more electrons than protons. Alternately, an atom that loses electrons becomes a positively charged ion (a cation). The particles in an ionic compound are held together because the oppositely charged particles are attracted to one another.
The images above schematically show the process that takes place during the formation of an ionic bond between sodium and chlorine atoms. Note that sodium has one valence electron that it would like to give up so that it would become stable with a full outer shell of eight. Also note that chlorine has seven valence electrons and it would like to gain an electron in order to have a full shell of eight. The transfer of the electron causes the previously neutral sodium atom to become a positively charged ion (cation), and the previously neutral chlorine atom to become a negatively charged ion (anion). The attraction between the cation and the anion is called the ionic bond.
Some Common Features of Materials with Ionic Bonds:
• Hard
• Good insulators
• Transparent
• Brittle or cleave rather than deform
Covalent Bonding
When a compound contains only nonmetal atoms, a covalent bond is formed by atoms sharing two or more electrons. Nonmetals have 4 or more electrons in their outer shells (except boron). With this many electrons in the outer shell, it would require more energy to remove the electrons than would be gained by making new bonds. Therefore, the atoms involved share a pair of electrons. Each atom gives one of its outer electrons to the electron pair, which then spends some time with each atom. Consequently, both atoms are held near each other since both atoms have a share in the electrons.


More than one electron pair can be formed with half of the electrons coming from one atom and the rest from the other atom. An important feature of this bond is that the electrons are tightly held and equally shared by the participating atoms. The atoms can be of the same element or different elements. In each molecule, the bonds between the atoms are strong but the bonds between molecules are usually weak. This makes many solid materials with covalent bonds brittle. Many ceramic materials have covalent bonds.
Compounds with covalent bonds may be solid, liquid or gas at room temperature depending on the number of atoms in the compound. The more atoms in each molecule, the higher a compound's melting and boiling temperature will be. Since most covalent compounds contain only a few atoms and the forces between molecules are weak, most covalent compounds have low melting and boiling points. However, some molecules, like those of carbon compounds, can be very large. An example is diamond, in which carbon atoms each share four electrons to form a giant lattice.
Some Common Features of Materials with Covalent Bonds:
• Hard
• Good insulators
• Transparent
• Brittle or cleave rather than deform


Atomic Bonding
(Metallic, Ionic, Covalent, and van der Waals Bonds)
From elementary chemistry it is known that the atomic structure of any element is made up of a positively charged nucleus surrounded by electrons revolving around it. An element's atomic number indicates the number of positively charged protons in the nucleus. The atomic weight of an atom indicates how many protons and neutrons are in the nucleus. To determine the number of neutrons in an atom, the atomic number is simply subtracted from the atomic weight.
Atoms like to have a balanced electrical charge. Therefore, they usually have negatively charged electrons surrounding the nucleus in numbers equal to the number of protons. It is also known that electrons are present with different energies and it is convenient to consider these electrons surrounding the nucleus in energy "shells." For example, magnesium, with an atomic number of 12, has two electrons in the inner shell, eight in the second shell and two in the outer shell.
All chemical bonds involve electrons. Atoms will stay close together if they have a shared interest in one or more electrons. Atoms are at their most stable when they have no partially filled electron shells. If an atom has only a few electrons in a shell, it will tend to lose them to empty the shell. These elements are metals. When metal atoms bond, a metallic bond occurs. When an atom has a nearly full electron shell, it will try to find electrons from another atom so that it can fill its outer shell. These elements are usually described as nonmetals. The bond between two nonmetal atoms is usually a covalent bond. Where metal and nonmetal atoms come together, an ionic bond occurs. There are also other, less common, types of bonds, but the details are beyond the scope of this material. On the next few pages, the metallic, covalent and ionic bonds will be covered in more detail.
Van der Waals Bond
Van der Waals bonds occur to some extent in all materials but are particularly important in plastics and polymers. These materials are made up of long, string-like molecules consisting of carbon atoms covalently bonded with other atoms, such as hydrogen, nitrogen, oxygen, or fluorine. The covalent bonds within the molecules are very strong and rupture only under extreme conditions. The bonds between the molecules that allow sliding and rupture to occur are called van der Waals forces.
When ionic and covalent bonds are present, there is some imbalance in the electrical charge of the molecule. Take water as an example. Research has determined that the hydrogen atoms are bonded to the oxygen atom at an angle of 104.5°. This angle produces a positive polarity at the hydrogen-rich end of the molecule and a negative polarity at the other end. A result of this charge imbalance is that water molecules are attracted to each other. This is the force that holds the molecules together in a drop of water.
This same concept can be carried over to plastics, except that as molecules become larger, the van der Waals forces between molecules also increase. For example, in polyethylene the molecules are composed of hydrogen and carbon atoms in the same ratio as ethylene gas. But there are more of each type of atom in the polyethylene molecules and, as the number of atoms in a molecule increases, the matter passes from a gas to a liquid and finally to a solid.
Polymers are often classified as being either a thermoplastic or a thermosetting material. Thermoplastic materials can be easily remelted for forming or recycling; thermosetting materials cannot. Thermoplastic materials consist of long, chainlike molecules. Heat can be used to break the van der Waals forces between the molecules and change the form of the material from a solid to a liquid. By contrast, thermosetting materials have a three-dimensional network of covalent bonds. These bonds cannot be easily broken by heating and, therefore, the materials cannot be remelted and formed as easily as thermoplastics.

General Material Classifications
There are thousands of materials available for use in engineering applications. Most fall into one of three classes that are based on the atomic bonding forces of a particular material: metallic, ceramic and polymeric. Additionally, different materials can be combined to create a composite material. Within each of these classifications, materials are often further organized into groups based on their chemical composition or certain physical or mechanical properties. Composite materials are often grouped by the types of materials combined or the way the materials are arranged together. Below is a list of some of the common classifications of materials within these four general groups.
Metals
• Ferrous metals and alloys (irons, carbon steels, alloy steels, stainless steels, tool and die steels)
• Nonferrous metals and alloys (aluminum, copper, magnesium, nickel, titanium, precious metals, refractory metals, superalloys)

Polymeric
• Thermoplastic plastics
• Thermoset plastics
• Elastomers

Ceramics
• Glasses
• Glass ceramics
• Graphite
• Diamond

Composites
• Reinforced plastics
• Metal-matrix composites
• Ceramic-matrix composites
• Sandwich structures
• Concrete


Polymers
A polymeric solid can be thought of as a material that contains many chemically bonded parts or units which themselves are bonded together to form a solid. The word polymer literally means "many parts." Two industrially important polymeric materials are plastics and elastomers. Plastics are a large and varied group of synthetic materials which are processed by forming or molding into shape. Just as there are many types of metals such as aluminum and copper, there are many types of plastics, such as polyethylene and nylon. Elastomers or rubbers can be elastically deformed a large amount when a force is applied to them and can return to their original shape (or almost) when the force is released.
Polymers have many properties that make them attractive to use in certain conditions. Many polymers:
• are less dense than metals or ceramics,
• resist atmospheric and other forms of corrosion,
• offer good compatibility with human tissue, or
• exhibit excellent resistance to the conduction of electrical current.
The polymer plastics can be divided into two classes, thermoplastics and thermosetting plastics, depending on how they are structurally and chemically bonded. Thermoplastic polymers include the four most important commodity materials (polyethylene, polypropylene, polystyrene and polyvinyl chloride) as well as a number of specialized engineering polymers. The term 'thermoplastic' indicates that these materials melt on heating and may be processed by a variety of molding and extrusion techniques. 'Thermosetting' polymers, in contrast, cannot be remelted once cured. Thermosetting polymers include alkyds, amino and phenolic resins, epoxies, polyurethanes, and unsaturated polyesters.
Rubber is a naturally occurring polymer. However, most polymers are created by engineering the combination of hydrogen and carbon atoms and the arrangement of the chains they form. The polymer molecule is a long chain of covalently bonded atoms, and secondary bonds then hold groups of polymer chains together to form the polymeric material. Polymers are primarily produced from petroleum or natural gas raw products, but the use of organic substances is growing. The super-material known as Kevlar is a man-made polymer. Kevlar is used in bullet-proof vests, strong/lightweight frames, and underwater cables that are 20 times stronger than steel.



Plastics, Resins, and Phenolics

Material | Longitudinal Velocity (cm/µs) | Longitudinal Velocity (in/µs) | Shear Velocity (cm/µs) | Shear Velocity (in/µs) | Density (g/cm³) | Acoustic Impedance (g/cm²-s × 10⁵)
Acrylic Resin | .267 | .1051 | .112 | .0441 | 1.18 | 3.151
Bakelite | .259 | .102 | N/A | N/A | 1.40 | 3.626
Bakelite (cloth filled) | .271 | .1067 | N/A | N/A | N/A | N/A
Cellulose Acetate | .245 | .0965 | N/A | N/A | 1.30 | 3.185
Hysol | .277 | .1091 | N/A | N/A | N/A | N/A
Kel-F | .179 | .0705 | N/A | N/A | N/A | N/A
Lucite | .268 | .1055 | .126 | .0496 | 1.18 | 3.1624
Micarta (linen base) | .3 | .1181 | N/A | N/A | N/A | N/A
Nylon 6,6 | .168 | .0661 | N/A | N/A | N/A | N/A
Nylon | .262 | .1031 | N/A | N/A | N/A | N/A
Phenolic | .142 | .0559 | N/A | N/A | 1.34 | 1.903
Plexiglass UVA | .276 | .1087 | N/A | N/A | 1.27 | 3.505
Plexiglass UVAII | .273 | .1075 | .143 | .0563 | 1.18 | 3.221
Polyethylene | .267 | .1051 | N/A | N/A | 1.10 | 2.937
Polyethylene TCI | .16 | .063 | N/A | N/A | N/A | N/A
Polyimide (Vespel SP-1) | .244 | .0961 | N/A | N/A | 1.48 | 3.61
Polystyrene | .267 | .1051 | N/A | N/A | 1.10 | 2.937
Polystyrol | .15 | .0591 | N/A | N/A | N/A | N/A
Refrasil | .375 | .1476 | N/A | N/A | 1.73 | 6.488
Teflon | .135 | .0531 | N/A | N/A | 2.20 | 2.97
Data compiled by Xactex Corporation. Sources of original data are unknown.
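As a consistency check on the table's units (not part of the original text), the acoustic impedance column can be recomputed from the density and longitudinal velocity columns, Z = ρv, assuming velocities are in cm/µs as restored above. A minimal sketch:

```python
# Acoustic impedance from density and velocity. With density in g/cm^3
# and velocity in cm/us (= 1e6 cm/s), Z comes out in g/(cm^2*s).

def acoustic_impedance(density_g_cm3: float, velocity_cm_per_us: float) -> float:
    """Acoustic impedance in units of 1e5 g/(cm^2*s), as tabulated above."""
    velocity_cm_per_s = velocity_cm_per_us * 1e6
    return density_g_cm3 * velocity_cm_per_s / 1e5

print(acoustic_impedance(1.18, 0.267))  # acrylic resin: ~3.15, matching the table
```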


Metals
Metals account for about two thirds of all the elements and about 24% of the mass of the planet. Metals have useful properties including strength, ductility, high melting points, thermal and electrical conductivity, and toughness. From the periodic table, it can be seen that a large number of the elements are classified as being a metal. A few of the common metals and their typical uses are presented below.
Common Metallic Materials
• Iron/Steel - Steel alloys are used for strength-critical applications.
• Aluminum - Aluminum and its alloys are used because they are easy to form, readily available, inexpensive, and recyclable.
• Copper - Copper and copper alloys have a number of properties that make them useful, including high electrical and thermal conductivity, high ductility, and good corrosion resistance.
• Titanium - Titanium alloys are used for strength in higher temperature (~1000° F) applications, when component weight is a concern, or when good corrosion resistance is required.
• Nickel - Nickel alloys are used for still higher temperature (~1500-2000° F) applications or when good corrosion resistance is required.
• Refractory materials are used for the highest temperature (> 2000° F) applications.

The key feature that distinguishes metals from non-metals is their bonding. Metallic materials have valence electrons that are free to move easily from one atom to the next. The existence of these free electrons has a number of profound consequences for the properties of metallic materials. For example, metallic materials tend to be good electrical conductors because the free electrons can move around within the metal so freely. More on the structure of metals will be discussed later.
Eddy Current Inspection Formulae

Ohm's Law

I = V / Z

Where:
I = Current (amperes)
V = Voltage (volts)
Z = Impedance (ohms)

Impedance

Z = √(R² + XL²)

Where:
Z = Impedance (ohms)
R = Resistance (ohms)
XL = Inductive Reactance (ohms)

Phase Angle

θ = arctan(XL / R)

Where:
θ = Phase Angle (degrees)
XL = Inductive Reactance (ohms)
R = Resistance (ohms)

Magnetic Permeability

μ = B / H

Where:
μ = Magnetic Permeability (henries/meter)
B = Magnetic Flux Density (tesla)
H = Magnetizing Force (amperes/meter)
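A minimal sketch of these relations in Python (illustrative only; the series R-L circuit values below are made-up examples):

```python
import math

# Impedance and phase angle of a series R-L circuit (capacitance
# neglected, as in the formulas above), plus Ohm's law.

def impedance(resistance_ohm: float, inductive_reactance_ohm: float) -> float:
    return math.hypot(resistance_ohm, inductive_reactance_ohm)  # sqrt(R^2 + XL^2)

def phase_angle_deg(resistance_ohm: float, inductive_reactance_ohm: float) -> float:
    return math.degrees(math.atan2(inductive_reactance_ohm, resistance_ohm))

def current_amp(voltage_volt: float, impedance_ohm: float) -> float:
    return voltage_volt / impedance_ohm  # Ohm's law, I = V/Z

z = impedance(30.0, 40.0)  # 50 ohm
print(z, phase_angle_deg(30.0, 40.0), current_amp(10.0, z))  # 50.0, ~53.1 deg, 0.2 A
```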

Relative Magnetic Permeability

μr = μ / μ0

Where:
μr = Relative Magnetic Permeability (dimensionless)
μ = Any Given Magnetic Permeability (H/m)
μ0 = Magnetic Permeability of Free Space (H/m), which is 1.257 x 10-6 H/m

Conductivity and Resistivity

σ = 1 / ρ

Where:
σ = Electrical Conductivity (siemens/meter)
ρ = Electrical Resistivity (ohm-meter)

Electrical Conductivity (%IACS), when resistivity is known:

%IACS = 172.41 / ρ

Where:
%IACS = Electrical Conductivity (% IACS)
ρ = Electrical Resistivity (μohm-cm)

Electrical Conductivity (%IACS), when conductivity in S/m or S/cm is known:

%IACS = σS/m / (5.8 x 105)    or    %IACS = σS/cm / 5800

Where:
%IACS = Electrical Conductivity (% IACS)
σS/m = Electrical Conductivity (siemens/meter)
σS/cm = Electrical Conductivity (siemens/cm)
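A small illustration of the conversions (the aluminum values below are typical handbook numbers, used only as examples):

```python
# IACS conversions. The IACS scale defines 100 %IACS as the conductivity
# of annealed copper: 5.8e7 S/m, i.e. a resistivity of 1.7241 uohm-cm.

def iacs_from_resistivity(rho_uohm_cm: float) -> float:
    return 172.41 / rho_uohm_cm

def iacs_from_conductivity(sigma_s_per_m: float) -> float:
    return 100.0 * sigma_s_per_m / 5.8e7

print(iacs_from_resistivity(2.65))    # aluminum, ~65 %IACS
print(iacs_from_conductivity(3.5e7))  # ~60 %IACS
```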

Current Density

Jx = J0 e^(−x/δ)

Where:
Jx = Current Density at depth x (amps/m2)
J0 = Current Density at the Surface (amps/m2)
e = Base of the Natural Log = 2.71828
x = Distance Below Surface
δ = Standard Depth of Penetration

Standard Depth of Penetration, when electrical conductivity σ (S/m) is known:

δ = 1 / √(π f μ σ)

Where:
δ = Standard Depth of Penetration (m)
π = 3.14
f = Test Frequency (Hz)
μ = Magnetic Permeability (H/m) (1.257 x 10-6 H/m for nonmagnetic materials)
σ = Electrical Conductivity (siemens/m)

Standard Depth of Penetration, when electrical conductivity (%IACS) is known:

In mm: δ = 660 / √(f μr σ)
In inches: δ = 26 / √(f μr σ)

Where:
δ = Standard Depth of Penetration (mm or in)
f = Test Frequency (Hz)
μr = Relative Magnetic Permeability (dimensionless)
σ = Electrical Conductivity (%IACS)

Standard Depth of Penetration, when electrical resistivity (μohm-cm) is known:

In mm: δ = 50.3 √(ρ / (f μr))
In inches: δ = 1.98 √(ρ / (f μr))

Where:
δ = Standard Depth of Penetration (mm or in)
ρ = Electrical Resistivity (μohm-cm)
f = Test Frequency (Hz)
μr = Relative Magnetic Permeability (dimensionless)

Standard Depth of Penetration Versus Frequency Chart
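A minimal sketch of the SI-units formula (the copper conductivity below is a typical handbook value, used only as an example):

```python
import math

# Standard depth of penetration (skin depth):
# delta = 1 / sqrt(pi * f * mu * sigma), all in SI units.

MU0 = 1.257e-6  # permeability of free space, H/m (as given above)

def std_depth_of_penetration_m(freq_hz: float, sigma_s_per_m: float,
                               mu_r: float = 1.0) -> float:
    return 1.0 / math.sqrt(math.pi * freq_hz * mu_r * MU0 * sigma_s_per_m)

# Example: copper (~5.8e7 S/m, nonmagnetic) at 60 Hz -> ~8.5 mm
print(std_depth_of_penetration_m(60.0, 5.8e7) * 1000.0)
```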


Eddy Current Field Phase Lag

In radians: β = x / δ
In degrees: β = (x / δ) × 57.3

Where:
β = Phase Lag (radians or degrees)
x = Distance Below Surface (in or mm)
δ = Standard Depth of Penetration (in or mm)

When electrical conductivity (S/m), electrical conductivity (%IACS), or electrical resistivity (μohm-cm) is known, substitute the corresponding standard depth of penetration formula above for δ, keeping x in the same units as δ. For example, with conductivity in S/m:

β (degrees) = 57.3 × x × √(π f μ σ)

Where:
β = Phase Lag (degrees)
x = Distance Below Surface (m)
π = 3.14
f = Test Frequency (Hz)
μ = Magnetic Permeability (H/m)
σ = Electrical Conductivity (siemens/m)
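A small sketch of the phase-lag relation (the δ value below is the copper-at-60-Hz result from the earlier depth-of-penetration example):

```python
import math

# Phase lag of the eddy current field: beta = x / delta radians,
# or (x / delta) * 57.3 degrees.

def phase_lag_deg(depth_x_m: float, std_depth_m: float) -> float:
    return math.degrees(depth_x_m / std_depth_m)

delta = 0.00853                       # copper at 60 Hz, in meters
print(phase_lag_deg(0.00853, delta))  # one standard depth -> ~57.3 degrees
```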
Standard Depth of Penetration and Phase Angle

Depth | Relative Strength of Eddy Currents | Phase Lag
0     | e^0 = 100%                         | 0 rad = 0°
1δ    | e^-1 = 37%                         | 1 rad = 57.3°
2δ    | e^-2 = 14%                         | 2 rad = 114.6°
3δ    | e^-3 = 5%                          | 3 rad = 171.9°
4δ    | e^-4 = 2%                          | 4 rad = 229.2°
5δ    | e^-5 = 0.7%                        | 5 rad = 286.5°

Material Thickness Requirement for Resistivity or Conductivity Measurement

When measuring resistivity or conductivity, the thickness of the material should be at least three times the standard depth of penetration to minimize material thickness effects:

t ≥ 3δ

Where:
t = Material Thickness
δ = Standard Depth of Penetration

Frequency Selection for Thickness Measurement of Thin Materials

Selecting a frequency that produces a standard depth of penetration that exceeds the material thickness by 25% will produce a phase angle of approximately 90° between the liftoff signal and the material thickness change signal.

Frequency Selection for Flaw Detection and Nonconductive Coating Thickness Measurements

Defect Detection: A test frequency that puts the standard depth of penetration at about the expected depth of the defect will provide good phase separation between the defect and liftoff signals.

Nonconductive Coating Thickness Measurement: To minimize effects from the base metal, the highest practical frequency should be used.
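Inverting the depth-of-penetration formula gives a starting-point test frequency for a target inspection depth; a hedged sketch (the defect depth and conductivity below are illustrative assumptions):

```python
import math

# Frequency selection for flaw detection: choose f so that one standard
# depth of penetration falls at the expected defect depth. Inverting
# delta = 1/sqrt(pi*f*mu*sigma) gives f = 1/(pi*mu*sigma*delta^2).

MU0 = 1.257e-6  # H/m

def frequency_for_depth_hz(target_delta_m: float, sigma_s_per_m: float,
                           mu_r: float = 1.0) -> float:
    return 1.0 / (math.pi * mu_r * MU0 * sigma_s_per_m * target_delta_m ** 2)

# Example: defect expected ~1 mm deep in aluminum (~3.5e7 S/m)
print(frequency_for_depth_hz(1e-3, 3.5e7))  # ~7.2 kHz
```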

Composites
A composite is commonly defined as a combination of two or more distinct materials, each of which retains its own distinctive properties, to create a new material with properties that cannot be achieved by any of the components acting alone. Using this definition, it can be determined that a wide range of engineering materials fall into this category. For example, concrete is a composite because it is a mixture of Portland cement and aggregate. Fiberglass sheet is a composite since it is made of glass fibers embedded in a polymer.
Composite materials are said to have two phases. The reinforcing phase is the fibers, sheets, or particles that are embedded in the matrix phase. The reinforcing material and the matrix material can be metal, ceramic, or polymer. Typically, reinforcing materials are strong with low densities while the matrix is usually a ductile, or tough, material.
Some of the common classifications of composites are:
• Reinforced plastics
• Metal-matrix composites
• Ceramic-matrix composites
• Sandwich structures
• Concrete
Composite materials can take many forms, but they can be separated into three categories based on the strengthening mechanism: dispersion strengthened, particle reinforced and fiber reinforced. Dispersion strengthened composites have a fine distribution of secondary particles in the matrix of the material. These particles impede the mechanisms that allow a material to deform. (These mechanisms include dislocation movement and slip, which will be discussed later.) Many metal-matrix composites would fall into the dispersion strengthened composite category. Particle reinforced composites have a large volume fraction of particles dispersed in the matrix, and the load is shared by the particles and the matrix. Most commercial ceramics and many filled polymers are particle-reinforced composites. In fiber-reinforced composites, the fiber is the primary load-bearing component. Fiberglass and carbon fiber composites are examples of fiber-reinforced composites.
If the composite is designed and fabricated correctly, it combines the strength of the reinforcement with the toughness of the matrix to achieve a combination of desirable properties not available in any single conventional material. Some composites also offer the advantage of being tailorable so that properties, such as strength and stiffness, can easily be changed by changing amount or orientation of the reinforcement material. The downside is that such composites are often more expensive than conventional materials.
Ceramics
A ceramic has traditionally been defined as "an inorganic, nonmetallic solid that is prepared from powdered materials, is fabricated into products through the application of heat, and displays such characteristic properties as hardness, strength, low electrical conductivity, and brittleness." The word ceramic comes from the Greek word "keramikos", which means "pottery." Ceramics are typically crystalline in nature and are compounds formed between metallic and nonmetallic elements such as aluminum and oxygen (alumina - Al2O3), calcium and oxygen (calcia - CaO), and silicon and nitrogen (silicon nitride - Si3N4).
Depending on their method of formation, ceramics can be dense or lightweight. Typically, they will demonstrate excellent strength and hardness properties; however, they are often brittle in nature. Ceramics can also be formed to serve as electrically conductive materials or insulators. Some ceramics, like superconductors, also display magnetic properties. They are also more resistant to high temperatures and harsh environments than metals and polymers. Due to ceramic materials' wide range of properties, they are used for a multitude of applications.
The broad categories or segments that make up the ceramic industry can be classified as:
• Structural clay products (brick, sewer pipe, roofing and wall tile, flue linings, etc.)
• Whitewares (dinnerware, floor and wall tile, electrical porcelain, etc.)
• Refractories (brick and monolithic products used in metal, glass, cements, ceramics, energy conversion, petroleum, and chemicals industries)
• Glasses (flat glass (windows), container glass (bottles), pressed and blown glass (dinnerware), glass fibers (home insulation), and advanced/specialty glass (optical fibers))
• Abrasives (natural (garnet, diamond, etc.) and synthetic (silicon carbide, diamond, fused alumina, etc.) abrasives are used for grinding, cutting, polishing, lapping, or pressure blasting of materials)
• Cements (for roads, bridges, buildings, dams, and etc.)
• Advanced ceramics
o Structural (wear parts, bioceramics, cutting tools, and engine components)
o Electrical (capacitors, insulators, substrates, integrated circuit packages, piezoelectrics, magnets and superconductors)
o Coatings (engine components, cutting tools, and industrial wear parts)
o Chemical and environmental (filters, membranes, catalysts, and catalyst supports)
The atoms in ceramic materials are held together by chemical bonds, which will be discussed a bit later. Briefly though, the two most common chemical bonds for ceramic materials are covalent and ionic. Covalent and ionic bonds are much stronger than metallic bonds and, generally speaking, this is why ceramics are brittle and metals are ductile.

Calculators
Eddy Current Testing
Depth of Penetration Calculator
This calculator allows the user to input the test frequency, material conductivity and magnetic permeability to calculate the value of one standard depth of penetration in inches or millimeters. The phase lag is also shown graphically.
Impedance and Ohm's Law Calculator
In this applet, users can see how the current and voltage of a circuit are affected by impedance. The applet allows the user to vary inductance (L), resistance (R), voltage (V) and current (I).
Maxwell-Wien Bridge Calculator
The Maxwell-Wien bridge is often used to measure an unknown inductance in terms of calibrated resistance and capacitance; this calculator performs that calculation.
Ohm's Law Calculator
In this calculator, users are able to determine the voltage, resistance, and current of a circuit according to Ohm's Law.
Phase Angle Calculator
This calculator can be used to determine the phase angle between the resistive component and the inductive and capacitance components of the impedance in an AC circuit.
Resonance Frequency Calculator
This calculator can be used to determine the resonant frequency of an eddy current probe.
Simple Probe Design Calculator
The following applet may be used to calculate the effect of the inner and outer diameters of a simple probe design on the probe's self-inductance. Dimensional units are in millimeters.

Ultrasonic Testing
Acoustic Impedance Calculator
This applet allows the user to compare two materials and "see" how they reflect and transmit sound energy. The red arrow represents the energy of the reflected sound, while the blue arrow represents the energy of the transmitted sound. The reflected energy fraction is the square of the ratio of the impedance difference to the impedance sum of the two materials; a small sketch of this calculation follows.
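A minimal sketch of that energy calculation (the water and steel impedance values are typical handbook numbers, included only as an example):

```python
# Energy reflection at a boundary: R = ((Z2 - Z1) / (Z2 + Z1))^2,
# with the transmitted fraction T = 1 - R.

def reflected_energy_fraction(z1: float, z2: float) -> float:
    return ((z2 - z1) / (z2 + z1)) ** 2

# Example: water (Z ~1.48e5 g/(cm^2*s)) into steel (Z ~45.4e5);
# the common 1e5 factor cancels in the ratio.
r = reflected_energy_fraction(1.48, 45.4)
print(r, 1.0 - r)  # ~0.88 reflected, ~0.12 transmitted
```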
Beam Spread Calculator
This calculator allows the user to calculate the beam spread angle, which represents a falloff of sound pressure (intensity) to one half (-6 dB) at the side of the acoustic axis, as a function of the transducer radius and frequency and of the acoustic velocity in the medium.
Crack Tip Diffraction (Down for Repairs)
The height (a) of a crack can be determined by the tip diffraction method. The principal echo comes from the base of the crack and can easily be found and used to locate the position of the flaw. A second, much weaker echo comes from the tip of the crack and is displaced forward in time from the main echo by delta-t. Once the difference in time (dt) is known, it can be plugged into the equation to arrive at the length of the crack.
Field Zone Calculator
For a piston source transducer of radius (a) and frequency (f), and a liquid or solid medium with sound velocity (V), the applet allows the calculation of the near field/far field transition.
Reflection and Transmission Coefficients
Formulations for the acoustic reflection and transmission coefficients (pressure) are shown in this calculator. Different materials may be selected, or you may alter the material velocity or density to change the acoustic impedance of one or both materials. The red arrow represents reflected sound, while the blue arrow represents transmitted sound.
Snell's Law
Snell's Law is a basic equation that describes how sound is refracted and converted to different wave modes as it passes from one material to another. This calculator can be used to calculate the refracted angles and the critical angles for various material combinations; a small sketch of the calculation follows.
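A minimal sketch, assuming typical handbook velocities for a plastic wedge and steel shear waves:

```python
import math

# Snell's law for sound: sin(theta1)/V1 = sin(theta2)/V2.
# Returns None past the critical angle (total internal reflection).

def refracted_angle_deg(incident_deg: float, v1: float, v2: float):
    s = math.sin(math.radians(incident_deg)) * v2 / v1
    return math.degrees(math.asin(s)) if abs(s) <= 1.0 else None

# Example: 20 deg in a plastic wedge (2730 m/s) into steel shear waves (3240 m/s)
print(refracted_angle_deg(20.0, 2730.0, 3240.0))  # ~23.9 deg
```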
Ultrasonic Measurement of Texture
The following applet may be used to calculate the ODCs W400, W420, and W440 from Lamb wave velocities propagating at 0°, 45° and 90° with respect to the rolling direction. First choose the material. This assigns the correct elastic constants c11, c12, c44, and density for the cubic material being investigated. Next enter the "measured" Lamb wave velocities.
Wavelength, Frequency and Velocity
This applet shows a longitudinal and transverse wave. The direction of wave propagation is from left-to-right and the movement of the lines indicate the direction of particle oscillation. The equation relating ultrasonic wavelength, frequency, and propagation velocity is included at the bottom of the applet in a reorganized form.
Angle Beam Inspection Calculations
When performing an angle beam inspection, it is important to know where the sound beam is encountering an interface and reflecting. The reflection points are sometimes referred to as nodes. The location of the nodes can be obtained by using the trigonometric functions or by using the trig-based formulas which are given below.

• Nodes - surface points where sound waves reflect.
• Skip Distance - the surface distance between two successive nodes.
• Leg 1 (L1) - sound path in the material to the 1st node.
• Leg 2 (L2) - sound path in the material from the 1st to the 2nd node.
• θR - refracted sound wave angle.
Skip Distance and Surface Distance Formulas

Skip Distance = 2 × T × tan(θR)
Surface Distance (to the reflector) = Sound Path × sin(θR)

Leg 1 and Leg 2 Formulas

Leg 1 = Leg 2 = T / cos(θR)

Flaw Depth Formulas

Flaw Depth (1st Leg) = Sound Path × cos(θR)
Flaw Depth (2nd Leg) = 2T − (Sound Path × cos(θR))

Where:
T = Material Thickness
θR = Refracted sound wave angle
Sound Path = Distance traveled along the refracted beam
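A small sketch of these node calculations (the plate thickness and refracted angle below are arbitrary example values):

```python
import math

# Angle-beam node geometry from material thickness and refracted angle.

def angle_beam(thickness: float, refracted_deg: float) -> dict:
    th = math.radians(refracted_deg)
    leg = thickness / math.cos(th)  # sound path to each node (Leg 1 = Leg 2)
    return {
        "skip_distance": 2.0 * thickness * math.tan(th),
        "leg": leg,
        "surface_distance_node1": leg * math.sin(th),
    }

# Example: 25 mm plate, 60 degree refracted shear wave
print(angle_beam(25.0, 60.0))  # skip ~86.6 mm, leg ~50 mm, surface ~43.3 mm
```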


Radiography Formulae

Inverse Square Law

I1 / I2 = (D2)² / (D1)²

Where:
I1 = Intensity 1 at D1
I2 = Intensity 2 at D2
D1 = Distance 1 from source
D2 = Distance 2 from source

Exposure - Distance

E1 / E2 = (D1)² / (D2)²

Where:
E1 = Exposure at D1
E2 = Exposure at D2
D1 = Distance 1 from source
D2 = Distance 2 from source

Reciprocity Law

C1 × T1 = C2 × T2

Where:
C1 = Current 1
C2 = Current 2
T1 = Time 1 at C1
T2 = Time 2 at C2
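A minimal sketch of the two distance relations (all numbers below are arbitrary examples):

```python
# Inverse square law and exposure-distance relation.

def intensity_at(i1: float, d1: float, d2: float) -> float:
    return i1 * (d1 / d2) ** 2  # I2 = I1 * (D1/D2)^2

def exposure_at(e1: float, d1: float, d2: float) -> float:
    return e1 * (d2 / d1) ** 2  # E2 = E1 * (D2/D1)^2, for the same film density

# Doubling the source distance quarters the intensity but needs 4x the exposure
print(intensity_at(100.0, 1.0, 2.0), exposure_at(10.0, 1.0, 2.0))  # 25.0, 40.0
```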

Film Density - Exposure

FD1 − FD2 = G × log10(E1 / E2)

Note: This is an approximation and is only valid for the straight-line portion of the film characteristic curve, where the film's average gradient G is roughly constant.

Where:
E1 = Exposure 1
E2 = Exposure 2
FD1 = Film Density at Exposure 1
FD2 = Film Density at Exposure 2
G = Average gradient (slope) of the film characteristic curve



Geometric Magnification

M = (a + b) / a

Where:
M = Magnification
a = Distance from source to object
b = Distance from object to detector

Geometric Unsharpness

Ug = f × b / a

Where:
Ug = Geometric unsharpness
f = Focal spot size
a = Distance from source to object
b = Distance from object to detector
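A minimal sketch of the two geometric relations (the dimensions below are arbitrary examples):

```python
# Geometric magnification and geometric unsharpness.

def magnification(a: float, b: float) -> float:
    return (a + b) / a  # M = (a + b)/a

def geometric_unsharpness(focal_spot: float, a: float, b: float) -> float:
    return focal_spot * b / a  # Ug = f*b/a

# Example: 3 mm focal spot, 600 mm source-to-object, 20 mm object-to-detector
print(magnification(600.0, 20.0), geometric_unsharpness(3.0, 600.0, 20.0))
# ~1.033 magnification, 0.1 mm unsharpness
```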

Indication Depth

D ≈ t × (SD / SM)

(approximate; assumes the surface marker is placed on the source side of the part and the source-to-film distance is large compared to the part thickness)

Where:
D = Defect depth from the film side of the part
t = Thickness of material
SD = Shift of defect image on the film
SM = Shift of surface marker on the film

Attenuation

Ix = I0 × e^(−μx)

Where:
Ix = Transmitted intensity
I0 = Original intensity
μ = Linear attenuation coefficient
x = Distance in material or material thickness

Half-Value Layer

HVL = 0.693 / μ

Where:
HVL = Half-value layer
μ = Linear attenuation coefficient
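A minimal sketch of the attenuation and half-value-layer relations (the attenuation coefficient below is a made-up value for illustration):

```python
import math

# Exponential attenuation and the half-value layer.

def transmitted_intensity(i0: float, mu_per_cm: float, x_cm: float) -> float:
    return i0 * math.exp(-mu_per_cm * x_cm)  # Ix = I0 * e^(-mu*x)

def half_value_layer_cm(mu_per_cm: float) -> float:
    return math.log(2.0) / mu_per_cm  # HVL = 0.693/mu

mu = 0.5  # hypothetical linear attenuation coefficient, 1/cm
print(transmitted_intensity(100.0, mu, 2.0))  # ~36.8 transmitted through 2 cm
print(half_value_layer_cm(mu))                # ~1.39 cm
```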








Emulsifiers
When removal of the penetrant from a defect due to over-washing of the part is a concern, a post-emulsifiable penetrant system can be used. Post-emulsifiable penetrants require a separate emulsifier to break the penetrant down and make it water-washable. Most penetrant inspection specifications classify penetrant systems into four methods of excess penetrant removal. These are listed below:
1. Method A: Water-Washable
2. Method B: Post-Emulsifiable, Lipophilic
3. Method C: Solvent Removable
4. Method D: Post-Emulsifiable, Hydrophilic
Method C relies on a solvent cleaner to remove the penetrant from the part being inspected. Method A has emulsifiers built into the penetrant liquid that make it possible to remove the excess penetrant with a simple water wash. Method B and Method D penetrants require an additional processing step in which a separate emulsification agent is applied to make the excess penetrant more removable with a water wash. Lipophilic emulsification systems are oil-based materials that are supplied in ready-to-use form. Hydrophilic systems are water-based and supplied as a concentrate that must be diluted with water prior to use.
Lipophilic emulsifiers (Method B) were introduced in the late 1950's and work with both a chemical and mechanical action. After the emulsifier has coated the surface of the object, mechanical action starts to remove some of the excess penetrant as the mixture drains from the part. During the emulsification time, the emulsifier diffuses into the remaining penetrant and the resulting mixture is easily removed with a water spray.
Hydrophilic emulsifiers (Method D) also remove the excess penetrant with mechanical and chemical action but the action is different because no diffusion takes place. Hydrophilic emulsifiers are basically detergents that contain solvents and surfactants. The hydrophilic emulsifier breaks up the penetrant into small quantities and prevents these pieces from recombining or reattaching to the surface of the part. The mechanical action of the rinse water removes the displaced penetrant from the part and causes fresh remover to contact and lift newly exposed penetrant from the surface.
The hydrophilic post-emulsifiable method (Method D) was introduced in the mid 1970's. Since it is more sensitive than the lipophilic post-emulsifiable method, it has rendered the lipophilic method virtually obsolete. The major advantage of hydrophilic emulsifiers is that they are less sensitive to variation in the contact and removal time. While emulsification time should be controlled as closely as possible, a variation of one minute or more in the contact time will have little effect on flaw detectability when a hydrophilic emulsifier is used. However, a variation of as little as 15 to 30 seconds can have a significant effect when a lipophilic system is used.
References:
-- Boisvert, B.W., Hardy, G., Dorgan, J.F., and Selner, R.H., The Fluorescent Penetrant Hydrophilic Remover Process, Materials Evaluation, February 1983, pp. 134-137.
-- Sherwin, A. G., Overremoval Propensities of the Prewash Hydrophilic Emulsifier Fluorescent Penetrant Process, Materials Evaluation, March 1993, pp. 294-299.
Developers
The role of the developer is to pull the trapped penetrant material out of defects and spread it out on the surface of the part so it can be seen by an inspector. The fine developer particles both reflect and refract the incident ultraviolet light, allowing more of it to interact with the penetrant, causing more efficient fluorescence. The developer also allows more light to be emitted through the same mechanism. This is why indications are brighter than the penetrant itself under UV light. Another function that some developers perform is to create a white background so there is a greater degree of contrast between the indication and the surrounding background.
Developer Forms
AMS 2644 and MIL-I-25135 classify developers into six standard forms. These forms are listed below:
1. Form a - Dry Powder
2. Form b - Water Soluble
3. Form c - Water Suspendable
4. Form d - Nonaqueous Type 1 Fluorescent (Solvent Based)
5. Form e - Nonaqueous Type 2 Visible Dye (Solvent Based)
6. Form f - Special Applications
The developer classifications are based on the method by which the developer is applied. The developer can be applied as a dry powder, or dissolved or suspended in a liquid carrier. Each of the developer forms has advantages and disadvantages.
Dry Powder
Dry powder developer is generally considered to be the least sensitive but it is inexpensive to use and easy to apply. Dry developers are white, fluffy powders that can be applied to a thoroughly dry surface in a number of ways. The developer can be applied by dipping parts in a container of developer, or by using a puffer to dust parts with the developer. Parts can also be placed in a dust cabinet where the developer is blown around and allowed to settle on the part. Electrostatic powder spray guns are also available to apply the developer. The goal is to allow the developer to come in contact with the whole inspection area.
Unless the part is electrostatically charged, the powder will only adhere to areas where trapped penetrant has wet the surface of the part. The penetrant wets the surfaces of the developer particles and fills the voids between them, which brings more penetrant to the surface of the part where it can be seen. Since dry powder developers only stick to the areas where penetrant is present, the dry developer does not provide a uniform white background as the other forms of developers do. Having a uniform light background is very important for a visible inspection to be effective, and since dry developers do not provide one, they are seldom used for visible inspections. When a dry developer is used, indications tend to stay bright and sharp since the penetrant has a limited amount of room to spread.
Water Soluble
As the name implies, water soluble developers consist of a group of chemicals that are dissolved in water and form a developer layer when the water is evaporated away. The best method for applying water soluble developers is to spray the solution on the part. The part can be wet or dry. Dipping, pouring, or brushing the solution onto the surface is sometimes used, but these methods are less desirable. Aqueous developers contain wetting agents that cause the solution to function much like a dilute hydrophilic emulsifier and can lead to additional removal of entrapped penetrant. Drying is achieved by placing the wet but well drained part in a recirculating, warm air dryer with the temperature held between 70 and 75°F. If the parts are not dried quickly, the indications will be blurred and indistinct. Properly developed parts will have an even, pale white coating over the entire surface.
Water Suspendable
Water suspendable developers consist of insoluble developer particles suspended in water. They require frequent stirring or agitation to keep the particles from settling out of suspension. Water suspendable developers are applied to parts in the same manner as water soluble developers, and the coated parts must be force dried in the same way. The surface of a part coated with a water suspendable developer will have a slightly translucent white coating.
Nonaqueous
Nonaqueous developers suspend the developer in a volatile solvent and are typically applied with a spray gun. Nonaqueous developers are commonly distributed in aerosol spray cans for portability. The solvent tends to pull penetrant from the indications by solvent action. Since the solvent is highly volatile, forced drying is not required. A nonaqueous developer should be applied to a thoroughly dried part to form a slightly translucent white coating.
Special Applications
Plastic or lacquer developers are special developers that are primarily used when a permanent record of the inspection is required.



Preparation of Part
One of the most critical steps in the penetrant inspection process is preparing the part for inspection. All coatings, such as paints, varnishes, plating, and heavy oxides must be removed to ensure that defects are open to the surface of the part. If the parts have been machined, sanded, or blasted prior to the penetrant inspection, it is possible that a thin layer of metal may have smeared across the surface and closed off defects. It is even possible for metal smearing to occur as a result of cleaning operations such as grit or vapor blasting. This layer of metal smearing must be removed before inspection.
Contaminants
Coatings, such as paint, are much more elastic than metal and will not fracture even though a large defect may be present just below the coating. The part must be thoroughly cleaned, as surface contaminants can prevent the penetrant from entering a defect. Surface contaminants can also lead to a higher level of background noise since the excess penetrant may be more difficult to remove.
Common coatings and contaminants that must be removed include: paint, dirt, flux, scale, varnish, oil, etchant, smut, plating, grease, oxide, wax, decals, machining fluid, rust, and residue from previous penetrant inspections.
Some of these contaminants would obviously prevent penetrant from entering defects, so it is clear they must be removed. However, the impact of other contaminants such as the residue from previous penetrant inspections is less clear, but they can have a disastrous effect on the inspection. Take the link below to review some of the research that has been done to evaluate the effects of contaminants on LPI sensitivity.
Click here to learn more about possible problems with Cleaning Practices.
A good cleaning procedure will remove all contamination from the part and not leave any residue that may interfere with the inspection process. It has been found that some alkaline cleaners can be detrimental to the penetrant inspection process if they have silicates in concentrations above 0.5 percent. Sodium metasilicate, sodium silicate, and related compounds can adhere to the surface of parts and form a coating that prevents penetrant entry into cracks. Researchers in Russia have also found that some domestic soaps and commercial detergents can clog flaw cavities and reduce the wettability of the metal surface, thus reducing the sensitivity of the penetrant. Conrad and Caudill found that media from plastic media blasting was partially responsible for loss of LPI indication strength. Microphotographs of cracks after plastic media blasting showed media entrapment in addition to metal smearing.
It is very important that the material being inspected has not been smeared across its own surface during machining or cleaning operations. It is well recognized that machining, honing, lapping, hand sanding, hand scraping, grit blasting, tumble deburring, and peening operations can cause some materials to smear. It is perhaps less recognized that some cleaning operations, such as steam cleaning, can also cause metal smearing in softer materials. Take the link below to learn more about metal smearing and its effects on LPI.
Click here to learn more about metal smearing.

Selection of a Penetrant Technique
The selection of a liquid penetrant system is not a straightforward task. There are a variety of penetrant systems and developer types that are available for use, and one set of penetrant materials will not work for all applications. Many factors must be considered when selecting the penetrant materials for a particular application. These factors include the sensitivity required, materials cost, number of parts, size of area requiring inspection, and portability.
When sensitivity is the primary consideration for choosing a penetrant system, the first decision that must be made is whether to use fluorescent penetrant or visible dye penetrant. Fluorescent penetrants are generally more capable of producing a detectable indication from a small defect. Also, the human eye is more sensitive to a light indication on a dark background and the eye is naturally drawn to a fluorescent indication.
The graph below presents a series of curves that show the contrast ratio required for a spot of a certain diameter to be seen. The ordinate is the spot diameter, viewed from a distance of one foot. The abscissa is the contrast ratio between the spot brightness and the background brightness. To the left of a contrast ratio of one, the spot is darker than the background (representative of visible dye penetrant testing); to the right of one, the spot is brighter than the background (representative of fluorescent penetrant inspection). Each of the three curves on either side of the contrast ratio of one is for a different background brightness (in foot-Lamberts), but simply consider the general trend of each group of curves. The curves show that for indications larger than 0.076 mm (0.003 inch) in diameter, it does not really matter whether the indication is a dark spot on a light background or a light spot on a dark background. However, when a dark indication on a light background is further reduced in size, it is no longer detectable even though contrast is increased. With a light indication on a dark background, on the other hand, indications down to 0.003 mm (0.0001 inch) were detectable when the contrast between the flaw and the background was high.
From this data, it can be seen why a fluorescent penetrant offers an advantage over a visible penetrant for finding very small defects. Data presented by De Graaf and De Rijk support this statement. They inspected "identical" fatigue cracked specimens using a red dye penetrant and a fluorescent dye penetrant. The fluorescent penetrant found 60 defects while the visible dye was only able to find 39 of the defects.
Ref: De Graaf, E. and De Rijk, P., Comparison Between Reliability, Sensitivity, and Accuracy of Nondestructive Inspection Methods, 13th Symposium on Nondestructive Evaluation Proceedings, San Antonio, TX, published by NTIAC, Southwest Research Institute, San Antonio, TX, April 1981, pp. 311-322.

Ref: Thomas, W.E., An Analytic Approach to Penetrant Performance, 1963 Lester Honor Lecture, Nondestructive Testing, Vol. 21, No. 6, Nov.-Dec. 1963, pp. 354-368.
Under certain conditions, the visible penetrant may be a better choice. When fairly large defects are the subject of the inspection, a high sensitivity system may not be warranted and may result in a large number of irrelevant indications. Visible dye penetrants have also been found to give better results when surface roughness is high or when flaws are located in areas such as weldments.
Since visible dye penetrants do not require a darkened area for the use of an ultraviolet light, visible systems are easier to use in the field. Solvent removable penetrants, when properly applied, can have the highest sensitivity and are very convenient to use. However, they are usually not practical for large area inspections or for high-volume production settings.
Another consideration in the selection of a penetrant system is whether water washable, post-emulsifiable or solvent removable penetrants will be used. Post-emulsifiable systems are designed to reduce the possibility of over-washing, which is one of the factors known to reduce sensitivity. However, these systems add another step, and thus cost, to the inspection process.
Penetrants are evaluated by the US Air Force according to the requirements in MIL-I-25135 and each penetrant system is classified into one of five sensitivity levels. This procedure uses titanium and Inconel specimens with small surface cracks produced in low cycle fatigue bending to classify penetrant systems. The brightness of the indications produced after processing a set of specimens with a particular penetrant system is measured using a photometer. A procedure for producing and evaluating the penetrant qualification specimens was reported on by Moore and Larson at the 1997 ASNT Fall Conference. Most commercially available penetrant materials are listed in the Qualified Products List of MIL-I-25135 according to their type, method and sensitivity level. Visible dye and dual-purpose penetrants are not classified into sensitivity levels as fluorescent penetrants are. The sensitivity of a visible dye penetrant is regarded as level 1 and largely dependent on obtaining good contrast between the indication and the background.
Penetrant Application and Dwell Time
The penetrant material can be applied in a number of different ways, including spraying, brushing, or immersing the parts in a penetrant bath. The method of penetrant application has little effect on the inspection sensitivity but an electrostatic spraying method is reported to produce slightly better results than other methods. Once the part is covered in penetrant it must be allowed to dwell so the penetrant has time to enter any defect present.
There are basically two dwell mode options: immersion-dwell (keeping the part immersed in the penetrant during the dwell period) and drain-dwell (letting the part drain during the dwell period). Prior to a study by Sherwin, the immersion-dwell mode was generally considered to be more sensitive, but it was recognized to be less economical because more penetrant was washed away and emulsifiers were contaminated more rapidly. The reasoning was that the penetrant was more migratory and more likely to fill flaws when kept completely fluid and not allowed to lose volatile constituents by evaporation. However, Sherwin showed that if the specimens are allowed to drain-dwell, the sensitivity is higher because the evaporation increases the dyestuff concentration of the penetrant on the specimen. As pointed out in the section on penetrant materials, sensitivity increases as the dyestuff concentration increases. Sherwin also cautions that the samples being inspected should be placed outside the penetrant tank so that vapors from the tank do not accumulate and dilute the dyestuff concentration of the penetrant on the specimen.
-- Vaerman, J., Fluorescent Penetrant Inspection, Quantified Evolution of the Sensitivity Versus Process Deviations, Proceedings of the 4th European Conference on Nondestructive Testing, Pergamon Press, Maxwell House, Fairview Park, Elmsford, New York, Volume 4, September 1987, pp. 2814-2823.
-- Sherwin, A.G., Establishing Liquid Penetrant Dwell Modes, Materials Evaluation, Vol. 32, No. 3, March 1974, pp. 63-67.
Penetrant Dwell Time
Penetrant dwell time is the total time that the penetrant is in contact with the part surface. The dwell time is important because it allows the penetrant the time necessary to seep or be drawn into a defect. Dwell times are usually recommended by the penetrant producers or required by the specification being followed. The time required to fill a flaw depends on a number of variables which include the following:
• The surface tension of the penetrant.
• The contact angle of the penetrant.
• The dynamic shear viscosity of the penetrant, which can vary with the diameter of the capillary. The viscosity of a penetrant in microcapillary flaws is higher than its viscosity in bulk, which slows the infiltration of the tight flaws.
• The atmospheric pressure at the flaw opening.
• The capillary pressure at the flaw opening.
• The pressure of the gas trapped in the flaw by the penetrant.
• The radius of the flaw or the distance between the flaw walls.
• The density or specific gravity of the penetrant.
• Microstructural properties of the penetrant.
The ideal dwell time is often determined by experimentation and is often very specific to a particular application. For example, AMS 2647A requires that the dwell time for all aircraft and engine parts be at least 20 minutes, while ASTM E1209 only requires a five minute dwell time for parts made of titanium and other heat resistant alloys. Generally, there is no harm in using a longer penetrant dwell time as long as the penetrant is not allowed to dry.
The following tables summarize the dwell time requirements of several commonly used specifications. The information provided below is intended for general reference and no guarantee is made about its correctness. Please consult the specifications for the actual dwell time requirements.








Some Research Results on Dwell Time
An interesting point that Deutsch makes about dwell time is that if an elliptical flaw has a length-to-width ratio of 100, it will take the penetrant nearly ten times longer to fill it than a cylindrical flaw with the same volume.
-- Deutsch, S. A, Preliminary Study of the Fluid Mechanics of Liquid Penetrant Testing, Journal of Research of the National Bureau of Standards, Vol. 84, No. 4, July-August 1979, pp. 287-291.
Lord and Holloway looked for the optimum penetrant dwell time required for detecting several types of defects in titanium. Both a level 2 post-emulsifiable fluorescent penetrant (Magnaflux ZL-2A penetrant and ZE-3 emulsifier) and a level 2 water washable penetrant (Tracer-Tech P-133A penetrant) were included in the study. The effect of the developer was a variable in the study and nonaqueous wet, aqueous wet, and dry developers were included. Specimens were also processed using no developer. The specimen defects included stress corrosion cracks, fatigue cracks and porosity. As expected, the researchers found that the optimal dwell time varied with the type of defect and developer used. The following table summarizes some of the findings.

-- Lord, R. J. and Holloway, J. A., Choice of Penetrant Parameters for Inspecting Titanium, Materials Evaluation, October 1975, pp. 249-256.
Penetrant Removal Process
The penetrant removal procedure must effectively remove the penetrant from the surface of the part without removing an appreciable amount of entrapped penetrant from the defect. If the removal process extracts penetrant from the flaw, the flaw indication will be reduced by a proportional amount. If the penetrant is not effectively removed from the part surface, the contrast between the indication and the background will be reduced. As discussed in the Contrast Sensitivity Section, as the contrast increases, so does visibility of the indication.
Removal Method
Penetrant systems are classified into four methods of excess penetrant removal. These include the following:
1. Method A: Water-Washable
2. Method B: Post-Emulsifiable, Lipophilic
3. Method C: Solvent Removable
4. Method D: Post-Emulsifiable, Hydrophilic
Method C, Solvent Removable, is used primarily for inspecting small localized areas. This method requires hand wiping the surface with a cloth moistened with the solvent remover, and is, therefore, too labor intensive for most production situations. Of the three production penetrant inspection methods, Method A, Water-Washable, is the most economical to apply. Water-washable or self-emulsifiable penetrants contain an emulsifier as an integral part of the formulation. The excess penetrant may be removed from the object surface with a simple water rinse. These materials have the property of forming relatively viscous gels upon contact with water, which results in the formation of gel-like plugs in surface openings. While they are completely soluble in water, given enough contact time, the plugs offer a brief period of protection against rapid wash removal. Thus, water-washable penetrant systems provide ease of use and a high level of sensitivity.
When removal of the penetrant from the defect due to over-washing of the part is a concern, a post-emulsifiable penetrant system can be used. Post-emulsifiable penetrants require a separate emulsifier to break down the penetrant and make it water washable. The part is usually immersed in the emulsifier, but hydrophilic emulsifiers may also be sprayed on the object. Spray application is not recommended for lipophilic emulsifiers because it can result in non-uniform emulsification. Brushing the emulsifier onto the part is not recommended either, because the bristles of the brush may force emulsifier into discontinuities, causing the entrapped penetrant to be removed. The emulsifier is allowed sufficient time to react with the penetrant on the surface of the part, but not enough time to make its way into defects and react with the trapped penetrant. The penetrant that has reacted with the emulsifier is easily cleaned away. Controlling the reaction time is essential when using a post-emulsifiable system. If the emulsification time is too short, an excessive amount of penetrant will be left on the surface, leading to high background levels. If the emulsification time is too long, the emulsifier will react with the penetrant entrapped in discontinuities, possibly depleting the amount needed to form an indication.
The hydrophilic post-emulsifiable method (Method D) is more sensitive than the lipophilic post-emulsifiable method (Method B). Since these methods are generally only used when very high sensitivity is needed, the hydrophilic method has rendered the lipophilic method virtually obsolete. The major advantage of hydrophilic emulsifiers is that they are less sensitive to variation in the contact and removal time. While emulsification time should be controlled as closely as possible, a variation of one minute or more in the contact time will have little effect on flaw detectability when a hydrophilic emulsifier is used. In contrast, a variation of as little as 15 to 30 seconds can have a significant effect when a lipophilic system is used. Using an emulsifier adds a couple of steps to the penetrant process, which slightly increases the cost of an inspection. When using an emulsifier, the process includes the following steps (steps 3 and 4 are the additions):
1. Pre-clean the part.
2. Apply penetrant and allow it to dwell.
3. Pre-rinse to remove the first layer of penetrant.
4. Apply hydrophilic emulsifier and allow contact for the specified time.
5. Rinse to remove excess penetrant.
6. Dry the part.
7. Apply developer and allow the part to develop.
8. Inspect.
Rinse Method and Time for Water-Washable Penetrants
The method used to rinse the excess penetrant from the object surface and the time of the rinse should be controlled so as to prevent over-washing. It is generally recommended that a coarse spray rinse or an air-agitated, immersion wash tank be used. When a spray is being used, it should be directed at a 45° angle to the part surface so as not to force water directly into any discontinuities that may be present. The spray or immersion time should be kept to a minimum through frequent inspections of the remaining background level.
Hand Wiping of Solvent Removable Penetrants
When a solvent removable penetrant is used, care must be taken to remove the penetrant from the part surface while removing as little as possible from the flaw. The first step in this cleaning procedure is to dry wipe the surface of the part in one direction using a white, lint-free cotton rag. A single dry pass in one direction should be used to remove as much penetrant as possible. Next, the surface should be wiped with one pass in one direction using a rag moistened with cleaner. One dry pass followed by one damp pass is all that is recommended. Additional wiping may sometimes be necessary, but keep in mind that with every additional wipe, some of the entrapped penetrant will be removed and inspection sensitivity will be reduced.
To study the effects of the wiping process, Japanese researchers manufactured a test specimen out of acrylic plates that allowed them to view the movement of the penetrant in a narrow cavity. The sample consisted of two pieces of acrylic with two thin sheets of vinyl clamped between them as spacers. The plates were clamped at the corners and all but one of the edges sealed. The unsealed edge acted as the flaw. The clearance between the plates varied from 15 microns (0.0006 inch) at the clamping points to 30 microns (0.0012 inch) at the midpoint between the clamps. The distance between the clamping points was believed to be 30 mm (1.18 inch).
Although the size of the flaw represented by this specimen is large, an interesting observation was made. The researchers found that when the surface of the specimen was wiped with a dry cloth, penetrant was blotted and removed from the flaw at the corner areas where the clearance between the plates was least. When the penetrant at the side areas was removed, penetrant moved horizontally from the center area to the ends of the simulated crack, where capillary forces are stronger. Therefore, across the crack length, the penetrant surface has a parabola-like shape: the liquid is at the surface in the corners but depressed in the center. This shows that each time the cleaning cloth touches the edge of a crack, penetrant is lost from the defect. It also explains why the bleedout of an indication is often largest at the corners of cracks.
Use and Selection of a Developer
The use of a developer is almost always recommended. One study reported that the output from a fluorescent penetrant could be multiplied by up to seven times when a suitable powder developer was used. Another study showed that the use of developer can have a dramatic effect on the probability of detection (POD) of an inspection. When a Haynes Alloy 188 flat panel specimen with a low-cycle fatigue crack was inspected without a developer, a 90% POD was never reached, even with crack lengths as long as 19 mm (0.75 inch). The operator detected only 86 of 284 cracks and had 70 false calls. When a developer was used, a 90% POD was reached at 2 mm (0.077 inch), with the inspector identifying 277 of 311 cracks with no false calls. However, some authors have reported that in special situations the use of a developer may actually reduce sensitivity. These situations primarily occur when large, well defined defects are being inspected on a surface that contains many nonrelevant indications that cause excessive bleedout.
Type of Developer Used and Method of Application
Nonaqueous developers are generally recognized as the most sensitive when properly applied. There is less agreement on the performance of dry and aqueous wet developers, but the aqueous developers are usually considered more sensitive. Aqueous wet developers form a finer matrix of particles that is in closer contact with the part surface. However, if the thickness of the coating becomes too great, defects can be masked. Also, aqueous wet developers can cause leaching and blurring of indications when used with water-washable penetrants. The relative sensitivities of developers and application techniques, as ranked in Volume II of the Nondestructive Testing Handbook, are shown in the table below. There is general industry agreement with this table, but some industry experts feel that water suspendable developers are more sensitive than water-soluble developers.
Sensitivity ranking of developers per the Nondestructive Testing Handbook.
Ranking (highest to lowest)   Developer Form             Method of Application
1                             Nonaqueous, Wet Solvent    Spray
2                             Plastic Film               Spray
3                             Water-Soluble              Spray
4                             Water-Suspendable          Spray
5                             Water-Soluble              Immersion
6                             Water-Suspendable          Immersion
7                             Dry                        Dust Cloud (Electrostatic)
8                             Dry                        Fluidized Bed
9                             Dry                        Dust Cloud (Air Agitation)
10                            Dry                        Immersion (Dip)
The following table lists the main advantages and disadvantages of the various developer types.
Dry
Advantages: Indications tend to remain brighter and more distinct over time; easy to apply.
Disadvantages: Does not form a contrast background so cannot be used with visible systems; difficult to assure the entire part surface has been coated.

Soluble
Advantages: Ease of coating the entire part; a white coating for good contrast can be produced which works well for both visible and fluorescent systems.
Disadvantages: Coating is translucent and provides poor contrast (not recommended for visual systems); indications for water washable systems are dim and blurred.

Suspendable
Advantages: Ease of coating the entire part; indications are bright and sharp; a white coating for good contrast can be produced which works well for both visible and fluorescent systems.
Disadvantages: Indications weaken and become diffused after time.

Nonaqueous
Advantages: Very portable; easy to apply to readily accessible surfaces; a white coating for good contrast can be produced which works well for both visible and fluorescent systems; indications show up rapidly and are well defined; provides the highest sensitivity.
Disadvantages: Difficult to apply evenly to all surfaces; more difficult to clean the part after inspection.

Process Control of Temperature
The temperature of the penetrant materials and the part being inspected can have an effect on the results. Temperatures from 27 to 49°C (80 to 120°F) are reported in the literature to produce optimal results. Many specifications allow testing in the range of 4 to 52°C (40 to 125°F). A tip to remember is that surfaces that can be touched for an extended period of time without burning the skin are generally below 52°C (125°F).
Since the surface tension of most materials decreases as temperature increases, raising the temperature of the penetrant will increase the wetting of the surface and the capillary forces. Of course, the converse is also true, so lowering the temperature will have a negative effect on the flow characteristics. Raising the temperature will also increase the rate of evaporation of the penetrant, which can have a positive or negative effect on sensitivity. The impact will be positive if the evaporation serves to increase the dye concentration of the penetrant trapped in a flaw up to the concentration quenching point and not beyond. Higher temperatures and more rapid evaporation will have a negative effect if the dye concentration exceeds the concentration quenching point, or if the flow characteristics are changed to the point where the penetrant does not readily flow.
The method of processing a hot part was once commonly employed. Parts were either heated or processed hot off the production line. In its day, this served to increase inspection sensitivity by reducing the viscosity of the penetrant. However, the penetrant materials used today have 1/2 to 1/3 the viscosity of the penetrants on the market in the 1960's and 1970's. Heating the part prior to inspection is no longer necessary and no longer recommended.
Quality Control of Penetrant
The quality of a penetrant inspection is highly dependent on the quality of the penetrant materials used. Only products meeting the requirements of an industry specification, such as AMS 2644, should be used. Deterioration of new penetrants primarily results from aging and contamination. Virtually all organic dyes deteriorate over time, resulting in a loss of color or fluorescent response, but deterioration can be slowed with proper storage. When possible, keep the materials in a closed container and protect from freezing and exposure to high heat. Freezing can cause separation to occur and exposure to high temperature for a long period of time can affect the brightness of the dyes.
Contamination can occur during storage and use. Of course, open tank systems are much more susceptible to contamination than are spray systems. Contamination by another liquid will change the surface tension and contact angle of the solution. Water is the most common contaminant. Water-washable penetrants have a definite tolerance limit for water, and above this limit they do not function properly. Cloudiness and viscosity both increase with increasing water content. In self-emulsifiable penetrants, water contamination can produce a gel break or emulsion inversion when the water concentration becomes high enough. The formation of the gel is an important feature during the washing processes, but must be avoided until that stage in the process. Data indicates that the water contamination must be significant (greater than 10%) for gel formation to occur. Most specifications limit water contamination to around 5% to be conservative. Water does not readily mix with the oily solution of lipophilic post-emulsifiable systems and it generally settles to the bottom of the tank. However, the inspection of parts that travel to the bottom of the tank and encounter the water could be adversely affected.
Most other common contaminants, such as cleaning solvents, oils, acids, caustics, and chromates, must be present in significant quantities to affect the performance of the penetrant. Organic contaminants can dilute the dye, absorb the ultraviolet radiation before it reaches the dye, and change the viscosity. Acids, caustics, and chromates cause a loss of fluorescence in water-soluble penetrants.
Regular checks must be performed to ensure that the material performance has not degraded. When the penetrant is first received from the manufacturer, a sample of the fresh solution should be collected and stored as a standard for future comparison. The standard specimen should be stored in a sealed, opaque glass or metal container. Penetrants that are in-use should be compared regularly to the standard specimen to detect changes in color, odor and consistency. When using fluorescent penetrants, a brightness comparison per the requirements of ASTM E 1417 is also often required. This check involves placing a drop of the standard and the in-use penetrants on a piece of Whatman #4 filter paper and making a side by side comparison of the brightness of the two spots under UV light.
Additionally, the water content of water washable penetrants must be checked regularly. Water-based, water washable penetrants are checked with a refractometer. The rejection criteria differ from penetrant to penetrant, so the requirements of the qualifying specification or the manufacturer's instructions must be consulted. Non-water-based, water washable penetrants are checked using the procedure specified in ASTM D95 or ASTM E 1417.
Application of the Penetrant
The application of the penetrant is the step of the process that requires the least amount of control. As long as the surface being inspected receives a generous coating of penetrant, it really doesn't matter how the penetrant is applied. Generally, the application method is an economic or convenience decision.
It is important that the part be thoroughly cleaned and dried. Any contaminants or moisture on the surface of the part or within a flaw can prevent the penetrant material from entering the defect. The part should also be cool to the touch. The recommended temperature range is 4 to 52°C (39 to 125°F).
Quality Control of Wash Temperature and Pressure
The wash temperature, pressure, and time are three parameters that are typically controlled in the penetrant inspection process specification. A coarse spray or an immersion wash tank with air agitation is often used. When the spray method is used, the water pressure is usually limited to 276 kN/m2 (40 psi). The temperature of the water is usually specified as a wide range (e.g., 10 to 38°C (50 to 100°F) in AMS 2647A). A low-pressure, coarse water spray will force less water into flaws to dilute and/or remove trapped penetrant and weaken the indication. The temperature will have an effect on the surface tension of the water, and warmer water will have more wetting action than cold water. Warmer water temperatures may also make emulsifiers and detergents more effective. The wash time should only be as long as necessary to decrease the background to an acceptable level. Frequent visual checks of the part should be made to determine when the part has been adequately rinsed.
Summary of Research on Wash Method Variables
Vaerman evaluated the effect that rinse time had on one high sensitivity water-washable penetrant and two post-emulsifiable penetrants (one medium and one high sensitivity). The evaluation was conducted using Tesco panels with numerous cracks ranging in depth from 5 to 100 microns. A 38% decrease in sensitivity for the water-washable penetrant was seen when the rinse time was increased from 25 to 60 seconds. When the rinse times of the two post-emulsifiable penetrants were increased from 20 to 60 seconds, a loss in sensitivity was seen in both cases, although much reduced from the loss seen with the water-washable system. The relative sensitivity loss over the range of crack depths was 13% for the penetrant with medium sensitivity.
-- Vaerman, J., Fluorescent Penetrant Inspection, Quantified Evolution of the Sensitivity Versus Process Deviations, Proceedings of the 4th European Conference on Non-Destructive Testing, Pergamon Press, Maxwell House, Fairview Park, Elmsford, New York, Volume 4, September 1987, pp. 2814-2823.
In a 1972 paper by N.H. Hyam, the effects of rinse time on the sensitivity of two level 4 water-washable penetrants were examined. It was reported that sensitivity decreased as spray-rinse time increased and that one of the penetrants was more affected by rinse time than the other. Alburger points out that some conventional fluorescent dyes are slightly soluble in water and can be leached out during the washing process.
-- Hyam, N. H., Quantitative Evaluation of Factors Affecting the Sensitivity of Penetrant Systems, Materials Evaluation, Vol. 30, No. 2, February 1972, pp. 31-38.
Brittain evaluated the effect of wash time on a water-washable, level 4 penetrant (Ardrox 970P25) and found that indication brightness decreases rapidly in the first minute of wash and then more slowly. The brightness value dropped from a relative value of 1100 to approximately 500 in the first minute and then continued to decrease nearly linearly to a value of 200 after five minutes of wash. Brittain concluded that wash time for water-washable systems should be kept to a minimum.
-- Brittain, P.I., Assessment of Penetrant Systems by Fluorescent Intensity, Proceedings of the 4th European Conference on Nondestructive Testing, Vol. 4, Published by Pergamon Press, 1988, pp. 2814-2823.
Robinson and Schmidt used a Turner fluorometer to evaluate the variability that some of the processing steps can produce in the brightness of indications. To find out how much effect the wash procedure had on sensitivity, Tesco cracked chrome-plated panels were processed a number of times using the same materials but three different wash methods. The washing methods included spraying the specimens with a handheld nozzle, holding the specimens under a running tap, and using a washing machine that controlled the water pressure, temperature, spray pattern, and wash time. The variation in indication brightness readings between five trials was reported. The variation was 16% for the running tap water, 14% for the handheld spray nozzle, and 4.5% for the machine wash.
Quality Control of Drying Process
The temperature used to dry parts after the application of an aqueous wet developer, or prior to the application of a dry powder or a nonaqueous wet developer, must be controlled to prevent "cooking" of the penetrant in the defect. High drying temperatures can affect penetrants in a couple of ways. First, some penetrants can fade at high temperatures due to dye vaporization or sublimation. Second, high temperatures can cause the penetrant to dry in the flaw, preventing it from migrating to the surface to produce an indication. To prevent harming the penetrant material, the drying temperature should be kept under 71°C (160°F).
The drying should be limited to the minimum length of time necessary to thoroughly dry the component being inspected.
Quality Control of Developer
The function of the developer is very important in a penetrant inspection. It must draw out of the discontinuity a sufficient amount of penetrant to form an indication, and it must spread the penetrant out on the surface to produce a visible indication. In a fluorescent penetrant inspection, the amount of penetrant brought to the surface must exceed the dye's thin film threshold of fluorescence, or the indication will not fluoresce. Additionally, the developer makes fluorescent indications appear brighter than indications produced with the same amount of dye but without the developer.
In order to accomplish these functions, a developer must adhere to the part surface and produce a uniform, highly porous layer with many paths for the penetrant to be drawn up by capillary action. Developers are applied either wet or dry, but the desired end result is always a uniform, highly porous surface layer. Since the quality control requirements for each of the developer types are slightly different, they will be covered individually.
Dry Powder Developer
A dry powder developer should be checked daily to ensure that it is fluffy and not caked. It should be similar to fresh powdered sugar and not granulated like powdered soap. It should also be relatively free of specks of fluorescent penetrant material from previous inspections. This check is performed by spreading a sample of the developer out and examining it under UV light. If there are ten or more fluorescent specks in a 10 cm diameter area, the batch should be discarded.
Apply a light coat of the developer by immersing the test component or dusting the surface. After the development time, excessive powder can be removed by gently blowing on the surface with air not exceeding 35 kPa or 5 psi.
Wet Soluble/Suspendable Developer
Wet soluble developer must be completely dissolved in the water and wet suspendable developer must be thoroughly mixed prior to application. The concentration of powder in the carrier solution must be controlled in these developers. The concentration should be checked at least weekly using a hydrometer to make sure it meets the manufacturer's specification. To check for contamination, the solution should be examined weekly using both white light and UV light. If a scum is present or the solution fluoresces, it should be replaced. Some specifications require that a clean aluminum panel be dipped in the developer, dried, and examined for indications of contamination by fluorescent penetrant materials.
These developers are applied immediately after the final wash. A uniform coating should be applied by spraying, flowing or immersing the component. They should never be applied with a brush. Care should be taken to avoid a heavy accumulation of the developer solution in crevices and recesses. Prolonged contact of the component with the developer solution should be avoided in order to minimize dilution or removal of the penetrant from discontinuities.
Solvent Suspendable (AKA Nonaqueous Wet)
Solvent suspendable developers are typically supplied in a sealed aerosol spray can. Since the developer solution is in a sealed vessel, a direct check of the solution is not possible. However, the way that the developer is dispensed must be monitored. The spray developer should produce a fine, even coating on the surface of the part. Make sure the can is well shaken and apply a thin coating to a test article. If the spray produces spatters or an uneven coating, the can should be discarded.
When applying a solvent suspendable developer, it is up to the inspector to control the thickness of the coating. With a visible penetrant system, the developer coating must be thick enough to provide a white contrasting background but not heavy enough to mask indications. When using a fluorescent penetrant system, a very light coating should be used. The developer should be applied under white light and should appear evenly transparent.
Development Time
Parts should be allowed to develop for a minimum of 10 minutes and no more than 2 hours before inspecting.

Quality Control of Lighting
After a component has been properly processed, it is ready for inspection. While automated vision inspection systems are sometimes used, the focus here will be on inspections performed visually by a human inspector, as this is the dominant method. Proper lighting is of great importance when visually inspecting a surface for a penetrant indication. Obviously, the lighting requirements are different for an inspection conducted using a visible dye penetrant than they are for an inspection conducted using a fluorescent dye penetrant. The lighting requirements for each of these techniques, as well as how light measurements are made, are discussed below.
Lighting for Visible Dye Penetrant Inspections
When using a visible penetrant, the intensity of the white light is of principal importance. Inspections can be conducted using natural lighting or artificial lighting. When using natural lighting, it is important to keep in mind that daylight varies from hour to hour, so inspectors must stay constantly aware of the lighting conditions and make adjustments when needed. To improve uniformity in lighting from one inspection to the next, the use of artificial lighting is recommended. Artificial lighting should be white whenever possible and white flood or halogen lamps are most commonly used. The light intensity is required to be 100 foot-candles at the surface being inspected. It is advisable to choose a white light wattage that will provide sufficient light, but avoid excessive reflected light that could distract from the inspection.
Lighting for Fluorescent Penetrant Inspections
When a fluorescent penetrant is being employed, the ultraviolet (UV) illumination and the visible light level inside the inspection booth are both important. Penetrant dyes are excited by UV light with a wavelength of 365 nm and emit visible light in the green-yellow range between 520 and 580 nm. The source of ultraviolet light is often a mercury arc lamp with a filter. The lamp emits many wavelengths, and a filter is used to remove all but the UV and a small amount of visible light between 310 and 410 nm. Visible light of wavelengths above 410 nm interferes with contrast, and UV emissions below 310 nm include some hazardous wavelengths.
Standards and procedures require verification of lens condition and light intensity. Black lights should never be used with a cracked filter, as the output of both white light and harmful ultraviolet light will be increased. The cleanliness of the filter should also be checked, as a coating of solvent carrier, oils, or other foreign material can reduce the intensity by as much as 50%. The filter should be checked visually and cleaned as necessary before warm-up of the light.
Since fluorescent brightness is linear with respect to ultraviolet excitation, a change in the intensity of the light (from age or damage) or a change in the distance of the light source from the surface being inspected will have a direct impact on the inspection. For UV lights used in component evaluations, the normally accepted intensity is 1000 µW/cm2 measured at 15 inches from the filter face (requirements can vary from 800 to 1200 µW/cm2). The required check should be performed when a new bulb is installed, at startup of the inspection cycle, if a change in intensity is noticed, or after every eight hours of continuous use. Regularly checking the intensity of UV lights is very important because bulbs lose intensity over time. In fact, a bulb that is near the end of its operating life will often have an intensity of only 25% of its original output.
Black light intensity will also be affected by voltage variations. A bulb that produces acceptable intensity at 120 volts will produce significantly less at 110 volts. For this reason, it is important to provide constant voltage to the light. Also, most UV lights must be warmed up prior to use and should be on for at least 15 minutes before an inspection begins.
When performing a fluorescent penetrant inspection, it is important to keep white light to a minimum, as it will significantly reduce the inspector's ability to detect fluorescent indications. Light levels of less than 2 fc are required by most procedures, with some procedures requiring less than 0.5 fc at the inspection surface. Procedures require a check and documentation of ambient white light in the inspection area. When checking black light intensity at 15 inches, a reading of the white light produced by the black light may be required to verify that white light is being removed by the filter.
Light Measurement
Light intensity measurements are made using a radiometer. A radiometer is an instrument that translates light energy into an electrical current. Light striking a silicon photodiode detector causes a charge to build up between internal layers. When an external circuit is connected to the cell, an electrical current is produced. This current is linear with respect to incident light. Some radiometers have the ability to measure both black and white light, while others require a separate sensor for each measurement. Whichever type is used, the sensing area should be clean and free of any materials that could reduce or obstruct light reaching the sensor. Radiometers are relatively unstable instruments and readings often change considerably over time. Therefore, they should be calibrated at least every six months.
Ultraviolet light measurements should be taken using a fixture to maintain a minimum distance of 15 inches from the filter face to the sensor. The sensor should be centered in the light field to obtain and record the highest reading. UV spot lights are often focused, so intensity readings will vary considerably over a small area. White lights are seldom focused and, depending on the wattage, will often produce in excess of 100 fc at 15 inches. Many specifications do not require the white light intensity check to be conducted at a specific distance.
System Performance Check
System performance checks involve processing a test specimen with known defects to determine if the process will reveal discontinuities of the size required. The specimen must be processed following the same procedure used to process production parts. A system performance check is typically required daily, at the reactivation of a system after maintenance or repairs, or any time the system is suspected of being out of control. As with penetrant inspections in general, results are directly dependent on the skill of the operator and, therefore, each operator should process a panel.
The ideal specimen is a production item that has natural defects of the minimum acceptable size. Some specifications delineate the type and size of the defects that must be present in the specimen and detected. Surface finish affects washability, so the check specimen should have the same surface finish as the production parts being processed. If penetrant systems with different sensitivity levels are being used, there should be a separate specimen for each system.
There are some universal test specimens that can be used if a standard part is not available. The most commonly used test specimen is the TAM or PSM panel. These panels are usually made of stainless steel that has been chrome plated on one half and surface finished on the other half to produce the desired roughness. The chrome plated section is impacted from the back side to produce a starburst set of cracks in the chrome. There are five impacted areas to produce a range of crack sizes. Each panel has a characteristic “signature”, and variances in that signature are indications of process variance. Panel patterns as well as brightness are indicators of process consistency or variance.
Care of system performance check specimens is critical. Specimens should be handled carefully to avoid damage. They should be cleaned thoroughly between uses and storage in a solvent is generally recommended. Before processing a specimen, it should be inspected under UV light to make sure that it is clean and not already producing an indication.
Nature of the Defect
The nature of the defect can have a large effect on the sensitivity of a liquid penetrant inspection. Sensitivity is defined as the smallest defect that can be detected with a high degree of reliability. Typically, the crack length at the sample surface is used to define the size of the defect. A survey of any probability-of-detection curve for penetrant inspection will quickly lead one to the conclusion that crack length has a definite effect on sensitivity. However, crack length alone does not determine whether a flaw will be seen or go undetected. The volume of the defect is likely to be the more important feature. The flaw must be of sufficient volume so that enough penetrant will bleed back out to a size that is detectable by the eye or that will satisfy the dimensional thresholds of fluorescence.

Above is an example of a fluorescent penetrant inspection probability of detection (POD) curve from the Nondestructive Evaluation (NDE) Capabilities Data Book. Please note that this curve is specific to one set of inspection conditions and should not be interpreted to apply to other inspection situations.
In general, penetrant inspections are more effective at finding
• small round defects than small linear defects. Small round defects are generally easier to detect for several reasons. First, they are typically volumetric defects that can trap significant amounts of penetrant. Second, round defects fill with penetrant faster than linear defects. One research effort found that an elliptical flaw with a length-to-width ratio of 100 will take nearly 10 times longer to fill with penetrant than a cylindrical flaw with the same volume.
• deeper flaws than shallow flaws. Deeper flaws will trap more penetrant than shallow flaws, and they are less prone to overwashing.
• flaws with a narrow opening at the surface than wide open flaws. Flaws with narrow surface openings are less prone to overwashing.
• flaws on smooth surfaces than on rough surfaces. The surface roughness of the part primarily affects the removability of a penetrant. Rough surfaces tend to trap more penetrant in the various tool marks, scratches, and pits that make up the surface. Removing the penetrant from the surface of the part is more difficult, and a higher level of background fluorescence or overwashing may occur.
• flaws with rough fracture surfaces than smooth fracture surfaces. The surface roughness of the fracture faces is a factor in the speed at which a penetrant enters a defect. In general, the penetrant spreads faster over a surface as the surface roughness increases. It should be noted that a particular penetrant may spread slower than others on a smooth surface but faster than the rest on a rougher surface.
• flaws under tensile or no loading than flaws under compression loading. In a 1987 study at the University College London, the effect of crack closure on detectability was evaluated. Researchers used a four-point bend fixture to place tension and compression loads on specimens that were fabricated to contain fatigue cracks. All cracks were detected with no load and with tensile loads placed on the parts. However, as compressive loads were placed on the parts, the crack length steadily decreased as load increased until a load was reached when the crack was no longer detectable.
References:
Rummel, W.D. and Matzkanin, G. A., Nondestructive Evaluation (NDE) Capabilities Data Book, Published by the Nondestructive Testing Information Analysis Center (NTIAC), NTIAC #DB-95-02, May 1996.
Alburger, J.R., Dimensional Transition Effects in Visible Color and Fluorescent Dye Liquids, Proceedings, 23rd Annual Conference, Instrument Society of America, Vol. 23, Part I, Paper No. 564.
Deutsch, S., A Preliminary Study of the Fluid Mechanics of Liquid Penetrant Testing, Journal of Research of the National Bureau of Standards, Vol. 84, No. 4, July-August 1979, pp. 287-291.
Kauppinen, P. and Sillanpaa, J., Reliability of Surface Inspection Methods, Proceedings of the 12th World Conference on Nondestructive Testing, Amsterdam, Netherlands, Vol. 2, Elsevier Science Publishing, Amsterdam, 1989, pp. 1723-1728.
Vaerman, J. F., Fluorescent Penetrant Inspection Process, Automatic Method for Sensitivity Quantification, Proceedings of 11th World Conference on Nondestructive Testing, Volume III, Las Vegas, NV, November 1985, pp. 1920-1927.
Thomas, W.E., An Analytic Approach to Penetrant Performance, 1963 Lester Honor Lecture, Nondestructive Testing, Vol. 21, No. 6, Nov.-Dec. 1963, pp. 354-368.
Clark, R., Dover, W.D., and Bond, L.J., The Effect of Crack Closure on the Reliability of NDT Predictions of Crack Size, NDT International, Vol. 20, No. 5, Guildford, United Kingdom, Butterworth Scientific Limited, October 1987, pp. 269-275.
Health and Safety Precautions in Liquid Penetrant Inspection
When proper health and safety precautions are followed, liquid penetrant inspection operations can be completed without harm to inspection personnel. However, there are a number of health and safety related issues that must be addressed. Since each inspection operation will have its own unique set of health and safety concerns that must be addressed, only a few of the most common concerns will be discussed here.
Chemical Safety
Whenever chemicals must be handled, certain precautions must be taken as directed by the material safety data sheets (MSDS) for the chemicals. Before working with a chemical of any kind, it is highly recommended that the MSDS be reviewed so that proper chemical safety and hygiene practices can be followed. Some of the penetrant materials are flammable and, therefore, should be used and stored in small quantities. They should only be used in a well ventilated area, and ignition sources should be avoided. Eye protection should always be worn to prevent contact of the chemicals with the eyes. Many of the chemicals used contain detergents and solvents that can cause dermatitis. Gloves and other protective clothing should be worn to limit contact with the chemicals.
Ultraviolet Light Safety
Ultraviolet (UV) light, or "black light" as it is sometimes called, has wavelengths ranging from 180 to 400 nanometers. These wavelengths place UV light in the invisible part of the electromagnetic spectrum between visible light and X-rays. The most familiar source of UV radiation is the sun, and UV in small doses is necessary for certain chemical processes to occur in the body. However, too much exposure can be harmful to the skin and eyes. Excessive UV light exposure can cause painful sunburn, accelerate wrinkling, and increase the risk of skin cancer. UV light can also cause eye inflammation, cataracts, and retinal damage.
Because they are used in close proximity, laboratory devices such as UV lamps deliver UV light at a much higher intensity than the sun and, therefore, can cause injury much more quickly. The greatest threat with UV light exposure is that the individual is generally unaware that damage is occurring. There is usually no pain associated with the injury until several hours after the exposure. Skin and eye damage occurs at wavelengths around 320 nm and shorter, well below the 365 nm wavelength at which penetrants are designed to fluoresce. Therefore, UV lamps sold for use in LPI applications are almost always filtered to remove the harmful UV wavelengths. The lamps do produce radiation at the harmful wavelengths, so it is essential that they be used with the proper filter in place and in good condition.


Review Questions

1. The penetrants that are used to detect the smallest defects:
A. Should only be used on aerospace parts
B. Will also produce the largest amount of irrelevant indications
C. Can only be used on small parts with less than 10 square inches of surface area
D. Should not be used in the field

2. When removal of penetrant from the defect due to overwashing of the part is a concern, which method would most often be used?
A. Fluorescent, water washable method
B. Visible dye, solvent removable method
C. Visible dye, water washable method
D. Fluorescent, post emulsified method

3. Application of the emulsifier should not be performed with a:
A. Spray
B. Brush
C. Dip
D. Both A and B

4. Which of the following is an advantage of LPI?
A. Large areas can be inspected
B. Parts with complex shapes can be inspected
C. It is portable
D. All of the above are advantages

5. Which method of penetrant removal is post emulsified, lipophilic?
A. Method A
B. Method B
C. Method C
D. Method D

6. How often should the UV light intensity check be performed?
A. When a new bulb is installed
B. At startup of the inspection cycle
C. Every 8 hours
D. All of the above

7. The threshold of visual acuity for a person with 20/20 vision is about:
A. 0.003 inches
B. 0.03 inches
C. 0.03 mm
D. 0.3 cm

8. When performing a liquid penetrant test, the surface of the part under inspection should be:
A. Slightly damp
B. Clean and smooth to the touch
C. Free of oil, grease, water and other contaminants
D. All of the above

9. Which emulsifier system is oil based?
A. Hydrophilic emulsifier
B. Lipophilic emulsifier
C. Solvent removable emulsifier
D. All of the above have an oil base

10. Which emulsifier is most sensitive to contact time when applied to the part's surface?
A. Hydrophilic emulsifier
B. Lipophilic emulsifier
C. Fluorescent emulsifier
D. Visible dye emulsifier

11. Which of the following will produce higher sensitivity in a penetrant test?
A. Leaving the part immersed in the penetrant for the entire dwell time
B. Leaving the part immersed in the wet developer for the entire developer time
C. Using a nonaqueous wet developer
D. Allowing the specimen to drain-dwell during its dwell time

12. The performance of a penetrant:
A. Will remain consistent as long as it is stored in a temperature range of 50 to 100°F
B. Will only degrade if the temperature exceeds 120°F
C. Can be affected by contamination and aging
D. Can be adjusted with the dwell time

13. Which of the following should be removed in order to obtain a good penetrant test?
A. Varnish
B. Oxides
C. Plating
D. All of the above

14. Developer times are usually in the range of:
A. 10 minutes
B. 10 seconds
C. 20-30 minutes
D. 5-60 minutes

15. The source of ultraviolet (UV) light is often a:
A. Mercury arc lamp with filter
B. Wave shift arc lamp
C. UV lamp with filter
D. Filter over a minimum 100 watt light bulb

16. Penetrant can be applied by:
A. Dipping
B. Brushing
C. Spraying
D. All of the above

17. Developer is required to:
A. Draw out the penetrant from the discontinuity
B. Provide contrast between the penetrant and the part's background color
C. Increase the penetrant's fluorescence
D. Both A and B

18. Nonaqueous developer is typically applied:
A. By dusting the surface of the part
B. By dipping the part in a mixed batch of developer
C. By splashing the surface with a brush
D. By aerosol spraying

19. When a permanent record is required, which type of developer can be used?
A. Lacquer developer
B. Nonaqueous developer
C. Layered developer
D. Peeling developer

20. Large defects can be hidden under a paint surface because:
A. The paint will fill in the cracks and change the fluorescence of the penetrant
B. Paint is more elastic than metal and will not fracture
C. The penetrant will adhere to the paint, resulting in maximum fluorescence
D. All of the above apply

21. Surface contaminants can lead to:
A. A shift in the fluorescent wavelength to a lower angstrom level
B. The part needing to be redipped in order to produce good results
C. Higher background fluorescence
D. All of the above

22. Which method of penetrant removal is solvent removable?
A. Method A
B. Method B
C. Method C
D. Method D

23. The total time the penetrant is in contact with the part surface is called the:
A. Penetrant dwell time
B. Developer time
C. Emulsifier time
D. Penetrant evaporation time

24. Wet developers are applied:
A. After the part has been dried
B. Immediately after the excess penetrant has been removed from the part's surface
C. After the emulsifier dwell time
D. After the part has been dipped in cleaner/remover

25. The advantage that liquid penetrant testing has over an unaided visual inspection is that:
A. The actual size of the discontinuity can be measured
B. The depth of the defect can be measured
C. The cause of the impact can be seen
D. It makes defects easier for the inspector to see

26. Which emulsifier system is water based?
A. Hydrophilic emulsifier
B. Lipophilic emulsifier
C. Type I emulsifier
D. Form A emulsifier

27. When the excess penetrant is removed from the surface of the part, a coarse water spray should be directed at an angle of:
A. 20 degrees
B. 45 degrees
C. 90 degrees
D. It does not matter at what angle the spray is applied

28. Penetrants are designed to:
A. Perform equally
B. Perform the same no matter who manufactures them
C. Shift in grade and value when the temperature changes
D. Remain fluid so they can be drawn back to the surface of the part

29. A penetrant must:
A. Change viscosity in order to spread over the surface of the part
B. Spread easily over the surface of the material
C. Have a low flash point
D. Be able to change color in order to fluoresce

30. When using a fluorescent penetrant, the brightness comparison is performed in accordance with:
A. ASTM 410
B. API 410
C. ASNT TC-1A
D. ASTM E 1417

31. Dry developer can be applied:
A. To a wet part
B. To a partially wet part, but the part needs to be placed in a dryer immediately
C. To a dry part
D. All of the above

32. Which type of penetrant is a fluorescent penetrant?
A. Type I
B. Type II
C. Type III
D. Type IV

33. Water based, water washable penetrants are checked with a:
A. Centrifuge
B. Refractometer
C. Centrifuge scope
D. Crack block

34. When removing excess penetrant with water, the wash time should be:
A. As long as the specifications allow
B. Based on the temperature of the part
C. As long as necessary to decrease the background to an acceptable level
D. Longer if the water temperature increases

35. Dry developer should be checked ______ in order to ensure it is fluffy and not caked:
A. Daily
B. Weekly
C. Monthly
D. Every 500 parts run through it

36. Which method of penetrant removal is water washable?
A. Method A
B. Method B
C. Method C
D. Method D

37. The water content of water washable penetrants:
A. Should be checked daily
B. Should be checked weekly
C. Should be checked monthly
D. Must be checked regularly

38. Which type of developer is considered the most sensitive?
A. Water suspendible
B. Water soluble
C. Dry powder
D. Nonaqueous wet

39. Water suspendible developers consist of a group of chemicals that are:
A. Saturated in water and experience a chemical shift allowing them to fluoresce on the part's surface
B. Only used on rough, porous surfaces
C. Dissolved in water
D. Insoluble in water but can be suspended in the water after mixing or agitation

40. Which of the following is a disadvantage of LPI?
A. Only surface breaking flaws can be detected
B. Surface finish and roughness can affect inspection sensitivity
C. Post cleaning is required
D. All of the above

41. Developers come in a variety of forms and can be applied by:
A. Dusting
B. Dipping
C. Spraying
D. All of the above

42. When fluorescent penetrant inspection is performed, the penetrant materials are formulated to glow brightly and to give off light at a wavelength:
A. Close to infrared light
B. Close to the wavelength of X-rays
C. That the eye is most sensitive to under dim lighting conditions
D. In the red spectrum

43. Industry and military specifications control a penetrant's:
A. Toxicity
B. Flash point
C. Corrosiveness
D. All of the above

44. Post emulsified penetrants:
A. Are most often used in the field
B. Should never be used in the field
C. Require a separate emulsifier to break the penetrant down and make it water washable
D. Require a separate emulsifier to break down the cleaner and make it solvent removable

45. A good cleaning procedure will:
A. Remove all contamination from the part and not leave any residue that may interfere with the inspection process
B. Remove a small amount of metal from the surface of the part
C. Leave the part slightly fluorescent in order to identify any discontinuities
D. Etch the part slightly, but only if it is made from 4041 aluminum

46. It is well recognized that machining, honing, lapping and hand sanding will result in:
A. A better penetrant inspection
B. A longer dwell time to produce adequate penetration of the penetrant
C. Longer dwell times
D. Metal smearing

47. Which penetrant method is easiest to use in the field?
A. Fluorescent, post-emulsifiable
B. Visible dye, water washable
C. Visible dye, solvent removable
D. Fluorescent, water washable

48. Minimum penetrant dwell times are usually:
A. 1-5 minutes
B. 1-30 minutes
C. 5-60 minutes
D. 60-100 minutes

49. Which method of penetrant removal is post emulsified, hydrophilic?
A. Method A
B. Method B
C. Method C
D. Method D

50. Once the surface of the part has been cleaned properly, penetrant can be applied by:
A. Spraying
B. Brushing
C. Dipping
D. All of the above
Basic Principles of Ultrasonic Testing
Ultrasonic Testing (UT) uses high frequency sound energy to conduct examinations and make measurements. Ultrasonic inspection can be used for flaw detection/evaluation, dimensional measurements, material characterization, and more. To illustrate the general inspection principle, a typical pulse/echo inspection configuration as illustrated below will be used.
A typical UT inspection system consists of several functional units, such as the pulser/receiver, transducer, and display devices. A pulser/receiver is an electronic device that can produce high voltage electrical pulses. Driven by the pulser, the transducer generates high frequency ultrasonic energy. The sound energy is introduced and propagates through the materials in the form of waves. When there is a discontinuity (such as a crack) in the wave path, part of the energy will be reflected back from the flaw surface. The reflected wave signal is transformed into an electrical signal by the transducer and is displayed on a screen. In the applet below, the reflected signal strength is displayed versus the time from signal generation to when an echo was received. Signal travel time can be directly related to the distance that the signal traveled. From the signal, information about the reflector location, size, orientation and other features can sometimes be gained.
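The travel-time relationship lends itself to a quick calculation. Below is a minimal Python sketch (the function name and example values are ours, chosen for illustration): in pulse-echo mode the sound makes a round trip to the reflector and back, so the depth is half of velocity times time.

def reflector_depth_cm(round_trip_time_us, velocity_cm_per_us=0.589):
    # Pulse-echo: the sound travels to the reflector and back, so the
    # one-way depth is half of (velocity x round-trip time). The default
    # velocity is the longitudinal wave speed in 1020 steel (0.589 cm/us).
    return velocity_cm_per_us * round_trip_time_us / 2.0

# An echo received 8 microseconds after the pulse in steel:
print(reflector_depth_cm(8.0))   # ~2.36 cm below the surface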

Ultrasonic Inspection is a very useful and versatile NDT method. Some of the advantages of ultrasonic inspection that are often cited include:
• It is sensitive to both surface and subsurface discontinuities.
• The depth of penetration for flaw detection or measurement is superior to other NDT methods.
• Only single-sided access is needed when the pulse-echo technique is used.
• It is highly accurate in determining reflector position and estimating size and shape.
• Minimal part preparation is required.
• Electronic equipment provides instantaneous results.
• Detailed images can be produced with automated systems.
• It has other uses, such as thickness measurement, in addition to flaw detection.
As with all NDT methods, ultrasonic inspection also has its limitations, which include:
• Surface must be accessible to transmit ultrasound.
• More extensive skill and training are required than with some other methods.
• It normally requires a coupling medium to promote the transfer of sound energy into the test specimen.
• Materials that are rough, irregular in shape, very small, exceptionally thin or not homogeneous are difficult to inspect.
• Cast iron and other coarse grained materials are difficult to inspect due to low sound transmission and high signal noise.
• Linear defects oriented parallel to the sound beam may go undetected.
• Reference standards are required for both equipment calibration and the characterization of flaws.
The above text provides a simplified introduction to the NDT method of ultrasonic testing. However, to effectively perform an inspection using ultrasonics, much more about the method needs to be known. The following pages present information on the science involved in ultrasonic inspection, the equipment that is commonly used, some of the measurement techniques used, as well as other information.

Ultrasonic Formula

Longitudinal Wave Velocity:
VL = sqrt( E (1 - ν) / ( ρ (1 + ν)(1 - 2ν) ) )
Where: VL = longitudinal wave velocity, E = modulus of elasticity, ρ = density, ν = Poisson's ratio.

Shear Wave Velocity:
VS = sqrt( G / ρ ), where the shear modulus G = E / ( 2 (1 + ν) )
Where: VS = shear wave velocity, G = shear modulus.

Wavelength:
λ = V / F
Where: λ = wavelength, V = velocity, F = frequency.

Refraction (Snell's Law):
sin θ1 / sin θ2 = V1 / V2
Where: θ1 = angle of the incident wave, θ2 = angle of the refracted wave, V1 = velocity of the incident wave, V2 = velocity of the refracted wave.

Acoustic Impedance:
Z = ρ V
Where: Z = acoustic impedance, ρ = density, V = velocity.

Reflection Coefficient:
R = ( (Z2 - Z1) / (Z2 + Z1) )^2
Where: R = reflection coefficient, Z1 = acoustic impedance of medium 1, Z2 = acoustic impedance of medium 2.

Near Field:
N = D^2 F / ( 4 V ) = D^2 / ( 4 λ )
Where: N = near field length, D = transducer diameter.

Beam Spread Half Angle:
sin( θ/2 ) = 1.22 V / ( D F )
Where: θ/2 = half angle of beam divergence from the center axis of the transducer.

Decibel (dB) Gain or Loss:
dB = 20 log10( P2 / P1 )
Where: P1 and P2 = the two pressure amplitudes being compared.
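For readers who want to experiment with these relationships numerically, the following self-contained Python sketch restates the formulas above as functions. The function names are ours, chosen for illustration; any consistent unit system works (the comments assume SI).

import math

def longitudinal_velocity(E, density, poisson):
    # VL = sqrt( E(1 - nu) / (rho (1 + nu)(1 - 2 nu)) ); Pa and kg/m^3 give m/s
    return math.sqrt(E * (1 - poisson) /
                     (density * (1 + poisson) * (1 - 2 * poisson)))

def shear_velocity(shear_modulus, density):
    # VS = sqrt(G / rho)
    return math.sqrt(shear_modulus / density)

def wavelength(velocity, frequency):
    # lambda = V / F
    return velocity / frequency

def refracted_angle_deg(incident_deg, v1, v2):
    # Snell's Law: sin(theta1) / sin(theta2) = V1 / V2
    # (raises ValueError beyond the critical angle, where no refracted wave exists)
    return math.degrees(math.asin(math.sin(math.radians(incident_deg)) * v2 / v1))

def acoustic_impedance(density, velocity):
    # Z = rho * V
    return density * velocity

def reflection_coefficient(z1, z2):
    # R = ((Z2 - Z1) / (Z2 + Z1))^2, fraction of incident energy reflected
    return ((z2 - z1) / (z2 + z1)) ** 2

def near_field(diameter, frequency, velocity):
    # N = D^2 F / (4 V)
    return diameter ** 2 * frequency / (4.0 * velocity)

def beam_spread_half_angle_deg(diameter, frequency, velocity):
    # sin(theta/2) = 1.22 V / (D F); valid while the argument stays below 1
    return math.degrees(math.asin(1.22 * velocity / (diameter * frequency)))

def db_difference(p1, p2):
    # dB = 20 log10(P2 / P1)
    return 20.0 * math.log10(p2 / p1)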



History of Ultrasonics

Prior to World War II, sonar, the technique of sending sound waves through water and observing the returning echoes to characterize submerged objects, inspired early ultrasound investigators to explore ways to apply the concept to medical diagnosis. In 1929 and 1935, Sokolov studied the use of ultrasonic waves in detecting metal objects. Mulhauser, in 1931, obtained a patent for using two transducers with ultrasonic waves to detect flaws in solids. Firestone (1940) and Simons (1945) developed pulsed ultrasonic testing using a pulse-echo technique.
Shortly after the close of World War II, researchers in Japan began to explore the medical diagnostic capabilities of ultrasound. The first ultrasonic instruments used an A-mode presentation with blips on an oscilloscope screen. That was followed by a B-mode presentation with a two dimensional, gray scale image.
Japan's work in ultrasound was relatively unknown in the United States and Europe until the 1950s. Researchers then presented their findings on the use of ultrasound to detect gallstones, breast masses, and tumors to the international medical community. Japan was also the first country to apply Doppler ultrasound, an application of ultrasound that detects internal moving objects such as blood coursing through the heart for cardiovascular investigation.
Ultrasound pioneers working in the United States contributed many innovations and important discoveries to the field during the following decades. Researchers learned to use ultrasound to detect potential cancer and to visualize tumors in living subjects and in excised tissue. Real-time imaging, another significant diagnostic tool for physicians, presented ultrasound images directly on the system's CRT screen at the time of scanning. The introduction of spectral Doppler and later color Doppler depicted blood flow in various colors to indicate the speed and direction of the flow.
The United States also produced the earliest hand held "contact" scanner for clinical use, the second generation of B-mode equipment, and the prototype for the first articulated-arm hand held scanner, with 2-D images.
Beginnings of Nondestructive Evaluation (NDE)
Nondestructive testing has been practiced for many decades, with initial rapid developments in instrumentation spurred by the technological advances that occurred during World War II and the subsequent defense effort. During the earlier days, the primary purpose was the detection of defects. As a part of "safe life" design, it was intended that a structure should not develop macroscopic defects during its life, with the detection of such defects being a cause for removal of the component from service. In response to this need, increasingly sophisticated techniques using ultrasonics, eddy currents, x-rays, dye penetrants, magnetic particles, and other forms of interrogating energy emerged.
In the early 1970's, two events occurred which caused a major change in the NDT field. First, improvements in the technology led to the ability to detect small flaws, which caused more parts to be rejected even though the probability of component failure had not changed. However, the discipline of fracture mechanics emerged, which enabled one to predict whether a crack of a given size will fail under a particular load when a material's fracture toughness properties are known. Other laws were developed to predict the growth rate of cracks under cyclic loading (fatigue). With the advent of these tools, it became possible to accept structures containing defects if the sizes of those defects were known. This formed the basis for the new philosophy of "damage tolerant" design. Components having known defects could continue in service as long as it could be established that those defects would not grow to a critical, failure producing size.
A new challenge was thus presented to the nondestructive testing community. Detection was not enough. One needed to also obtain quantitative information about flaw size to serve as an input to fracture mechanics based predictions of remaining life. The need for quantitative information was particularly strong in the defense and nuclear power industries and led to the emergence of quantitative nondestructive evaluation (QNDE) as a new engineering/research discipline. A number of research programs around the world were started, such as the Center for Nondestructive Evaluation at Iowa State University (growing out of a major research effort at the Rockwell International Science Center); the Electric Power Research Institute in Charlotte, North Carolina; the Fraunhofer Institute for Nondestructive Testing in Saarbrücken, Germany; and the Nondestructive Testing Centre in Harwell, England.
Present State of Ultrasonics
Ultrasonic testing (UT) has been practiced for many decades. Initial rapid developments in instrumentation spurred by the technological advances from the 1950's continue today. Through the 1980's and continuing through the present, computers have provided technicians with smaller and more rugged instruments with greater capabilities.
Thickness gauging is an example application where instruments have been refined to make data collection easier and better. Built-in data logging capabilities allow thousands of measurements to be recorded and eliminate the need for a "scribe." Some instruments have the capability to capture waveforms as well as thickness readings. The waveform option allows an operator to view or review the A-scan signal of a thickness measurement long after the completion of an inspection. Also, some instruments are capable of modifying the measurement based on the surface conditions of the material. For example, the signal from a pitted or eroded inner surface of a pipe would be treated differently than that from a smooth surface. This has led to more accurate and repeatable field measurements.
Many ultrasonic flaw detectors have a trigonometric function that allows for fast and accurate location determination of flaws when performing shear wave inspections. Cathode ray tubes, for the most part, have been replaced with LED or LCD screens. These screens, in most cases, are extremely easy to view in a wide range of ambient lighting. Bright or low light working conditions encountered by technicians have little effect on the technician's ability to view the screen. Screens can be adjusted for brightness, contrast, and on some instruments even the color of the screen and signal can be selected. Transducers can be programmed with predetermined instrument settings. The operator only has to connect the transducer and the instrument will set variables such as frequency and probe drive.
Along with computers, motion control and robotics have contributed to the advancement of ultrasonic inspections. Early on, the advantage of a stationary platform was recognized and used in industry. Computers can be programmed to inspect large, complex shaped components, with one or multiple transducers collecting information. Automated systems typically consist of an immersion tank, scanning system, and recording system for a printout of the scan. The immersion tank can be replaced with a squirter system, which allows the sound to be transmitted through a water column. The resultant C-scan provides a plan or top view of the component. Scanning of components is considerably faster than contact hand scanning, and the coupling is much more consistent. The scan information is collected by a computer for evaluation, transmission to a customer, and archiving.
Today, quantitative theories have been developed to describe the interaction of the interrogating fields with flaws. Models incorporating the results have been integrated with solid model descriptions of real-part geometries to simulate practical inspections. Related tools allow NDE to be considered during the design process on an equal footing with other failure-related engineering disciplines. Quantitative descriptions of NDE performance, such as the probability of detection (POD), have become an integral part of statistical risk assessment. Measurement procedures initially developed for metals have been extended to engineered materials such as composites, where anisotropy and inhomogeneity have become important issues. The rapid advances in digitization and computing capabilities have totally changed the faces of many instruments and the type of algorithms that are used in processing the resulting data. High-resolution imaging systems and multiple measurement modalities for characterizing a flaw have emerged. Interest is increasing not only in detecting, characterizing, and sizing defects, but also in characterizing the materials. Goals range from the determination of fundamental microstructural characteristics such as grain size, porosity, and texture (preferred grain orientation), to material properties related to such failure mechanisms as fatigue, creep, and fracture toughness. As technology continues to advance, applications of ultrasound also advance. The high-resolution imaging systems in the laboratory today will be tools of the technician tomorrow.
Future Direction of Ultrasonic Inspection
Looking to the future, those in the field of NDE see an exciting new set of opportunities. The defense and nuclear power industries have played a major role in the emergence of NDE. Increasing global competition has led to dramatic changes in product development and business cycles. At the same time, aging infrastructure, from roads to buildings and aircraft, presents a new set of measurement and monitoring challenges for engineers as well as technicians.
Among the new applications of NDE spawned by these changes is the increased emphasis on the use of NDE to improve the productivity of manufacturing processes. Quantitative nondestructive evaluation (QNDE) increases both the amount of information about failure modes and the speed with which that information can be obtained, and it facilitates the development of in-line measurements for process control.
The phrase "you cannot inspect in quality, you must build it in" exemplifies the industry's focus on avoiding the formation of flaws. Nevertheless, manufacturing flaws will never be completely eliminated and material damage will continue to occur in service, so continual development of flaw detection and characterization techniques is necessary.
Advanced simulation tools designed for inspectability, and their integration into quantitative strategies for life management, will contribute to increasing the number and types of engineering applications of NDE. With growth in engineering applications for NDE, there will be a need to expand the knowledge base of technicians performing the evaluations. Advanced simulation tools used in the design for inspectability may be used to provide technical students with a greater understanding of sound behavior in materials. UTSIM, developed at Iowa State University, provides a glimpse into what may be used in the technical classroom as an interactive laboratory tool.
As globalization continues, companies will seek to develop, with ever increasing frequency, uniform international practices. In the area of NDE, this trend will drive the emphasis on standards, enhanced educational offerings, and simulations that can be communicated electronically. The coming years will be exciting as NDE will continue to emerge as a full-fledged engineering discipline.
Wave Propagation
Ultrasonic testing is based on time-varying deformations or vibrations in materials, which is generally referred to as acoustics. All material substances are composed of atoms, which may be forced into vibrational motion about their equilibrium positions. Many different patterns of vibrational motion exist at the atomic level; however, most are irrelevant to acoustics and ultrasonic testing. Acoustics is focused on particles that contain many atoms that move in unison to produce a mechanical wave. When a material is not stressed in tension or compression beyond its elastic limit, its individual particles perform elastic oscillations. When the particles of a medium are displaced from their equilibrium positions, internal (electrostatic) restoration forces arise. It is these elastic restoring forces between particles, combined with inertia of the particles, that lead to the oscillatory motions of the medium.
In solids, sound waves can propagate in four principal modes that are based on the way the particles oscillate. Sound can propagate as longitudinal waves, shear waves, surface waves, and in thin materials as plate waves. Longitudinal and shear waves are the two modes of propagation most widely used in ultrasonic testing. The particle movement responsible for the propagation of longitudinal and shear waves is illustrated below.

In longitudinal waves, the oscillations occur in the longitudinal direction, or the direction of wave propagation. Since compressional and dilational forces are active in these waves, they are also called pressure or compressional waves. They are also sometimes called density waves because their particle density fluctuates as they move. Compression waves can be generated in liquids as well as solids because the energy travels through the atomic structure by a series of compression and expansion (rarefaction) movements.

In the transverse or shear wave, the particles oscillate at a right angle, or transverse, to the direction of propagation. Shear waves require an acoustically solid material for effective propagation and, therefore, are not effectively propagated in materials such as liquids or gases. Shear waves are relatively weak when compared to longitudinal waves. In fact, shear waves are usually generated in materials using some of the energy from longitudinal waves.
Modes of Sound Wave Propagation
In air, sound travels by the compression and rarefaction of air molecules in the direction of travel. However, in solids, molecules can support vibrations in other directions, hence, a number of different types of sound waves are possible. Waves can be characterized in space by oscillatory patterns that are capable of maintaining their shape and propagating in a stable manner. The propagation of waves is often described in terms of what are called “wave modes.”
As mentioned previously, longitudinal and transverse (shear) waves are most often used in ultrasonic inspection. However, at surfaces and interfaces, various types of elliptical or complex vibrations of the particles make other waves possible. Some of these wave modes such as Rayleigh and Lamb waves are also useful for ultrasonic inspection.
The table below summarizes many, but not all, of the wave modes possible in solids.
Wave types in solids and their particle vibrations:
• Longitudinal: parallel to wave direction
• Transverse (Shear): perpendicular to wave direction
• Surface (Rayleigh): elliptical orbit, symmetrical mode
• Plate Wave (Lamb): component perpendicular to surface (extensional wave)
• Plate Wave (Love): parallel to plane layer, perpendicular to wave direction
• Stoneley (Leaky Rayleigh Waves): wave guided along interface
• Sezawa: antisymmetric mode
Longitudinal and transverse waves were discussed on the previous page, so let's touch on surface and plate waves here.
Surface (or Rayleigh) waves travel along the surface of a relatively thick solid material, penetrating to a depth of one wavelength. Surface waves combine both a longitudinal and transverse motion to create an elliptical orbit motion, as shown in the image and animation below. The major axis of the ellipse is perpendicular to the surface of the solid. As the depth of an individual atom from the surface increases, the width of its elliptical motion decreases. Surface waves are generated when a longitudinal wave intersects a surface near the second critical angle, and they travel at a velocity between 0.87 and 0.95 of the shear wave velocity. Rayleigh waves are useful because they are very sensitive to surface defects (and other surface features) and they follow the surface around curves. Because of this, Rayleigh waves can be used to inspect areas that other waves might have difficulty reaching.

Plate waves are similar to surface waves except they can only be generated in materials a few wavelengths thick. Lamb waves are the most commonly used plate waves in NDT. Lamb waves are complex vibrational waves that propagate parallel to the test surface throughout the thickness of the material. Propagation of Lamb waves depends on the density and the elastic material properties of a component. They are also influenced a great deal by the test frequency and material thickness. Lamb waves are generated at an incident angle in which the parallel component of the velocity of the wave in the source is equal to the velocity of the wave in the test material. Lamb waves will travel several meters in steel and so are useful to scan plate, wire, and tubes.
With Lamb waves, a number of modes of particle vibration are possible, but the two most common are symmetrical and asymmetrical. The complex motion of the particles is similar to the elliptical orbits for surface waves. Symmetrical Lamb waves move in a symmetrical fashion about the median plane of the plate. This is sometimes called the extensional mode because the wave is “stretching and compressing” the plate in the wave motion direction. Wave motion in the symmetrical mode is most efficiently produced when the exciting force is parallel to the plate. The asymmetrical Lamb wave mode is often called the “flexural mode” because a large portion of the motion moves in a normal direction to the plate, and a little motion occurs in the direction parallel to the plate. In this mode, the body of the plate bends as the two surfaces move in the same direction.
The generation of waves using both piezoelectric transducers and electromagnetic acoustic transducers (EMATs) is discussed in later sections.

Properties of Acoustic Plane Wave
Wavelength, Frequency and Velocity
Among the properties of waves propagating in isotropic solid materials are wavelength, frequency, and velocity. The wavelength is directly proportional to the velocity of the wave and inversely proportional to the frequency of the wave. This relationship is shown by the following equation.
λ = V / F
The applet below shows a longitudinal and transverse wave. The direction of wave propagation is from left to right, and the movement of the lines indicates the direction of particle oscillation. The equation relating ultrasonic wavelength, frequency, and propagation velocity is included at the bottom of the applet in a reorganized form. The values for the wavelength, frequency, and wave velocity can be adjusted in the dialog boxes to see their effects on the wave. Note that the frequency value must be kept between 0.1 and 1 MHz (one million cycles per second) and the wave velocity must be between 0.1 and 0.7 cm/µs.

As can be noted from the equation, a change in frequency will result in a change in wavelength. Change the frequency in the applet and view the resultant wavelength. At a frequency of 0.2 MHz and a material velocity of 0.585 cm/µs (longitudinal wave in steel), note the resulting wavelength. Adjust the material velocity to 0.480 cm/µs (longitudinal wave in cast iron) and note the resulting wavelength. Increase the frequency to 0.8 MHz and note the shortened wavelength in each material.
In ultrasonic testing, the shorter wavelength resulting from an increase in frequency will usually provide for the detection of smaller discontinuities. This will be discussed more in following sections.
Wavelength and Defect Detection
In ultrasonic testing, the inspector must make a decision about the frequency of the transducer that will be used. As we learned on the previous page, changing the frequency when the sound velocity is fixed will result in a change in the wavelength of the sound. The wavelength of the ultrasound used has a significant effect on the probability of detecting a discontinuity. A general rule of thumb is that a discontinuity must be larger than one-half the wavelength to stand a reasonable chance of being detected.
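As a rough numeric illustration of this rule of thumb (the 5 MHz transducer frequency is assumed here purely for the example), using the longitudinal velocity of 1020 steel listed later in this section:

# Half-wavelength rule of thumb in 1020 steel (longitudinal wave, ~0.589 cm/us)
# with an assumed 5 MHz transducer.
velocity = 0.589                      # cm/us
frequency = 5.0                       # MHz (cycles per microsecond)
wavelength = velocity / frequency     # ~0.118 cm
print(wavelength / 2.0)               # ~0.059 cm: rough lower bound on detectable flaw size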
Sensitivity and resolution are two terms that are often used in ultrasonic inspection to describe a technique's ability to locate flaws. Sensitivity is the ability to locate small discontinuities. Sensitivity generally increases with higher frequency (shorter wavelengths). Resolution is the ability of the system to locate discontinuities that are close together within the material or located near the part surface. Resolution also generally increases as the frequency increases.
The wave frequency can also affect the capability of an inspection in adverse ways. Therefore, selecting the optimal inspection frequency often involves maintaining a balance between the favorable and unfavorable results of the selection. Before selecting an inspection frequency, the material's grain structure and thickness, and the discontinuity's type, size, and probable location should be considered. As frequency increases, sound tends to scatter from large or coarse grain structure and from small imperfections within a material. Cast materials often have coarse grains and other sound scatterers that require lower frequencies to be used for evaluations of these products. Wrought and forged products with directional and refined grain structure can usually be inspected with higher frequency transducers.
Since more things in a material are likely to scatter a portion of the sound energy at higher frequencies, the penetrating power (or the maximum depth in a material that flaws can be located) is also reduced. Frequency also has an effect on the shape of the ultrasonic beam. Beam spread, or the divergence of the beam from the center axis of the transducer, and how it is affected by frequency will be discussed later.
It should be mentioned, so as not to be misleading, that a number of other variables will also affect the ability of ultrasound to locate defects. These include the pulse length, type and voltage applied to the crystal, properties of the crystal, backing material, transducer diameter, and the receiver circuitry of the instrument. These are discussed in more detail in the material on signal-to-noise ratio.
Sound Propagation in Elastic Materials
In the previous pages, it was pointed out that sound waves propagate due to the vibrations or oscillatory motions of particles within a material. An ultrasonic wave may be visualized as an infinite number of oscillating masses or particles connected by means of elastic springs. Each individual particle is influenced by the motion of its nearest neighbor and both inertial and elastic restoring forces act upon each particle.
A mass on a spring has a single resonant frequency determined by its spring constant k and its mass m. The spring constant is the restoring force of a spring per unit of length. Within the elastic limit of any material, there is a linear relationship between the displacement of a particle and the force attempting to restore the particle to its equilibrium position. This linear dependency is described by Hooke's Law.
In terms of the spring model, Hooke's Law says that the restoring force due to a spring is proportional to the length that the spring is stretched, and acts in the opposite direction. Mathematically, Hooke's Law is written as F = -kx, where F is the force, k is the spring constant, and x is the amount of particle displacement. Hooke's Law is represented graphically at the right. Please note that the spring applies a force to the particle that is equal and opposite to the force pulling down on the particle.
The Speed of Sound
Hooke's Law, when used along with Newton's Second Law, can explain a few things about the speed of sound. The speed of sound within a material is a function of the properties of the material and is independent of the amplitude of the sound wave. Newton's Second Law says that the force applied to a particle will be balanced by the particle's mass and the acceleration of the particle. Mathematically, Newton's Second Law is written as F = ma. Hooke's Law then says that this force will be balanced by a force in the opposite direction that is dependent on the amount of displacement and the spring constant (F = -kx). Therefore, since the applied force and the restoring force are equal, ma = -kx can be written. The negative sign indicates that the force is in the opposite direction.
Since the mass m and the spring constant k are constants for any given material, it can be seen that the acceleration a and the displacement x are the only variables. It can also be seen that they are directly proportional. For instance, if the displacement of the particle increases, so does its acceleration. It turns out that the time that it takes a particle to move and return to its equilibrium position is independent of the force applied. So, within a given material, sound always travels at the same speed no matter how much force is applied when other variables, such as temperature, are held constant.
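This amplitude independence can be made concrete with the mass-on-a-spring model: solving ma = -kx gives simple harmonic motion with period T = 2π·sqrt(m/k), an expression that contains no displacement term. A small sketch (the example values are assumed, for illustration only):

import math

def period_s(mass_kg, spring_constant_n_per_m):
    # Period of a mass on a spring: T = 2*pi*sqrt(m/k).
    # Displacement does not appear, so the period, and hence the wave
    # speed in the spring-mass picture, is independent of amplitude.
    return 2.0 * math.pi * math.sqrt(mass_kg / spring_constant_n_per_m)

print(period_s(0.1, 250.0))   # ~0.126 s, whether displaced 1 mm or 10 mm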
What properties of a material affect its speed of sound?
Of course, sound does travel at different speeds in different materials. This is because the mass of the atomic particles and the spring constants are different for different materials. The mass of the particles is related to the density of the material, and the spring constant is related to the elastic constants of a material. The general relationship between the speed of sound in a solid and its density and elastic constants is given by the following equation:
V = sqrt( Cij / ρ )
Where V is the speed of sound, Cij is the elastic constant, and ρ is the material density. This equation may take a number of different forms depending on the type of wave (longitudinal or shear) and which of the elastic constants are used. The typical elastic constants of a material include:
• Young's Modulus, E: a proportionality constant between uniaxial stress and strain.
• Poisson's Ratio, ν: the ratio of radial strain to axial strain.
• Bulk Modulus, K: a measure of the incompressibility of a body subjected to hydrostatic pressure.
• Shear Modulus, G: also called rigidity, a measure of a substance's resistance to shear.
• Lamé's Constants, λ and μ: material constants that are derived from Young's Modulus and Poisson's Ratio.
When calculating the velocity of a longitudinal wave, Young's Modulus and Poisson's Ratio are commonly used. When calculating the velocity of a shear wave, the shear modulus is used. It is often most convenient to make the calculations using Lame's Constants, which are derived from Young's Modulus and Poisson's Ratio.
It must also be mentioned that the subscript ij attached to C in the above equation is used to indicate the directionality of the elastic constants with respect to the wave type and direction of wave travel. In isotropic materials, the elastic constants are the same for all directions within the material. However, most materials are anisotropic and the elastic constants differ with each direction. For example, in a piece of rolled aluminum plate, the grains are elongated in one direction and compressed in the others and the elastic constants for the longitudinal direction are different than those for the transverse or short transverse directions.
Examples of approximate compressional sound velocities in materials are:
• Aluminum - 0.632 cm/microsecond
• 1020 steel - 0.589 cm/microsecond
• Cast iron - 0.480 cm/microsecond.
Examples of approximate shear sound velocities in materials are:
• Aluminum - 0.313 cm/microsecond
• 1020 steel - 0.324 cm/microsecond
• Cast iron - 0.240 cm/microsecond.
When comparing compressional and shear velocities, it can be noted that shear velocity is approximately one half that of compressional velocity. The sound velocities for a variety of materials can be found in the ultrasonic properties tables in the general resources section of this site.
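As a numeric cross-check of the relationships above, typical handbook constants for aluminum (E of roughly 69 GPa, ν of roughly 0.33, ρ of roughly 2700 kg/m³; these values are assumed here for illustration) reproduce velocities close to those listed:

import math

E, nu, rho = 69e9, 0.33, 2700.0   # assumed handbook values for aluminum

# Longitudinal velocity from Young's Modulus and Poisson's Ratio.
v_long = math.sqrt(E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu)))

# Shear velocity from the shear modulus G = E / (2(1 + nu)).
G = E / (2.0 * (1.0 + nu))
v_shear = math.sqrt(G / rho)

print(v_long * 1e-4)    # ~0.62 cm/microsecond (listed above: 0.632)
print(v_shear * 1e-4)   # ~0.31 cm/microsecond (listed above: 0.313)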
Attenuation of Sound Waves
When sound travels through a medium, its intensity diminishes with distance. In idealized materials, sound pressure (signal amplitude) is only reduced by the spreading of the wave. Natural materials, however, all produce an effect which further weakens the sound. This further weakening results from scattering and absorption. Scattering is the reflection of the sound in directions other than its original direction of propagation. Absorption is the conversion of the sound energy to other forms of energy. The combined effect of scattering and absorption is called attenuation. Ultrasonic attenuation is the decay rate of the wave as it propagates through material.
Attenuation of sound within a material itself is often not of intrinsic interest. However, natural properties and loading conditions can be related to attenuation. Attenuation often serves as a measurement tool that leads to the formation of theories to explain the physical or chemical phenomena that decrease the ultrasonic intensity.
The amplitude change of a decaying plane wave can be expressed as:
A = A0 e^(-αz)
In this expression, A0 is the unattenuated amplitude of the propagating wave at some location. The amplitude A is the reduced amplitude after the wave has traveled a distance z from that initial location. The quantity α is the attenuation coefficient of the wave traveling in the z-direction. The dimensions of α are nepers/length, where a neper is a dimensionless quantity. The term e is the base of the natural logarithm (Napier's constant), which is equal to approximately 2.71828.
An attenuation value in nepers per meter (Np/m) can be converted to decibels per meter by dividing by 0.1151. The decibel is the more common unit when relating the amplitudes of two signals.
Attenuation is generally proportional to the square of sound frequency. Quoted values of attenuation are often given for a single frequency, or an attenuation value averaged over many frequencies may be given. Also, the actual value of the attenuation coefficient for a given material is highly dependent on the way in which the material was manufactured. Thus, quoted values of attenuation only give a rough indication of the attenuation and should not be automatically trusted. Generally, a reliable value of attenuation can only be obtained by determining the attenuation experimentally for the particular material being used.
Attenuation can be determined by evaluating the multiple backwall reflections seen in a typical A-scan display like the one shown in the image at the top of the page. The number of decibels between two adjacent signals is measured, and this value is divided by the time interval between them. This calculation produces an attenuation coefficient in decibels per unit time, Ut. This value can be converted to nepers/length by the following equation.
α = 0.1151 ( Ut / v )
Where v is the velocity of sound in meters per second and Ut is in decibels per second.
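The decay law and the unit conversions above can be captured in a few lines of Python (a sketch; the example numbers are assumed, for illustration only):

import math

def attenuated_amplitude(a0, alpha_np_per_m, z_m):
    # A = A0 * e^(-alpha * z), with alpha in nepers per meter.
    return a0 * math.exp(-alpha_np_per_m * z_m)

def np_per_m_to_db_per_m(alpha_np_per_m):
    # Nepers/length divided by 0.1151 gives decibels/length.
    return alpha_np_per_m / 0.1151

def alpha_from_backwall_echoes(delta_db, delta_time_s, velocity_m_per_s):
    # Ut is the measured decibel drop per unit time between two adjacent
    # backwall signals; alpha = 0.1151 * Ut / v converts it to Np/m.
    ut = delta_db / delta_time_s
    return 0.1151 * ut / velocity_m_per_s

# Example: 2 dB between echoes 20 microseconds apart in steel (5890 m/s)
print(alpha_from_backwall_echoes(2.0, 20e-6, 5890.0))   # ~1.95 Np/m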
Acoustic Impedance
Sound travels through materials under the influence of sound pressure. Because molecules or atoms of a solid are bound elastically to one another, the excess pressure results in a wave propagating through the solid.
The acoustic impedance (Z) of a material is defined as the product of its density (ρ) and acoustic velocity (V).
Z = ρV
Acoustic impedance is important in
1. the determination of acoustic transmission and reflection at the boundary of two materials having different acoustic impedances.
2. the design of ultrasonic transducers.
3. assessing absorption of sound in a medium.
The following applet can be used to calculate the acoustic impedance for any material, so long as its density (ρ) and acoustic velocity (V) are known. The applet also shows how a change in the impedance affects the amount of acoustic energy that is reflected and transmitted. The values of the reflected and transmitted energy are the fractional amounts of the total energy incident on the interface. Note that the fractional amount of transmitted sound energy plus the fractional amount of reflected sound energy equals one. The calculation used to arrive at these values will be discussed on the next page.
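Since the applet is not reproduced here, a quick numeric example using round handbook values (assumed for illustration): water (ρ of about 1000 kg/m³, V of about 1480 m/s) and steel (ρ of about 7800 kg/m³, V of about 5890 m/s).

# Acoustic impedance Z = rho * V, in kg/(m^2 s) (rayls).
z_water = 1000.0 * 1480.0    # ~1.48e6 rayls
z_steel = 7800.0 * 5890.0    # ~4.59e7 rayls
print(z_water, z_steel)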


Reflection and Transmission Coefficients (Pressure)
Ultrasonic waves are reflected at boundaries where there is a difference in acoustic impedances (Z) of the materials on each side of the boundary. (See preceding page for more information on acoustic impedance.) This difference in Z is commonly referred to as the impedance mismatch. The greater the impedance mismatch, the greater the percentage of energy that will be reflected at the interface or boundary between one medium and another.
The fraction of the incident wave intensity that is reflected can be derived because particle velocity and local particle pressures must be continuous across the boundary. When the acoustic impedances of the materials on both sides of the boundary are known, the fraction of the incident wave intensity that is reflected can be calculated with the equation below. The value produced is known as the reflection coefficient. Multiplying the reflection coefficient by 100 yields the amount of energy reflected as a percentage of the original energy.
R = ( (Z2 - Z1) / (Z2 + Z1) )^2
Since the amount of reflected energy plus the transmitted energy must equal the total amount of incident energy, the transmission coefficient is calculated by simply subtracting the reflection coefficient from one.
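Continuing the water/steel example from the acoustic impedance page (the impedance values below are the assumed round-number results from that sketch), the energy split at the interface works out to roughly 88% reflected and 12% transmitted:

z_water, z_steel = 1.48e6, 4.59e7     # acoustic impedances, kg/(m^2 s)

# Energy reflection coefficient: R = ((Z2 - Z1) / (Z2 + Z1))^2
R = ((z_steel - z_water) / (z_steel + z_water)) ** 2
T = 1.0 - R                           # the remaining energy is transmitted

print(R, T)   # ~0.88 reflected, ~0.12 transmitted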
Formulations for acoustic reflection and transmission coefficients (pressure) are shown in the interactive applet below. Different materials may be selected, or the material velocity and density may be altered to change the acoustic impedance of one or both materials. The red arrow represents reflected sound and the blue arrow represents transmitted sound.
Refraction and Snell's Law
When an ultrasonic wave passes through an interface between two materials at an oblique angle, and the materials have different indices of refraction, both reflected and refracted waves are produced. This also occurs with light, which is why objects seen across an interface appear to be shifted relative to where they really are. For example, if you look straight down at an object at the bottom of a glass of water, it looks closer than it really is. A good way to visualize how light and sound refract is to shine a flashlight into a bowl of slightly cloudy water, noting the refraction angle with respect to the incident angle.
Refraction takes place at an interface due to the different velocities of the acoustic waves within the two materials. The velocity of sound in each material is determined by the material properties (elastic modulus and density) for that material. In the animation below, a series of plane waves are shown traveling in one material and entering a second material that has a higher acoustic velocity. Therefore, when the wave encounters the interface between these two materials, the portion of the wave in the second material is moving faster than the portion of the wave in the first material. It can be seen that this causes the wave to bend.
Snell's Law describes the relationship between the angles and the velocities of the waves. Snell's Law equates the ratio of material velocities V1 and V2 to the ratio of the sines of the incident (θ1) and refracted (θ2) angles, as shown in the following equation.
sin θ1 / sin θ2 = VL1 / VL2
Where:
VL1 is the longitudinal wave velocity in material 1.
VL2 is the longitudinal wave velocity in material 2.

Note that in the diagram, there is a reflected longitudinal wave (VL1') shown. This wave is reflected at the same angle as the incident wave because the two waves are traveling in the same material, and hence have the same velocities. This reflected wave is unimportant in our explanation of Snell's Law, but it should be remembered that some of the wave energy is reflected at the interface. In the applet below, only the incident and refracted longitudinal waves are shown. The angle of either wave can be adjusted by clicking and dragging the mouse in the region of the arrows. Values for the angles or acoustic velocities can also be entered in the dialog boxes so that the applet can be used as a Snell's Law calculator.

When a longitudinal wave moves from a slower to a faster material, there is an incident angle that makes the angle of refraction for the wave 90°. This is known as the first critical angle. The first critical angle can be found from Snell's Law by putting in an angle of 90° for the angle of the refracted ray. At the critical angle of incidence, much of the acoustic energy is in the form of an inhomogeneous compression wave, which travels along the interface and decays exponentially with depth from the interface. This wave is sometimes referred to as a "creep wave." Because of their inhomogeneous nature and the fact that they decay rapidly, creep waves are not used as extensively as Rayleigh surface waves in NDT. However, creep waves are sometimes more useful than Rayleigh waves because they suffer less from surface irregularities and coarse material microstructure due to their longer wavelengths.
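A small Snell's Law sketch for an assumed water-to-steel interface (longitudinal velocities of roughly 1480 and 5890 m/s, typical handbook values) shows both the refracted angle and the first critical angle:

import math

v1, v2 = 1480.0, 5890.0    # assumed longitudinal velocities: water, steel

def refracted_angle_deg(incident_deg):
    # Snell's Law: sin(theta1) / sin(theta2) = V1 / V2
    s = math.sin(math.radians(incident_deg)) * v2 / v1
    if s >= 1.0:
        return None        # at or beyond the critical angle: no refracted wave
    return math.degrees(math.asin(s))

print(refracted_angle_deg(10.0))           # ~43.7 degrees
print(math.degrees(math.asin(v1 / v2)))    # first critical angle, ~14.5 degrees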
Mode Conversion
When sound travels in a solid material, one form of wave energy can be transformed into another form. For example, when a longitudinal wave hits an interface at an angle, some of the energy can cause particle movement in the transverse direction to start a shear (transverse) wave. Mode conversion occurs when a wave encounters an interface between materials of different acoustic impedances and the incident angle is not normal to the interface. From the ray tracing movie below, it can be seen that since mode conversion occurs every time a wave encounters an interface at an angle, ultrasonic signals can become confusing at times.

In the previous section, it was pointed out that when sound waves pass through an interface between materials having different acoustic velocities, refraction takes place at the interface. The larger the difference in acoustic velocities between the two materials, the more the sound is refracted. Notice that the shear wave is not refracted as much as the longitudinal wave. This occurs because shear waves travel slower than longitudinal waves. Therefore, the velocity difference between the incident longitudinal wave and the shear wave is not as great as it is between the incident and refracted longitudinal waves. Also note that when a longitudinal wave is reflected inside the material, the reflected shear wave is reflected at a smaller angle than the reflected longitudinal wave. This is also due to the fact that the shear velocity is less than the longitudinal velocity within a given material.
Snell's Law holds true for shear waves as well as longitudinal waves and can be written as follows.
sin(θ1) / VL1 = sin(θ2) / VL2 = sin(θ3) / VS1 = sin(θ4) / VS2
Where:
VL1 is the longitudinal wave velocity in material 1.
VL2 is the longitudinal wave velocity in material 2.
VS1 is the shear wave velocity in material 1.
VS2 is the shear wave velocity in material 2.
θ1 and θ2 are the angles of the incident and refracted longitudinal waves.
θ3 and θ4 are the angles of the reflected and refracted shear waves.
In the applet below, the shear (transverse) wave ray path has been added. The ray paths of the waves can be adjusted by clicking and dragging in the vicinity of the arrows. Values for the angles or the wave velocities can also be entered into the dialog boxes. It can be seen from the applet that when a wave moves from a slower to a faster material, there is an incident angle which makes the angle of refraction for the longitudinal wave 90 degrees. As mentioned on the previous page, this is known as the first critical angle, and all of the energy from the refracted longitudinal wave is now converted to a surface following longitudinal wave. This surface following wave is sometimes referred to as a creep wave, and it is not very useful in NDT because it dampens out very rapidly.
Beyond the first critical angle, only the shear wave propagates into the material. For this reason, most angle beam transducers use a shear wave so that the signal is not complicated by having two waves present. In many cases there is also an incident angle that makes the angle of refraction for the shear wave 90 degrees. This is known as the second critical angle, and at this point, all of the wave energy is reflected or refracted into a surface following shear wave or shear creep wave. Slightly beyond the second critical angle, surface waves will be generated.

Note that the applet defaults to compressional velocity in the second material. The refracted compressional wave angle will be generated for given materials and angles. To find the angle of incidence required to generate a shear wave at a given angle, complete the following:
1. Set V1 to the longitudinal wave velocity of material 1. This material could be the transducer wedge or the immersion liquid.
2. Set V2 to the shear wave velocity (approximately one-half its compressional velocity) of the material to be inspected.
3. Set Q2 to the desired shear wave angle.
4. Read Q1, the correct angle of incidence.
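Steps 1 through 4 amount to solving Snell's Law backwards. A minimal Python sketch, assuming a Plexiglas wedge on steel (both velocities are illustrative assumptions):

import math

def incident_angle_for_shear(v1_long, v2_shear, shear_angle_deg):
    """Q1 such that sin(Q1)/v1_long = sin(Q2)/v2_shear (steps 1-4 above)."""
    s = math.sin(math.radians(shear_angle_deg)) * v1_long / v2_shear
    if s > 1.0:
        raise ValueError("Desired shear angle is not reachable from this wedge")
    return math.degrees(math.asin(s))

V1 = 2730.0  # longitudinal velocity in a Plexiglas wedge, m/s (assumed)
V2 = 3240.0  # shear velocity in steel, m/s (assumed, roughly half its longitudinal velocity)

print(incident_angle_for_shear(V1, V2, 45.0))  # ~36.6 degree incidence for a 45 degree shear wave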
Signal-to-Noise Ratio
In a previous page, the effect that frequency and wavelength have on flaw detectability was discussed. However, the detection of a defect involves many factors other than the relationship of wavelength and flaw size. For example, the amount of sound that reflects from a defect is also dependent on the acoustic impedance mismatch between the flaw and the surrounding material. A void is generally a better reflector than a metallic inclusion because the impedance mismatch is greater between air and metal than between two metals.
Often, the surrounding material has competing reflections. Microstructure grains in metals and the aggregate of concrete are a couple of examples. A good measure of detectability of a flaw is its signal-to-noise ratio (S/N). The signal-to-noise ratio is a measure of how the signal from the defect compares to other background reflections (categorized as "noise"). A signal-to-noise ratio of 3 to 1 is often required as a minimum. The absolute noise level and the absolute strength of an echo from a "small" defect depend on a number of factors, which include:
• The probe size and focal properties.
• The probe frequency, bandwidth and efficiency.
• The inspection path and distance (water and/or solid).
• The interface (surface curvature and roughness).
• The flaw location with respect to the incident beam.
• The inherent noisiness of the metal microstructure.
• The inherent reflectivity of the flaw, which is dependent on its acoustic impedance, size, shape, and orientation.
• Cracks and volumetric defects can reflect ultrasonic waves quite differently. Many cracks are "invisible" from one direction and strong reflectors from another.
• Multifaceted flaws will tend to scatter sound away from the transducer.
The following proportionality (with constant factors omitted) relates some of the variables affecting the signal-to-noise ratio (S/N) of a defect:

S/N ∝ Aflaw(f) / [ FOM(f) × sqrt(ρ × V × Δt × Δx × Δy) ]

Where:
Aflaw(f) is the scattering amplitude of the flaw.
FOM(f) is the figure of merit characterizing the noisiness of the material microstructure.
ρ and V are the density and sound velocity of the material.
Δt is the duration of the ultrasonic pulse.
Δx and Δy are the lateral widths of the sound beam at the flaw depth.
Rather than go into the details of this formulation, a few fundamental relationships can be pointed out. The signal-to-noise ratio (S/N), and therefore, the detectability of a defect:
• Increases with increasing flaw size (scattering amplitude). The detectability of a defect is directly proportional to its size.
• Increases with a more focused beam. In other words, flaw detectability is inversely proportional to the transducer beam width.
• Increases with decreasing pulse width (delta-t). In other words, flaw detectability is inversely proportional to the duration of the pulse produced by an ultrasonic transducer. The shorter the pulse (often higher frequency), the better the detection of the defect. Shorter pulses correspond to broader bandwidth frequency response. See the figure below showing the waveform of a transducer and its corresponding frequency spectrum.
• Decreases in materials with high density and/or a high ultrasonic velocity. The signal-to-noise ratio (S/N) is inversely proportional to material density and acoustic velocity.
• Generally increases with frequency. However, in some materials, such as titanium alloys, both the "Aflaw" and the "Figure of Merit (FOM)" terms in the equation change at about the same rate with changing frequency. So, in some cases, the signal-to-noise ratio (S/N) can be somewhat independent of frequency.
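These relationships can be captured in a small helper that evaluates the proportionality above. Because the constant factors are omitted, it is only meaningful for comparing two inspection scenarios, not for absolute predictions; all example values are assumptions.

import math

def relative_snr(a_flaw, fom, density, velocity, pulse_width, beam_width_x, beam_width_y):
    """S/N ~ A_flaw / (FOM * sqrt(rho * V * dt * dx * dy)), constants omitted."""
    return a_flaw / (fom * math.sqrt(density * velocity * pulse_width *
                                     beam_width_x * beam_width_y))

# Halving the pulse width improves the relative S/N by sqrt(2):
base = relative_snr(1.0, 1.0, 7800.0, 5900.0, 1e-7, 2e-3, 2e-3)
short_pulse = relative_snr(1.0, 1.0, 7800.0, 5900.0, 0.5e-7, 2e-3, 2e-3)
print(short_pulse / base)  # ~1.414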

Wave Interaction or Interference
Before we move into the next section, the subject of wave interaction must be covered since it is important when trying to understand the performance of an ultrasonic transducer. On the previous pages, wave propagation was discussed as if a single sinusoidal wave was propagating through the material. However, the sound that emanates from an ultrasonic transducer does not originate from a single point, but instead originates from many points along the surface of the piezoelectric element. This results in a sound field with many waves interacting or interfering with each other.
When waves interact, they superimpose on each other, and the amplitude of the sound pressure or particle displacement at any point of interaction is the sum of the amplitudes of the two individual waves. First, let's consider two identical waves that originate from the same point. When they are in phase (so that the peaks and valleys of one are exactly aligned with those of the other), they combine to double the displacement of either wave acting alone. When they are completely out of phase (so that the peaks of one wave are exactly aligned with the valleys of the other wave), they combine to cancel each other out. When the two waves are not completely in phase or out of phase, the resulting wave is the sum of the wave amplitudes for all points along the wave.
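The summation is easy to verify numerically. The sketch below, assuming two equal-amplitude 5 MHz sine waves (both the frequency and the sampling are arbitrary choices), reproduces the in-phase, out-of-phase, and intermediate cases described above.

import numpy as np

t = np.linspace(0.0, 1e-6, 1000)   # time axis, seconds
f = 5e6                            # 5 MHz, an arbitrary test frequency

def combined_amplitude(phase):
    """Peak amplitude of the sum of two unit sine waves offset by 'phase'."""
    w1 = np.sin(2 * np.pi * f * t)
    w2 = np.sin(2 * np.pi * f * t + phase)
    return np.max(np.abs(w1 + w2))

print(combined_amplitude(0.0))        # ~2.0, fully in phase: displacements double
print(combined_amplitude(np.pi))      # ~0.0, fully out of phase: cancellation
print(combined_amplitude(np.pi / 2))  # ~1.414, partial interference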
When the origins of the two interacting waves are not the same, it is a little harder to picture the wave interaction, but the principles are the same. Up until now, we have primarily looked at waves in the form of a 2D plot of wave amplitude versus wave position. However, anyone who has dropped something in a pool of water can picture the waves radiating out from the source with a circular wave front. If two objects are dropped a short distance apart into the pool of water, their waves will radiate out from their sources and interact with each other. At every point where the waves interact, the amplitude of the particle displacement is the combined sum of the amplitudes of the particle displacement of the individual waves.
With an ultrasonic transducer, the waves propagate out from the transducer face with a circular wave front. If it were possible to get the waves to propagate out from a single point on the transducer face, the sound field would appear as shown in the upper image to the right. Consider the light areas to be areas of rarefaction and the dark areas to be areas of compression.
However, as stated previously, sound waves originate from multiple points along the face of the transducer. The lower image to the right shows what the sound field would look like if the waves originated from just two points. It can be seen that where the waves interact, there are areas of constructive and destructive interference. The points of constructive interference are often referred to as nodes. Of course, there are more than two points of origin along the face of a transducer. The image below shows five points of sound origination. It can be seen that near the face of the transducer, there are extensive fluctuations or nodes and the sound field is very uneven. In ultrasonic testing, this is known as the near field (near zone) or Fresnel zone. The sound field is more uniform away from the transducer in the far field, or Fraunhofer zone, where the beam spreads out in a pattern originating from the center of the transducer. It should be noted that even in the far field, it is not a uniform wave front. However, at some distance from the face of the transducer and central to the face of the transducer, a uniform and intense wave field develops.

Multiple points of sound origination along the face of the transducer combine at a distance to form a strong, uniform sound field.
The curvature and the area over which the sound is generated, the speed at which the sound waves travel within a material, and the frequency of the sound all affect the sound field. Use the Java applet below to experiment with these variables and see how the sound field is affected.
Piezoelectric Transducers
The conversion of electrical pulses to mechanical vibrations and the conversion of returned mechanical vibrations back into electrical energy is the basis for ultrasonic testing. The active element is the heart of the transducer as it converts the electrical energy to acoustic energy, and vice versa. The active element is basically a piece of polarized material (i.e. some parts of the molecule are positively charged, while other parts of the molecule are negatively charged) with electrodes attached to two of its opposite faces. When an electric field is applied across the material, the polarized molecules will align themselves with the electric field, resulting in induced dipoles within the molecular or crystal structure of the material. This alignment of molecules will cause the material to change dimensions. This phenomenon is known as electrostriction. In addition, a permanently-polarized material such as quartz (SiO2) or barium titanate (BaTiO3) will produce an electric field when the material changes dimensions as a result of an imposed mechanical force. This phenomenon is known as the piezoelectric effect. Additional information on why certain materials produce this effect can be found in the linked presentation material, which was produced by the Valpey Fisher Corporation.
Piezoelectric Effect (PPT, 89kb) Piezoelectric Elements (PPT, 178kb)
The active element of most acoustic transducers used today is a piezoelectric ceramic, which can be cut in various ways to produce different wave modes. A large piezoelectric ceramic element can be seen in the image of a sectioned low frequency transducer. Prior to the advent of piezoelectric ceramics in the early 1950s, piezoelectric crystals made from quartz and magnetostrictive materials were primarily used. The active element is still sometimes referred to as the crystal by old timers in the NDT field. When piezoelectric ceramics were introduced, they soon became the dominant material for transducers due to their good piezoelectric properties and their ease of manufacture into a variety of shapes and sizes. They also operate at low voltage and are usable up to about 300°C. The first piezoceramic in general use was barium titanate, and that was followed during the 1960s by lead zirconate titanate compositions, which are now the most commonly employed ceramic for making transducers. New materials such as piezo-polymers and composites are also being used in some applications.
The thickness of the active element is determined by the desired frequency of the transducer. A thin wafer element vibrates with a wavelength that is twice its thickness. Therefore, piezoelectric crystals are cut to a thickness that is 1/2 the desired radiated wavelength. The higher the frequency of the transducer, the thinner the active element. The primary reason that high frequency contact transducers are not produced is because the element is very thin and too fragile.
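In code, the half-wavelength rule looks like this; the ceramic velocity is an assumed, roughly representative value rather than a figure from this page.

V_CERAMIC = 4000.0   # longitudinal velocity in the piezoceramic, m/s (assumed)

def element_thickness(frequency_hz, velocity=V_CERAMIC):
    """Thickness (m) of a wafer cut to half the radiated wavelength."""
    return velocity / (2.0 * frequency_hz)

print(element_thickness(5e6) * 1e3, "mm")   # ~0.4 mm at 5 MHz
print(element_thickness(25e6) * 1e3, "mm")  # ~0.08 mm at 25 MHz -- thin and fragile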
Characteristics of Piezoelectric Transducers
The transducer is a very important part of the ultrasonic instrumentation system. As discussed on the previous page, the transducer incorporates a piezoelectric element, which converts electrical signals into mechanical vibrations (transmit mode) and mechanical vibrations into electrical signals (receive mode). Many factors, including material, mechanical and electrical construction, and the external mechanical and electrical load conditions, influence the behavior of a transducer. Mechanical construction includes parameters such as the radiation surface area, mechanical damping, housing, connector type and other variables of physical construction. As of this writing, transducer manufacturers are hard pressed to construct two transducers that have identical performance characteristics.

A cut away of a typical contact transducer is shown above. It was previously learned that the piezoelectric element is cut to 1/2 the desired wavelength. To get as much energy out of the transducer as possible, an impedance matching layer is placed between the active element and the face of the transducer. Optimal impedance matching is achieved by sizing the matching layer so that its thickness is 1/4 of the desired wavelength. This keeps waves that were reflected within the matching layer in phase when they exit the layer (as illustrated in the image to the right). For contact transducers, the matching layer is made from a material that has an acoustical impedance between that of the active element and that of steel. Immersion transducers have a matching layer with an acoustical impedance between that of the active element and that of water. Contact transducers also incorporate a wear plate to protect the matching layer and active element from scratching.
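A quick sketch of the matching layer sizing described above. The quarter-wavelength thickness comes from the text; the geometric-mean rule for choosing the layer impedance is a common design heuristic added here as an assumption, and all example values are illustrative.

import math

def matching_layer_thickness(frequency_hz, layer_velocity):
    """Thickness (m) equal to 1/4 of the wavelength inside the layer."""
    return layer_velocity / (4.0 * frequency_hz)

def matching_layer_impedance(z_element, z_load):
    """Geometric-mean target impedance (common heuristic, assumed)."""
    return math.sqrt(z_element * z_load)

# Illustrative values (assumed): PZT element coupled into steel
print(matching_layer_thickness(5e6, 2500.0) * 1e3, "mm")   # layer velocity assumed 2500 m/s
print(matching_layer_impedance(33e6, 45e6) / 1e6, "MRayl") # ~38.5 MRayl target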
The backing material supporting the crystal has a great influence on the damping characteristics of a transducer. Using a backing material with an impedance similar to that of the active element will produce the most effective damping. Such a transducer will have a wider bandwidth resulting in higher sensitivity. As the mismatch in impedance between the active element and the backing material increases, material penetration increases but transducer sensitivity is reduced.
Transducer Efficiency, Bandwidth and Frequency
Some transducers are specially fabricated to be more efficient transmitters and others to be more efficient receivers. A transducer that performs well in one application will not always produce the desired results in a different application. For example, sensitivity to small defects is proportional to the product of the efficiency of the transducer as a transmitter and a receiver. Resolution, the ability to locate defects near the surface or in close proximity in the material, requires a highly damped transducer.
It is also important to understand the concept of bandwidth, or range of frequencies, associated with a transducer. The frequency noted on a transducer is the central or center frequency and depends primarily on the backing material. Highly damped transducers will respond to frequencies above and below the central frequency. The broad frequency range provides a transducer with high resolving power. Less damped transducers will exhibit a narrower frequency range and poorer resolving power, but greater penetration. The central frequency will also define the capabilities of a transducer. Lower frequencies (0.5MHz-2.25MHz) provide greater energy and penetration in a material, while high frequency crystals (15.0MHz-25.0MHz) provide reduced penetration but greater sensitivity to small discontinuities. High frequency transducers, when used with the proper instrumentation, can improve flaw resolution and thickness measurement capabilities dramatically. Broadband transducers with frequencies up to 150 MHz are commercially available.
Transducers are constructed to withstand some abuse, but they should be handled carefully. Misuse, such as dropping, can cause cracking of the wear plate, element, or the backing material. Damage to a transducer is often noted on the A-scan presentation as an enlargement of the initial pulse.
Radiated Fields of Ultrasonic Transducers
The sound that emanates from a piezoelectric transducer does not originate from a point, but instead originates from most of the surface of the piezoelectric element. Round transducers are often referred to as piston source transducers because the sound field resembles a cylindrical mass in front of the transducer. The sound field from a typical piezoelectric transducer is shown below. The intensity of the sound is indicated by color, with lighter colors indicating higher intensity.

Since the ultrasound originates from a number of points along the transducer face, the ultrasound intensity along the beam is affected by constructive and destructive wave interference as discussed in a previous page on wave interference. These are sometimes also referred to as diffraction effects. This wave interference leads to extensive fluctuations in the sound intensity near the source and is known as the near field. Because of acoustic variations within a near field, it can be extremely difficult to accurately evaluate flaws in materials when they are positioned within this area.
The pressure waves combine to form a relatively uniform front at the end of the near field. The area beyond the near field where the ultrasonic beam is more uniform is called the far field. In the far field, the beam spreads out in a pattern originating from the center of the transducer. The transition between the near field and the far field occurs at a distance, N, and is sometimes referred to as the "natural focus" of a flat (or unfocused) transducer. The near/far field distance, N, is significant because amplitude variations that characterize the near field change to a smoothly declining amplitude at this point. The area just beyond the near field is where the sound wave is well behaved and at its maximum strength. Therefore, optimal detection results will be obtained when flaws occur in this area.

For a piston source transducer of radius (a), frequency (f), and velocity (V) in a liquid or solid medium, the applet below allows the calculation of the near/far field transition point.
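In place of the applet, the near field length can be computed directly. The expression N = a²F/V (equivalently D²F/4V) is the standard piston-source result and uses the parameters named above; the formula itself and the example transducer values are textbook assumptions, not quotations from this page.

def near_field_distance(radius_m, frequency_hz, velocity):
    """N = a^2 * f / V, the 'natural focus' of a flat transducer."""
    return radius_m**2 * frequency_hz / velocity

# Assumed example: 12.7 mm (0.5 in) diameter, 5 MHz, immersed in water
print(near_field_distance(0.00635, 5e6, 1480.0) * 1e3, "mm")  # ~136 mm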

Spherical or cylindrical focusing changes the structure of a transducer field by "pulling" the N point nearer the transducer. It is also important to note that the driving excitations normally used in NDT applications are spike or rectangular pulses, not a single frequency. This can significantly alter the performance of a transducer. Nonetheless, the supporting analysis is widely used because it represents a reasonable approximation and a good starting point.
Transducer Beam Spread
As discussed on the previous page, round transducers are often referred to as piston source transducers because the sound field resembles a cylindrical mass in front of the transducer. However, the energy in the beam does not remain in a cylinder, but instead spreads out as it propagates through the material. The phenomenon is usually referred to as beam spread but is sometimes also referred to as beam divergence or ultrasonic diffraction. It should be noted that there is actually a difference between beam spread and beam divergence. Beam spread is a measure of the whole angle from side to side of the main lobe of the sound beam in the far field. Beam divergence is a measure of the angle from one side of the sound beam to the central axis of the beam in the far field. Therefore, beam spread is twice the beam divergence.
Although beam spread must be considered when performing an ultrasonic inspection, it is important to note that in the far field, or Fraunhofer zone, the maximum sound pressure is always found along the acoustic axis (centerline) of the transducer. Therefore, the strongest reflections are likely to come from the area directly in front of the transducer.
Beam spread occurs because the vibrating particles of the material (through which the wave is traveling) do not always transfer all of their energy in the direction of wave propagation. Recall that waves propagate through the transfer of energy from one particle to another in the medium. If the particles are not directly aligned in the direction of wave propagation, some of the energy will get transferred off at an angle. (Picture what happens when one ball hits another ball slightly off center.) In the near field, constructive and destructive wave interference fill the sound field with fluctuations. At the start of the far field, however, the beam strength is always greatest at the center of the beam and diminishes as it spreads outward.
As shown in the applet below, beam spread is largely determined by the frequency and diameter of the transducer. Beam spread is greater when using a low frequency transducer than when using a high frequency transducer. As the diameter of the transducer increases, the beam spread will be reduced.


Beam angle is an important consideration in transducer selection for a couple of reasons. First, beam spread lowers the amplitude of reflections since sound fields are less concentrated and thereby weaker. Second, beam spread may result in more difficulty in interpreting signals due to reflections from the lateral sides of the test object or other features outside of the inspection area. Characterization of the sound field generated by a transducer is a prerequisite to understanding observed signals.
Numerous codes exist that can be used to standardize the method used for the characterization of beam spread. The American Society for Testing and Materials standard ASTM E-1065 addresses methods for ascertaining beam shapes in Section A6, Measurement of Sound Field Parameters. However, these measurements are limited to immersion probes. In fact, the methods described in E-1065 are primarily concerned with the measurement of beam characteristics in water, and as such are limited to measurements of the compression mode only. Techniques described in E-1065 include pulse-echo using a ball target and hydrophone receiver, which allows the sound field of the probe to be assessed for the entire volume in front of the probe.
For a flat piston source transducer, an approximation of the beam spread may be calculated as a function of the transducer diameter (D), frequency (F), and the sound velocity (V) in the liquid or solid medium. The applet below allows the beam divergence angle (1/2 the beam spread angle) to be calculated. This angle represents a measure from the center of the acoustic axis to the point where the sound pressure has decreased by one half (-6 dB) to the side of the acoustic axis in the far field.

Note: this applet uses the equation:

sin(q) = 0.514 V / (2aF)
Where: q = Beam divergence angle from centerline to point where signal is at half strength.
V = Sound velocity in the material. (inch/sec or cm/sec)
a = Radius of the transducer. (inch or cm)
F = Frequency of the transducer. (cycles/second)
An equal, but perhaps more common version of the formula is:
sin(q) = 0.514 V / (DF)
Where: q = Beam divergence angle from centerline to point where signal is at half strength.
V = Sound velocity in the material. (inch/sec or cm/sec)
D = Diameter of the transducer. (inch or cm)
F = Frequency of the transducer. (cycles/second)
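A small calculator equivalent to the applet, using the diameter form of the equation above; the example transducer and material values are assumptions.

import math

def divergence_angle_deg(velocity, diameter_m, frequency_hz):
    """Half-angle (degrees) to the -6 dB edge of the beam in the far field."""
    s = 0.514 * velocity / (diameter_m * frequency_hz)
    return math.degrees(math.asin(s))

# 12.7 mm diameter, 5 MHz transducer in steel (assumed V = 5900 m/s)
half = divergence_angle_deg(5900.0, 0.0127, 5e6)
print(half, "degrees divergence;", 2 * half, "degrees total beam spread")  # ~2.7 and ~5.5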
Transducer Types
Ultrasonic transducers are manufactured for a variety of applications and can be custom fabricated when necessary. Careful attention must be paid to selecting the proper transducer for the application. A previous section on Acoustic Wavelength and Defect Detection gave a brief overview of factors that affect defect detectability. From this material, we know that it is important to choose transducers that have the desired frequency, bandwidth, and focusing to optimize inspection capability. Most often the transducer is chosen either to enhance the sensitivity or resolution of the system.
Transducers are classified into groups according to the application.
• Contact transducers are used for direct contact inspections, and are generally hand manipulated. They have elements protected in a rugged casing to withstand sliding contact with a variety of materials. These transducers have an ergonomic design so that they are easy to grip and move along a surface. They often have replaceable wear plates to lengthen their useful life. Coupling materials of water, grease, oils, or commercial materials are used to remove the air gap between the transducer and the component being inspected.
• Immersion transducers do not contact the component. These transducers are designed to operate in a liquid environment and all connections are watertight. Immersion transducers usually have an impedance matching layer that helps to get more sound energy into the water and, in turn, into the component being inspected. Immersion transducers can be purchased with a planar, cylindrically focused or spherically focused lens. A focused transducer can improve the sensitivity and axial resolution by concentrating the sound energy to a smaller area. Immersion transducers are typically used inside a water tank or as part of a squirter or bubbler system in scanning applications.
More on Contact Transducers.
Contact transducers are available in a variety of configurations to improve their usefulness for a variety of applications. The flat contact transducer shown above is used in normal beam inspections of relatively flat surfaces, and where near surface resolution is not critical. If the surface is curved, a shoe that matches the curvature of the part may need to be added to the face of the transducer. If near surface resolution is important or if an angle beam inspection is needed, one of the special contact transducers described below might be used.
Dual element transducers contain two independently operated elements in a single housing. One of the elements transmits and the other receives the ultrasonic signal. Active elements can be chosen for their sending and receiving capabilities to provide a transducer with a cleaner signal, and transducers for special applications, such as the inspection of coarse-grained material. Dual element transducers are especially well suited for making measurements in applications where reflectors are very near the transducer since this design eliminates the ring down effect that single-element transducers experience (when single-element transducers are operating in pulse echo mode, the element cannot start receiving reflected signals until the element has stopped ringing from its transmit function). Dual element transducers are very useful when making thickness measurements of thin materials and when inspecting for near surface defects. The two elements are angled towards each other to create a crossed-beam sound path in the test material.
Delay line transducers provide versatility with a variety of replaceable options. Removable delay line, surface conforming membrane, and protective wear cap options can make a single transducer effective for a wide range of applications. As the name implies, the primary function of a delay line transducer is to introduce a time delay between the generation of the sound wave and the arrival of any reflected waves. This allows the transducer to complete its "sending" function before it starts its "listening" function so that near surface resolution is improved. They are designed for use in applications such as high precision thickness gauging of thin materials and delamination checks in composite materials. They are also useful in high-temperature measurement applications since the delay line provides some insulation to the piezoelectric element from the heat.
Angle beam transducers and wedges are typically used to introduce a refracted shear wave into the test material. Transducers can be purchased in a variety of fixed angles or in adjustable versions where the user determines the angles of incidence and refraction. In the fixed angle versions, the angle of refraction that is marked on the transducer is only accurate for a particular material, which is usually steel. The angled sound path allows the sound beam to be reflected from the backwall to improve detectability of flaws in and around welded areas. They are also used to generate surface waves for use in detecting defects on the surface of a component.
Normal incidence shear wave transducers are unique because they allow the introduction of shear waves directly into a test piece without the use of an angle beam wedge. Careful design has enabled manufacturing of transducers with minimal longitudinal wave contamination. The ratio of the longitudinal to shear wave components is generally below -30 dB.
Paint brush transducers are used to scan wide areas. These long and narrow transducers are made up of an array of small crystals that are carefully matched to minimize variations in performance and maintain uniform sensitivity over the entire area of the transducer. Paint brush transducers make it possible to scan a larger area more rapidly for discontinuities. Smaller and more sensitive transducers are then often required to further define the details of a discontinuity.
Transducer Testing
Some transducer manufacturers have led the development of transducer characterization techniques and have participated in developing the AIUM Standard Methods for Testing Single-Element Pulse-Echo Ultrasonic Transducers as well as ASTM-E 1065 Standard Guide for Evaluating Characteristics of Ultrasonic Search Units.
Additionally, some manufacturers perform characterizations according to AWS, ESI, and many other industrial and military standards. Often, equipment in test labs is maintained in compliance with MIL-C-45662A Calibration System Requirements. As part of the documentation process, an extensive database containing records of the waveform and spectrum of each transducer is maintained and can be accessed for comparative or statistical studies of transducer characteristics.
Manufacturers often provide time and frequency domain plots for each transducer. The signals below were generated by a spiked pulser. The waveform image on the left shows the test response signal in the time domain (amplitude versus time). The spectrum image on the right shows the same signal in the frequency domain (amplitude versus frequency). The signal path is usually a reflection from the back wall (fused silica) with the reflection in the far field of the transducer.

Other tests may include the following:
• Electrical Impedance Plots provide important information about the design and construction of a transducer and can allow users to obtain electrically similar transducers from multiple sources.
• Beam Alignment Measurements provide data on the degree of alignment between the sound beam axis and the transducer housing. This information is particularly useful in applications that require a high degree of certainty regarding beam positioning with respect to a mechanical reference surface.
• Beam Profiles provide valuable information about transducer sound field characteristics. Transverse beam profiles are created by scanning the transducer across a target (usually either a steel ball or rod) at a given distance from the transducer face and are used to determine focal spot size and beam symmetry. Axial beam profiles are created by recording the pulse-echo amplitude of the sound field as a function of distance from the transducer face and provide data on depth of field and focal length.
Transducer Testing II
As noted in the ASTM E1065 Standard Guide for Evaluating Characteristics of Ultrasonic Search Units, the acoustic and electrical characteristics which can be described from the data are obtained from the specific procedures listed below:
• Frequency Response--The frequency response may be obtained from one of two procedures: shock excitation and sinusoidal burst.
• Relative Pulse-Echo Sensitivity--The relative pulse-echo sensitivity may be obtained from the frequency response data by using a sinusoidal burst procedure. The value is obtained from the relationship of the amplitude of the voltage applied to the transducer and the amplitude of the pulse-echo signal received from a specified target.
• Time Response--The time response provides a means for describing the radio frequency (RF) response of the waveform. A shock excitation, pulse-echo procedure is used to obtain the response. The time or waveform responses are recorded from specific targets that are chosen for the type of transducer under evaluation, for example, immersion, contact straight beam, or contact angle beam.

Typical time and frequency domain plots provided by transducer manufacturers
• Frequency Response--The frequency response of the above transducer has a peak at 5 MHz and operates over a broad range of frequencies. Its bandwidth (4.1 to 6.15 MHz) is measured at the -6 dB points, where the amplitude has fallen to half of its peak value; a sketch of this measurement appears after this list. The useable bandwidth of broadband transducers, especially in frequency analysis measurements, is often quoted at the -20 dB points. Transducer sensitivity and bandwidth (more of one means less of the other) are chosen based on inspection needs.
• Complex Electrical Impedance--The complex electrical impedance may be obtained with commercial impedance measuring instrumentation, and these measurements may provide the magnitude and phase of the impedance of the search unit over the operating frequency range of the unit. These measurements are generally made under laboratory conditions with minimum cable lengths or external accessories and in accordance with specifications given by the instrument manufacturer. The value of the magnitude of the complex electrical impedance may also be obtained using values recorded from the sinusoidal burst.
• Sound Field Measurements--The objective of these measurements is to establish parameters such as the on-axis and transverse sound beam profiles for immersion, and flat and curved transducers. These measurements are often achieved by scanning the sound field with a hydrophone transducer to map the sound field in three dimensional space. An alternative approach to sound field measurements is a measure of the transducer's radiating surface motion using laser interferometry.
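The -6 dB bandwidth measurement described above is straightforward to reproduce on any sampled spectrum. The sketch below uses a synthetic Gaussian spectrum as a stand-in for manufacturer data; the center frequency and width are arbitrary assumptions.

import numpy as np

freqs = np.linspace(0e6, 10e6, 1001)                           # Hz
spectrum = np.exp(-((freqs - 5e6) ** 2) / (2 * (0.9e6) ** 2))  # assumed spectrum shape

peak = spectrum.max()
above = freqs[spectrum >= 0.5 * peak]        # -6 dB = half of the peak amplitude
f_low, f_high = above.min(), above.max()
center = (f_low + f_high) / 2
pct_bw = 100 * (f_high - f_low) / center
print(f_low / 1e6, "to", f_high / 1e6, "MHz;", round(pct_bw, 1), "% bandwidth")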
Transducer Modeling
In high-technology manufacturing, part design and simulation of part inspection is done in the virtual world of the computer. Transducer modeling is necessary to make accurate predictions of how a part or component might be inspected, prior to the actual building of that part. Computer modeling is also used to design ultrasonic transducers.
As noted in the previous section, an ultrasonic transducer may be characterized by detailed measurements of its electrical and sound radiation properties. Such measurements can completely determine the response of any one individual transducer.
There is ongoing research to develop general models that relate electrical inputs (voltage, current) to mechanical outputs (force, velocity) and vice versa. These models can be very robust in giving accurate prediction of transducer response, but suffer from a lack of accurate modeling of physical variables inherent in transducer manufacturing. These electrical-mechanical response models must take into account the physical and electrical components in the figure below.

The Thompson-Gray Measurement Model, which makes very accurate predictions of ultrasonic scattering measurements made through liquid-solid interfaces, does not attempt to model transducer electrical-mechanical response. The Thompson-Gray Measurement Model approach makes use of reference data taken with the same transducer(s) to deconvolve electro-physical characteristics specific to individual transducers. See Section 5.4 Thompson-Gray Measurement Model.
The long term goal in ultrasonic modeling is to incorporate accurate models of the transducers themselves as well as accurate models of pulser-receivers, cables, and other components that completely describe any given inspection setup and allow the accurate prediction of inspection signals.
Couplant
A couplant is a material (usually liquid) that facilitates the transmission of ultrasonic energy from the transducer into the test specimen. Couplant is generally necessary because the acoustic impedance mismatch between air and solids (i.e., the test specimen) is large. Therefore, nearly all of the energy is reflected at an air interface and very little is transmitted into the test material. The couplant displaces the air and makes it possible to get more sound energy into the test specimen so that a usable ultrasonic signal can be obtained. In contact ultrasonic testing, a thin film of oil, glycerin or water is generally used between the transducer and the test surface.
When scanning over the part or making precise measurements, an immersion technique is often used. In immersion ultrasonic testing, both the transducer and the part are immersed in the couplant, which is typically water. This method of coupling makes it easier to maintain consistent coupling while moving and manipulating the transducer and/or the part.

Electromagnetic Acoustic Transducers (EMATs)
As discussed on the previous page, one of the essential features of ultrasonic measurements is mechanical coupling between the transducer and the solid whose properties or structure are to be studied. This coupling is generally achieved in one of two ways. In immersion measurements, energy is coupled between the transducer and sample by placing both objects in a tank filled with a fluid, generally water. In contact measurements, the transducer is pressed directly against the sample, and coupling is achieved by the presence of a thin fluid layer inserted between the two. When shear waves are to be transmitted, the fluid is generally selected to have a significant viscosity.
Electromagnetic acoustic transducers (EMATs) act through totally different physical principles and do not need couplant. When a wire is placed near the surface of an electrically conducting object and is driven by a current at the desired ultrasonic frequency, eddy currents will be induced in a near surface region of the object. If a static magnetic field is also present, these eddy currents will experience Lorentz forces of the form
F = J × B
where F is the body force per unit volume, J is the induced dynamic current density, and B is the static magnetic induction.
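As a quick numerical check of the Lorentz force relation, the cross product can be evaluated directly; the current density and flux density values below are arbitrary illustrations, not measured quantities.

import numpy as np

J = np.array([1.0e6, 0.0, 0.0])   # induced eddy current density, A/m^2 (assumed)
B = np.array([0.0, 0.0, 0.5])     # static magnetic induction, T (assumed)

F = np.cross(J, B)                # body force per unit volume, N/m^3
print(F)                          # [0, -5e5, 0]: force transverse to both J and B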
The most important application of EMATs has been in nondestructive evaluation (NDE) applications such as flaw detection or material property characterization. Couplant free transduction allows operation without contact at elevated temperatures and in remote locations. The coil and magnet structure can also be designed to excite complex wave patterns and polarizations that would be difficult to realize with fluid coupled piezoelectric probes. In the inference of material properties from precise velocity or attenuation measurements, using EMATs can eliminate errors associated with couplant variation, particularly in contact measurements.
A number of practical EMAT configurations are shown below. In each, the biasing magnet structure, the coil, and the forces on the surface of the solid are shown in an exploded view. The first three configurations will excite beams propagating normal to the surface of the half-space and produce beams with radial, longitudinal, and transverse polarizations, respectively. The final two use spatially varying stresses to excite beams propagating at oblique angles or along the surface of a component. Although a great number of variations on these configurations have been conceived and used in practice, consideration of these five geometries should suffice to introduce the fundamentals.

Cross-sectional view of a spiral coil EMAT exciting radially polarized shear waves propagating normal to the surface.

Cross-sectional view of a tangential field EMAT for exciting polarized longitudinal waves propagating normal to the surface.

Cross-sectional view of a normal field EMAT for exciting plane polarized shear waves propagating normal to the surface.

Cross-sectional view of a meander coil EMAT for exciting obliquely propagating L or SV waves, Rayleigh waves, or guided modes (such as Lamb waves) in plates.

Cross-sectional view of a periodic permanent magnet EMAT for exciting grazing or obliquely propagating horizontally polarized (SH) waves or guided SH modes in plates.
Practical EMAT designs are relatively narrowband and require strong magnetic fields and large currents to produce ultrasound that is often weaker than that produced by piezoelectric transducers. Rare-earth materials such as Samarium-Cobalt and Neodymium-Iron-Boron are often used to produce sufficiently strong magnetic fields, which may also be generated by pulsed electromagnets.
The EMAT offers many advantages based on its couplant-free operation. These advantages include the abilities to operate in remote environments at elevated speeds and temperatures, to excite polarizations not easily excited by fluid coupled piezoelectrics, and to produce highly consistent measurements.
These advantages are tempered by low efficiencies, and careful electronic design is essential in applications.
More information about the use of EMATs can be found at the following links.
Lamb Wave Generation With EMATs
Shear Wave Generation With EMATs
Velocity Measurements With EMATs
Texture Measurement I With EMATs
Texture Measurement II With EMATs
Stress Measurement With EMATs
Composite inspection With EMATs
Pulser-Receivers
Ultrasonic pulser-receivers are well suited to general purpose ultrasonic testing. Along with appropriate transducers and an oscilloscope, they can be used for flaw detection and thickness gauging in a wide variety of metals, plastics, ceramics, and composites. Ultrasonic pulser-receivers provide a unique, low-cost ultrasonic measurement capability.

The pulser section of the instrument generates short, large amplitude electric pulses of controlled energy, which are converted into short ultrasonic pulses when applied to an ultrasonic transducer. Most pulser sections have very low impedance outputs to better drive transducers. Control functions associated with the pulser circuit include:
• Pulse length or damping (The amount of time the pulse is applied to the transducer.)
• Pulse energy (The voltage applied to the transducer. Typical pulser circuits will apply from 100 volts to 800 volts to a transducer.)
In the receiver section, the voltage signals produced by the transducer, which represent the received ultrasonic pulses, are amplified. The amplified radio frequency (RF) signal is available as an output for display or capture for signal processing. Control functions associated with the receiver circuit include:
• Signal rectification (The RF signal can be viewed as positive half wave, negative half wave or full wave.)
• Filtering to shape and smooth return signals
• Gain, or signal amplification
• Reject control
The pulser-receiver is also used in material characterization work involving sound velocity or attenuation measurements, which can be correlated to material properties such as elastic modulus. In conjunction with a stepless gate and a spectrum analyzer, pulser-receivers are also used to study frequency dependent material properties or to characterize the performance of ultrasonic transducers.
Tone Burst Generators In Research
Tone burst generators are often used in high power ultrasonic applications. Modern computer controlled ultrasonic instrumentation, such as Ritec's RAM 10000, is a complete advanced measurement system designed to satisfy the needs of the acoustic researcher in materials science or advanced NDE. Its purpose is to transmit bursts of acoustic energy into a test piece, receive signals from the piece following this burst, then manipulate and analyze these received signals in various ways. Extreme versatility is achieved through a modular approach allowing an instrument to be configured for unique applications not previously encountered. Unwanted modules need not be purchased and in many cases special modules can be designed and constructed.
The high power radio frequency (RF) burst capability allows researchers to work with difficult, highly attenuative materials or inefficient transducers such as EMATs.
A computer interface makes it possible for the system to make high speed complex measurements, such as those involving multiple frequencies. Many of these measurements are very limited or impossible with manually controlled instruments. A Windows or DOS based personal computer controls and acquires data from the system. Software is supplied with each RAM-10000 suitable for a wide variety of applications including those involving EMATs, acoustic resonance, velocity, relative velocity, and attenuation measurements. In addition, the source code for this software is made available so that it may be modified to include new applications or changes in technique.
The unique automatic tracking superheterodyne receiver, quadrature phase sensitive detection circuits and gated integrators offer superb analog signal processing capability. Both the real and imaginary parts of the value of the Fourier transform at the driving frequency are obtained. This increases the dynamic range of the instrumentation and allows phase and amplitude information at the driving frequency to be extracted from noise and out-of-band spurious signals more efficiently than using Fast Fourier Transform (FFT) techniques.
Arbitrary Function Generators
Arbitrary waveform generators permit the user to design and generate virtually any waveform in addition to the standard function generator signals (i.e. sine wave, square wave, etc.). Waveforms are generated digitally from a computer's memory, and most instruments allow the downloading of digital waveform files from computers.
Ultrasonic generation pulses must be varied to accommodate different types of ultrasonic transducers. General-purpose highly damped contact transducers are usually excited by a wideband, spike-like pulse provided by many common pulser/receiver units. The lightly damped transducers used in high power generation, for example, require a narrowband tone-burst excitation from a separate generator unit. Sometimes the same transducer will be excited differently, such as in the study of the dispersion of a material's ultrasonic attenuation or to characterize ultrasonic transducers.

Section of biphase modulated spread spectrum ultrasonic waveform
In spread spectrum ultrasonics (see spread spectrum page), encoded sound is generated by an arbitrary waveform generator continuously transmitting coded sound into the part or structure being tested. Instead of receiving echoes, spread spectrum ultrasonics generates an acoustic correlation signature having a one-to-one correspondence with the acoustic state of the part or structure (in its environment) at the instant of measurement. In its simplest embodiment, the acoustic correlation signature is generated by cross correlating an encoding sequence (with suitable cross and auto correlation properties) transmitted into a part (structure) with received signals returning from the part (structure).
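A toy version of that cross-correlation step is sketched below, assuming a ±1 biphase code and a received signal that is simply a delayed, attenuated copy of the code plus noise; real systems are considerably more elaborate.

import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=256)         # biphase encoding sequence

delay, amplitude = 37, 0.6                       # assumed propagation delay and loss
received = np.zeros(512)
received[delay:delay + 256] += amplitude * code  # returning signal from the part
received += 0.1 * rng.standard_normal(512)       # measurement noise

signature = np.correlate(received, code, mode="valid")   # acoustic correlation signature
print(int(np.argmax(signature)))                 # ~37: the delay recovered from the peak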
Electrical Impedance Matching and Termination
When computer systems were first introduced decades ago, they were large, slow-working devices that were incompatible with each other. Today, national and international networking standards have established electronic control protocols that enable different systems to "talk" to each other. The Electronic Industries Association (EIA) and the Institute of Electrical and Electronics Engineers (IEEE) developed standards that established common terminology and interface requirements, such as EIA RS-232 and IEEE 802.3. If a system designer builds equipment to comply with these standards, the equipment will interface with other systems. But what about analog signals that are used in ultrasonics?
Data Signals: Input versus Output
Consider the signal going to and from ultrasonic transducers. When you transmit data through a cable, the requirement usually simplifies into comparing what goes in one end with what comes out the other. High frequency pulses degrade or deteriorate when they are passed through any cable. Both the height of the pulse (magnitude) and the shape of the pulse (wave form) change dramatically, and the amount of change depends on the data rate, transmission distance and the cable's electrical characteristics. Sometimes a marginal electrical cable may perform adequately if used in only short lengths, but the same cable with the same data in long lengths will fail. This is why system designers and industry standards specify precise cable criteria.
Recommendation: Observe manufacturer's recommended practices for cable impedance, cable length, impedance matching, and any requirements for termination in characteristic impedance.
Recommendation: If possible, use the same cables and cable dressing for all inspections.
Cable Electrical Characteristics
The most important characteristics in an electronic cable are impedance, attenuation, shielding, and capacitance. On this page, we can only review these characteristics very generally; however, we will discuss capacitance in more detail.
Impedance (Ohms) represents the total resistance that the cable presents to the electrical current passing through it. At low frequencies the impedance is largely a function of the conductor size, but at high frequencies conductor size, insulation material, and insulation thickness all affect the cable's impedance. Matching impedance is very important. If the system is designed to be 100 Ohms, then the cable should match that impedance, otherwise error-producing reflections are created.
Attenuation is measured in decibels per unit length (dB/m), and provides an indication of the signal loss as it travels through the cable. Attenuation is very dependent on signal frequency. A cable that works very well with low frequency data may do very poorly at higher data rates. Cables with lower attenuation are better.
Shielding is normally specified as a cable construction detail. For example, the cable may be unshielded, contain shielded pairs, have an overall aluminum/mylar tape and drain wire, or have a double shield. Cable shields usually have two functions: to act as a barrier to keep external signals from getting in and internal signals from getting out, and to be a part of the electrical circuit. Shielding effectiveness is very complex to measure and depends on the data frequency within the cable and the precise shield design. A shield may be very effective in one frequency range, but a different frequency may require a completely different design. System designers often test complete cable assemblies or connected systems for shielding effectiveness.
Capacitance in a cable is usually measured in picofarads per foot (pF/ft). It indicates how much charge the cable can store within itself. If a voltage signal is being transmitted by a twisted pair, the insulation of the individual wires becomes charged by the voltage within the circuit. Since it takes a certain amount of time for the cable to reach its charged level, this slows down and interferes with the signal being transmitted. Digital data pulses are a string of voltage variations that are represented by square waves. A cable with a high capacitance slows down these signals so that they come out of the cable looking more like "saw-teeth" rather than square waves. The lower the capacitance of the cable, the better it performs with high speed data.
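As a rough lumped-element illustration (a deliberate simplification of real transmission-line behavior), total capacitance grows with cable length, and together with the source resistance it stretches pulse edges; all values below are assumptions.

CAP_PER_FT = 30e-12     # cable capacitance, F per foot (assumed typical coax value)
SOURCE_R = 50.0         # ohms, assumed source/termination resistance

def rise_time_s(cable_length_ft):
    """Approximate 10-90% rise time of the lumped RC formed by the cable."""
    c_total = CAP_PER_FT * cable_length_ft
    return 2.2 * SOURCE_R * c_total

print(rise_time_s(6) * 1e9, "ns")    # short cable: edges barely affected
print(rise_time_s(100) * 1e9, "ns")  # long cable: noticeably slower edges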
Data Presentation
Ultrasonic data can be collected and displayed in a number of different formats. The three most common formats are known in the NDT world as A-scan, B-scan and C-scan presentations. Each presentation mode provides a different way of looking at and evaluating the region of material being inspected. Modern computerized ultrasonic scanning systems can display data in all three presentation forms simultaneously.
A-Scan Presentation
The A-scan presentation displays the amount of received ultrasonic energy as a function of time. The relative amount of received energy is plotted along the vertical axis and the elapsed time (which may be related to the sound energy travel time within the material) is displayed along the horizontal axis. Most instruments with an A-scan display allow the signal to be displayed in its natural radio frequency form (RF), as a fully rectified RF signal, or as either the positive or negative half of the RF signal. In the A-scan presentation, relative discontinuity size can be estimated by comparing the signal amplitude obtained from an unknown reflector to that from a known reflector. Reflector depth can be determined by the position of the signal on the horizontal sweep.
In the illustration of the A-scan presentation to the right, the initial pulse generated by the transducer is represented by the signal IP, which is near time zero. As the transducer is scanned along the surface of the part, four other signals are likely to appear at different times on the screen. When the transducer is in its far left position, only the IP signal and signal A, the sound energy reflecting from surface A, will be seen on the trace. As the transducer is scanned to the right, a signal from the backwall BW will appear later in time, showing that the sound has traveled farther to reach this surface. When the transducer is over flaw B, signal B will appear at a point on the time scale that is approximately halfway between the IP signal and the BW signal. Since the IP signal corresponds to the front surface of the material, this indicates that flaw B is about halfway between the front and back surfaces of the sample. When the transducer is moved over flaw C, signal C will appear earlier in time since the sound travel path is shorter and signal B will disappear since sound will no longer be reflecting from it.
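The depth calculation implied above is a one-liner: in pulse-echo mode the sound travels to the reflector and back, so depth is half the velocity-time product. The steel velocity and echo time below are assumed example values.

V_STEEL = 5900.0            # longitudinal velocity in steel, m/s (assumed)

def reflector_depth(echo_time_s, velocity=V_STEEL):
    """Depth (m) of a reflector from its round-trip echo time."""
    return velocity * echo_time_s / 2.0

print(reflector_depth(8.5e-6) * 1e3, "mm")  # a ~25 mm deep reflector echoes at ~8.5 us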
B-Scan Presentation
The B-scan presentation is a profile (cross-sectional) view of the test specimen. In the B-scan, the time-of-flight (travel time) of the sound energy is displayed along the vertical axis and the linear position of the transducer is displayed along the horizontal axis. From the B-scan, the depth of the reflector and its approximate linear dimensions in the scan direction can be determined. The B-scan is typically produced by establishing a trigger gate on the A-scan. Whenever the signal intensity is great enough to trigger the gate, a point is produced on the B-scan. The gate is triggered by the sound reflecting from the backwall of the specimen and by smaller reflectors within the material. In the B-scan image above, line A is produced as the transducer is scanned over the reduced thickness portion of the specimen. When the transducer moves to the right of this section, the backwall line BW is produced. When the transducer is over flaws B and C, lines that are similar to the lengths of the flaws and at similar depths within the material are drawn on the B-scan. It should be noted that a limitation of this display technique is that reflectors may be masked by larger reflectors near the surface.
C-Scan Presentation
The C-scan presentation provides a plan-type view of the location and size of test specimen features. The plane of the image is parallel to the scan pattern of the transducer. C-scan presentations are produced with an automated data acquisition system, such as a computer controlled immersion scanning system. Typically, a data collection gate is established on the A-scan and the amplitude or the time-of-flight of the signal is recorded at regular intervals as the transducer is scanned over the test piece. The relative signal amplitude or the time-of-flight is displayed as a shade of gray or a color for each of the positions where data was recorded. The C-scan presentation provides an image of the features that reflect and scatter the sound within and on the surfaces of the test piece.
High resolution scans can produce very detailed images. Below are two ultrasonic C-scan images of a US quarter. Both images were produced using a pulse-echo technique with the transducer scanned over the head side in an immersion scanning system. For the C-scan image on the left, the gate was set up to capture the amplitude of the sound reflecting from the front surface of the quarter. Light areas in the image indicate areas that reflected a greater amount of energy back to the transducer. In the C-scan image on the right, the gate was moved to record the intensity of the sound reflecting from the back surface of the coin. The details on the back surface are clearly visible but front surface features are also still visible since the sound energy is affected by these features as it travels through the front surface of the coin.



Error Analysis
All measurements, including ultrasonic measurements, however careful and scientific, are subject to some uncertainties. Error analysis is the study and evaluation of these uncertainties; its two main functions are to allow the practitioner to estimate how large the uncertainties are and to help him or her to reduce them when necessary. Because ultrasonics depends on measurements, evaluation and minimization of uncertainties is crucial.
In science the word "error" does not mean "mistake" or "blunder" but rather the inevitable uncertainty of all measurements. Because they cannot be avoided, errors in this context are not, strictly speaking, "mistakes." At best, they can be made as small as reasonably possible, and their size can be reliably estimated.
To illustrate the inevitable occurrence of uncertainties surrounding attempts at measurement, let us consider a carpenter who must measure the height of a doorway to an X-ray vault in order to install a door. As a first rough measurement, she might simply look at the doorway and estimate that it is 210 cm high. This crude "measurement" is certainly subject to uncertainty. If pressed, the carpenter might express this uncertainty by admitting that the height could be as little as 205 or as much as 215 cm.
If she wanted a more accurate measurement, she would use a tape measure, and she might find that the height is 211.3 cm. This measurement is certainly more precise than her original estimate, but it is obviously still subject to some uncertainty, since it is inconceivable that she could know the height to be exactly 211.3000 rather than 211.3001 cm, for example.
There are many reasons for this remaining uncertainty. Some of these causes of uncertainty could be removed if enough care were taken. For example, one source of uncertainty might be that poor lighting is making it difficult to read the tape; this could be corrected by improved lighting.
On the other hand, some sources of uncertainty are intrinsic to the process of measurement and can never be entirely removed. For instance, let us suppose the carpenter's tape is graduated in half-centimeters. The top of the door will probably not coincide precisely with one of the half-centimeter marks, and if it does not, then the carpenter must estimate just where the top lies between two marks. Even if the top happens to coincide with one of the marks, the mark itself is perhaps a millimeter wide, so she must estimate just where the top lies within the mark. In either case, the carpenter ultimately must estimate where the top of the door lies relative to the markings on her tape, and this necessity causes some uncertainty in her answer.
By buying a better tape with closer and finer markings, the carpenter can reduce her uncertainty, but she cannot eliminate it entirely. If she becomes obsessively determined to find the height of the door with the greatest precision that is technically possible, she could buy an expensive laser interferometer. But even the precision of an interferometer is limited to distances on the order of the wavelength of light (about 0.000005 meters). Although she would now be able to measure the height with fantastic precision, she still would not know the height of the doorway exactly.
Furthermore, as the carpenter strives for greater precision, she will encounter an important problem of principle. She will certainly find that the height is different in different places. Even in one place, she will find that the height varies if the temperature and humidity vary, or even if she accidentally rubs off a thin layer of dirt. In other words, she will find that there is no such thing as one exact height of the doorway. This kind of problem, called a "problem of definition" (the height of the door is not a well-defined quantity), plays an important role in many scientific measurements.
Our carpenter's experiences illustrate what is found to be generally true. No physical quantity (a thickness, time between pulse-echoes, a transducer position, etc.) can be measured with complete certainty. With care we may be able to reduce the uncertainties until they are extremely small, but to eliminate them entirely is impossible.
In everyday measurements we do not usually bother to discuss uncertainties. Sometimes the uncertainties are simply not interesting. If we say that the distance between home and school is 3 miles, it usually does not matter whether that figure is uncertain by a few hundred feet.
Normal Beam Inspection
Pulse-echo ultrasonic measurements can determine the location of a discontinuity in a part or structure by accurately measuring the time required for a short ultrasonic pulse generated by a transducer to travel through a thickness of material, reflect from the back wall or from the surface of a discontinuity, and be returned to the transducer. In most applications, this time interval is a few microseconds or less. The two-way transit time measured is divided by two to account for the down-and-back travel path and multiplied by the velocity of sound in the test material. The result is expressed in the well-known relationship
d = vt/2 or v = 2d/t
where d is the distance from the surface to the discontinuity in the test piece, v is the velocity of sound waves in the material, and t is the measured round-trip transit time.
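As a minimal numeric sketch of this relationship, the snippet below converts a measured round-trip transit time into a reflector depth; the velocity and time values are invented for illustration only.

```python
# Sketch: reflector depth from a pulse-echo transit time using d = v*t/2.
# The velocity and transit time below are illustrative, not measurements.

def reflector_depth(velocity_m_per_s, round_trip_time_s):
    """Depth of a reflector from the two-way transit time: d = v*t/2."""
    return velocity_m_per_s * round_trip_time_s / 2.0

v = 5800.0      # assumed longitudinal velocity in stainless steel, m/s
t = 8.6e-6      # measured round-trip transit time, s (illustrative)
print(f"depth = {reflector_depth(v, t) * 1000:.1f} mm")   # about 24.9 mm
```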
The diagram below allows you to move a transducer over the surface of a stainless steel test block and see return echoes as they would appear on an oscilloscope. The transducer employed is a 5 MHz broadband transducer 0.25 inches in diameter. The signals were generated with computer software similar to that found in the Thompson-Gray Measurement Model and UTSIM developed at the Center for Nondestructive Evaluation at Iowa State University.

Precision ultrasonic thickness gages usually operate at frequencies between 500 kHz and 100 MHz, by means of piezoelectric transducers that generate bursts of sound waves when excited by electrical pulses. A wide variety of transducers with various acoustic characteristics have been developed to meet the needs of industrial applications. Typically, lower frequencies are used to optimize penetration when measuring thick, highly attenuating or highly scattering materials, while higher frequencies will be recommended to optimize resolution in thinner, non-attenuating, non-scattering materials.
In thickness gauging, ultrasonic techniques permit quick and reliable measurement of thickness without requiring access to both sides of a part. Accuracies as high as ±1 micron or ±0.0001 inch can be achieved in some applications. It is possible to measure most engineering materials ultrasonically, including metals, plastics, ceramics, composites, epoxies, and glass as well as liquid levels and the thickness of certain biological specimens. On-line or in-process measurement of extruded plastics or rolled metal often is possible, as are measurements of single layers or coatings in multilayer materials. Modern handheld gages are simple to use and very reliable.
Angle Beams I
Angle Beam Transducers and wedges are typically used to introduce a refracted shear wave into the test material. An angled sound path allows the sound beam to come in from the side, thereby improving detectability of flaws in and around welded areas.








Angle Beams II
Angle Beam Transducers and wedges are typically used to introduce a refracted shear wave into the test material. The geometry of the sample below allows the sound beam to be reflected from the back wall to improve detectability of flaws in and around welded areas.

Crack Tip Diffraction
When the geometry of the part is relatively uncomplicated and the orientation of a flaw is well known, the length (a) of a crack can be determined by a technique known as tip diffraction. One common application of the tip diffraction technique is to determine the length of a crack originating from the backside of a flat plate as shown below. In this case, when an angle beam transducer is scanned over the area of the flaw, the principal echo comes from the base of the crack and locates the position of the flaw (Image 1). A second, much weaker echo comes from the tip of the crack; since the distance traveled by the ultrasound is less, the second signal appears earlier in time on the scope (Image 2).



Crack height (a) is a function of the ultrasound velocity (v) in the material, the incident angle, and the difference in arrival times between the two signals (dt). Since the incident angle and the thickness of the material are the same in both measurements, two similar right triangles are formed such that one can be overlaid on the other. A third similar right triangle is made, comprised of the crack, the length dt, and the incident angle. The variable dt is really the difference in time, but it can easily be converted to a distance by dividing the time in half (to get the one-way travel time) and multiplying this value by the velocity of the sound in the material. Using trigonometry, an equation for estimating crack height from these variables can be derived as shown below.

cos θ = a / (v · dt/2)

Solving for "a" the equation becomes

a = (v · dt/2) · cos θ
The equation is complete once the distance corresponding to dt is calculated by dividing the difference in time between the two signals (dt) by two and multiplying this value by the sound velocity.
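A short numeric sketch of this estimate follows; the shear velocity, time difference, and refracted angle are assumed values chosen only to illustrate the arithmetic.

```python
import math

# Sketch: crack height from tip diffraction, a = (v * dt / 2) * cos(theta),
# where theta is the refracted beam angle measured from the surface normal.
# All input values are illustrative assumptions.

def crack_height(velocity, dt, refracted_angle_deg):
    """Crack height from velocity, arrival-time difference, and beam angle."""
    one_way_path_difference = velocity * dt / 2.0
    return one_way_path_difference * math.cos(math.radians(refracted_angle_deg))

# Assumed shear velocity in steel ~3240 m/s, 45-degree refracted beam.
a = crack_height(3240.0, 1.5e-6, 45.0)
print(f"a = {a * 1000:.2f} mm")    # about 1.72 mm
```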

Automated Scanning
Ultrasonic scanning systems are used for automated data acquisition and imaging. They typically integrate ultrasonic instrumentation, a scanning bridge, and computer controls. The signal strength and/or the time-of-flight of the signal is measured for every point in the scan plan. The data values are plotted using colors or shades of gray to produce detailed images of the surface or internal features of a component. Systems are usually capable of displaying the data in A-, B- and C-scan modes simultaneously. With any ultrasonic scanning system there are two factors to consider:
1. how to generate and receive the ultrasound.
2. how to scan the transducer(s) with respect to the part being inspected.
The most common ultrasonic scanning systems involve the use of an immersion tank as shown in the image above. The ultrasonic transducer and the part are placed under water so that consistent coupling is maintained by the water path as the transducer or part is moved within the tank. However, scanning systems come in a large variety of configurations to meet specific inspection needs. In the image to the right, an engineer aligns the heads of a squirter system that uses a through-transmission technique to inspect aircraft composite structures. In this system, the ultrasound travels through columns of forced water which are scanned about the part with a robotic system. A variation of the squirter system is the "Dripless Bubbler" scanning system, which is discussed below.
It is often desirable to eliminate the need for the water coupling, and a number of state-of-the-art UT scanning systems have done this. Laser ultrasonic systems use laser beams to generate the ultrasound and collect the resulting signals in a noncontact mode. Advances in transducer technology have led to the development of an inspection technique known as air-coupled ultrasonic inspection. These systems are capable of sending ultrasonic energy through air and getting enough energy into the part to have a useable signal. These systems typically use a through-transmission technique since reflected energy from discontinuities is too weak to detect.
The second major consideration is how to scan the transducer(s) with respect to the part being inspected. When the sample being inspected has a flat surface, a simple raster-scan can be performed. If the sample is cylindrical, a turntable can be used to turn the sample while the transducer is held stationary or scanned in the axial direction of the cylinder. When the sample is irregularly shaped, scanning becomes more difficult. As illustrated in the beam modeling animation, curved surfaces can steer, focus and defocus the ultrasonic beam. For inspection applications involving parts having complex curvatures, scanning systems capable of performing contour following are usually necessary.
Precision Velocity Measurements
Changes in ultrasonic wave propagation speed, along with energy losses, from interactions with a material's microstructure are often used to nondestructively gain information about a material's properties. Measurements of sound velocity and ultrasonic wave attenuation can be related to the elastic properties that can be used to characterize the texture of polycrystalline metals. These measurements enable industry to replace destructive microscopic inspections with nondestructive methods.
Of interest in velocity measurements are longitudinal waves, which propagate in gases, liquids, and solids. In solids, transverse (shear) waves are also of interest. The longitudinal velocity is independent of sample geometry when the dimensions at right angles to the beam are large compared to the beam area and wavelength. The transverse velocity is affected little by the physical dimensions of the sample.
Pulse-Echo and Pulse-Echo-Overlap Methods
Rough ultrasonic velocity measurements are as simple as measuring the time it takes for a pulse of ultrasound to travel from one transducer to another (pitch-catch) or return to the same transducer (pulse-echo). Another method is to compare the phase of the detected sound wave with a reference signal: slight changes in the transducer separation are seen as slight phase changes, from which the sound velocity can be calculated. These methods are suitable for estimating acoustic velocity to about 1 part in 100. Standard practice for measuring velocity in materials is detailed in ASTM E494.
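The pulse-echo variant can be sketched in a few lines: locate two successive backwall echoes in a digitized A-scan and apply v = 2d/dt. The waveform below is synthetic (numpy assumed), standing in for an instrument's digitized record.

```python
import numpy as np

# Rough pulse-echo velocity sketch: locate two successive backwall echoes in
# a digitized A-scan and compute v = 2*d/dt. The waveform is synthetic; a
# real measurement would use the instrument's record.

fs = 100e6                          # sampling rate, Hz (assumed)
t = np.arange(4096) / fs
d = 0.025                           # known specimen thickness, m (assumed)
v_true = 5900.0                     # velocity used to build the test signal

def echo(t, t0, f0=5e6):            # Gaussian-windowed tone burst
    return np.exp(-((t - t0) / 0.4e-6) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

dt_true = 2 * d / v_true
signal = echo(t, 5e-6) + 0.5 * echo(t, 5e-6 + dt_true)

# Simple envelope-peak picking: take the largest peak, suppress it, repeat.
env = np.abs(signal)
i1 = np.argmax(env)
masked = env.copy()
masked[max(0, i1 - 200):i1 + 200] = 0.0
i2 = np.argmax(masked)

dt_meas = abs(i2 - i1) / fs
print(f"v = {2 * d / dt_meas:.0f} m/s")   # close to 5900 m/s
```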
Precision Velocity Measurements (using EMATs)
Electromagnetic-acoustic transducers (EMAT) generate ultrasound in the material being investigated. When a wire or coil is placed near to the surface of an electrically conducting object and is driven by a current at the desired ultrasonic frequency, eddy currents will be induced in a near surface region. If a static magnetic field is also present, these currents will experience Lorentz forces of the form
F = J x B
where F is a body force per unit volume, J is the induced dynamic current density, and B is the static magnetic induction.
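For concreteness, the body force is just a vector cross product; the sketch below evaluates it for invented values of J and B (numpy assumed).

```python
import numpy as np

# Minimal illustration of the Lorentz body force F = J x B on the induced
# eddy currents; the current density and field values are invented.

J = np.array([0.0, 1.0e6, 0.0])   # dynamic current density, A/m^2 (assumed)
B = np.array([0.0, 0.0, 0.5])     # static magnetic induction, T (assumed)
F = np.cross(J, B)                # body force per unit volume, N/m^3
print(F)                          # [500000.      0.      0.]
```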
The most important application of EMATs has been in nondestructive evaluation (NDE) applications such as flaw detection or material property characterization. Couplant free transduction allows operation without contact at elevated temperatures and in remote locations. The coil and magnet structure can also be designed to excite complex wave patterns and polarizations that would be difficult to realize with fluid coupled piezoelectric probes. In the inference of material properties from precise velocity or attenuation measurements, use of EMATs can eliminate errors associated with couplant variation, particularly in contact measurements.
Differential velocity is measured using a T1-T2---R fixed array of EMAT transducers at 0°, 45°, 90° or 0°, 90° relative rotational directions, depending on device configuration:


EMAT Driver Frequency: 450-600 kHz (nominal)
Sampling Period: 100 ns
Time Measurement Accuracy:
--Resolution 0.1 ns
--Accuracy required for less than 2 ksi stress measurements: Variance 2.47 ns
--Accuracy required for texture: Variance 10.0 ns
------W440 < 3.72E-5
------W420 < 1.47E-4
------W400 < 2.38E-4
Time Measurement Technique
Fourier transform phase-slope determination of the delta time between received RF bursts, (T2-R) - (T1-R), where the T1 and T2 EMATs are driven in series to eliminate differential phase shift due to probe liftoff.




Slope of the phase is determined by linear regression of weighted data points within the signal bandwidth and a weighted y-intercept. The accuracy obtained with this method can exceed one part in one hundred thousand (1:100,000).
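The idea behind the phase-slope technique can be sketched numerically: a pure delay dt between two similar bursts appears as a linear phase ramp, phase(f) = -2πf·dt, in the cross-spectrum, and a weighted straight-line fit of that phase over the signal bandwidth recovers dt. The burst, delay, and use of numpy below are assumptions for illustration, not EMAT data.

```python
import numpy as np

# Sketch of the phase-slope idea: a pure delay dt between two similar RF
# bursts gives a cross-spectrum phase of -2*pi*f*dt, and a weighted linear
# fit of that phase over the bandwidth recovers dt. Signals are synthetic.

fs = 100e6
t = np.arange(2048) / fs
burst = np.exp(-((t - 3e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 0.5e6 * t)

dt_true = 123.4e-9                              # delay to recover, s
f = np.fft.rfftfreq(t.size, 1 / fs)
S1 = np.fft.rfft(burst)
S2 = S1 * np.exp(-2j * np.pi * f * dt_true)     # ideally delayed copy

phase = np.unwrap(np.angle(S2 * np.conj(S1)))   # cross-spectrum phase
w = np.abs(S1) ** 2                              # weight by signal energy
slope = np.sum(w * f * phase) / np.sum(w * f ** 2)  # fit through the origin
print(f"dt = {-slope / (2 * np.pi) * 1e9:.2f} ns")   # 123.40 ns
```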
Attenuation Measurements
Ultrasonic wave propagation is influenced by the microstructure of the material through which it propagates. The velocity of the ultrasonic waves is influenced by the elastic moduli and the density of the material, which in turn are mainly governed by the amount of various phases present and the damage in the material. Ultrasonic attenuation, which is the sum of the absorption and the scattering, is mainly dependent upon the damping capacity and scattering from the grain boundaries in the material. However, fully characterizing the attenuation requires knowledge of a large number of thermo-physical parameters that in practice are hard to quantify.
Relative measurements, such as the change of attenuation, and simple qualitative tests are easier to make than absolute measurements. Relative attenuation measurements can be made by examining the exponential decay of multiple back surface reflections. However, significant variations in microstructural characteristics and mechanical properties often produce only a relatively small change in wave velocity and attenuation.
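A minimal sketch of such a relative measurement follows, assuming two successive backwall echo amplitudes read from the screen and a known thickness; transmission and reflection losses are ignored for simplicity.

```python
import math

# Sketch: relative attenuation from the decay of successive backwall echoes.
# Each extra round trip adds 2*d of path, so comparing echo amplitudes gives
# attenuation in dB per unit length (coupling/reflection losses ignored).

def attenuation_db_per_mm(a1, a2, thickness_mm):
    """Attenuation from two successive backwall echo amplitudes, a1 > a2."""
    return 20.0 * math.log10(a1 / a2) / (2.0 * thickness_mm)

# Illustrative amplitudes (screen heights) for a 25 mm thick sample:
print(f"{attenuation_db_per_mm(0.80, 0.55, 25.0):.3f} dB/mm")  # ~0.065 dB/mm
```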
Absolute measurements of attenuation are very difficult to obtain because the echo amplitude depends on factors in addition to attenuation. The most common method used to get quantitative results is to use an ultrasonic source and detector transducer separated by a known distance. By varying the separation distance, the attenuation can be measured from the changes in the amplitude. To get accurate results, the influence of coupling conditions must be carefully addressed. To overcome the problems related to conventional ultrasonic attenuation measurements, ultrasonic spectral parameters for frequency-dependent attenuation measurements, which are independent of coupling conditions, are also used. For example, the ratio of the amplitude of a higher frequency peak to that of a lower frequency peak has been used for microstructural characterization of some materials.
Spread Spectrum Ultrasonics
Spread spectrum ultrasonics makes use of the correlation of continuous signals rather than pulse-echo or pitch-catch techniques.
Spread spectrum ultrasonics is a patented broadband spread-spectrum ultrasonic nondestructive evaluation method. In conventional ultrasonics, a pulse or tone burst is transmitted, and the resulting echoes or through-transmission signals are received and analyzed.
In spread spectrum ultrasonics, encoded sound is continuously transmitted into the part or structure being tested. Instead of receiving echoes, spread spectrum ultrasonics generates an acoustic correlation signature having a one-to-one correspondence with the acoustic state of the part or structure (in its environment) at the instant of the measurement. In its simplest embodiment, the acoustic correlation signature is generated by cross correlating an encoding sequence, chosen for suitable cross correlation and autocorrelation properties, transmitted into the part or structure with the received signals returning from it.
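The correlation step is easy to sketch: cross-correlate the received signal against the transmitted code, and peaks appear at the propagation delays of each acoustic path. The code, the two-path "part," and the use of numpy below are invented for illustration.

```python
import numpy as np

# Sketch of the correlation idea behind spread spectrum ultrasonics: a long
# pseudo-random binary (biphase) code is transmitted continuously, and the
# received signal is cross-correlated with the code to form a signature.
# The "part" here is faked as a two-path impulse response.

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=4096)        # biphase encoding sequence

# Hypothetical acoustic state of the part: two propagation paths with
# different delays and strengths (e.g., direct path and one reflector).
received = np.zeros(code.size + 600)
received[40:40 + code.size] += 1.00 * code       # path 1, delay 40 samples
received[310:310 + code.size] += 0.35 * code     # path 2, delay 310 samples

# Acoustic correlation signature: cross-correlation of the received signal
# with the transmitted code. Peaks appear at the path delays.
signature = np.correlate(received, code, mode="valid")
print(np.argsort(signature)[-2:])                # path delays: 310 and 40
```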

Section of biphase modulated spread spectrum ultrasonic waveform
Multiple probes may be used to ensure that acoustic energy is propagated through all critical volumes of the structure. Triangulation may be incorporated with multiple probes to locate regions of detected distress. Spread spectrum ultrasonics can achieve very high sensitivity to acoustic propagation changes with a low level of energy.

Two significant applications of Spread Spectrum Ultrasonics are:
1. Large Structures that allow ultrasonic transducers to be "permanently" affixed to the structures, eliminating variations in transducer registration and couplant. Comparisons with subsequent acoustic correlation signatures can be used to monitor critical structures such as fracture critical bridge girders. In environments where structures experience a great many variables such as temperature, load, vibration, or environmental coupling, it is necessary to filter out these effects to obtain the correct measurements of defects.
In the example below, simulated defects were created by setting a couple of steel blocks on the top of the bridge girder.
Trial      Setup                                       Contact Area
Baseline   No flaw                                     --
Flaw 1     One block lying flat on girder              12.5 sq in
Flaw 2     One block standing on its long side         1.25 sq in
Flaw 3     Both blocks standing on their long sides    2.50 sq in
Flaw 4     Both blocks lying flat on girder            25.0 sq in


2. Piece-part assembly line environments where transducers and couplant may be precisely controlled, eliminating significant variations in transducer registration and couplant. Acoustic correlation signatures may be statistically compared to an ensemble of known "good" parts for sorting or accepting/rejecting criteria in a piece-part assembly line environment.
Impurities in the incoming steel used to forge piece parts may result in sulfide stringer inclusions. In this next example, simulated defects were created by placing a magnetized steel wire on the surface of a small steel cylindrical piston used in hydraulic transmissions.

Two discrimination techniques, SUF-1 and SUF-2, are tested here, with the latter giving the best discrimination between defect conditions. The important point is that spread spectrum ultrasonics can be extremely sensitive to the acoustic state of a part or structure being tested and is, therefore, a good ultrasonic candidate for testing and monitoring, especially where scanning is economically unfeasible.



Signal Processing Techniques
Signal processing involves techniques that improve our understanding of information contained in received ultrasonic data. Normally, when a signal is measured with an oscilloscope, it is viewed in the time domain (vertical axis is amplitude or voltage and the horizontal axis is time). For many signals, this is the most logical and intuitive way to view them. Simple signal processing often involves the use of gates to isolate the signal of interest or frequency filters to smooth or reject unwanted frequencies.
When the frequency content of the signal is of interest, it makes sense to view the signal graph in the frequency domain. In the frequency domain, the vertical axis is still voltage but the horizontal axis is frequency.

Time domain (left) and frequency domain magnitude (right) displays of the same signal
The frequency domain display shows how much of the signal's energy is present as a function of frequency. For a simple signal such as a sine wave, the frequency domain representation does not usually show us much additional information. However, with more complex signals, such as the response of a broad bandwidth transducer, the frequency domain gives a more useful view of the signal.
Fourier theory says that any complex periodic waveform can be decomposed into a set of sinusoids with different amplitudes, frequencies and phases. The process of doing this is called Fourier analysis, and the result is a set of amplitudes, phases, and frequencies for each of the sinusoids that makes up the complex waveform. Adding these sinusoids together again will reproduce exactly the original waveform. A plot of the amplitude or phase of these sinusoids against frequency is called a spectrum.
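The decomposition is easy to demonstrate numerically; the sketch below (numpy assumed, with invented frequencies and amplitudes) builds a waveform from two sinusoids, recovers their amplitudes and frequencies with the FFT, and checks that the inverse transform reproduces the waveform.

```python
import numpy as np

# Sketch of Fourier analysis: build a waveform from two sinusoids, recover
# their amplitudes and frequencies with the FFT, and confirm that summing
# the components (inverse transform) reproduces the original waveform.

fs = 1000                     # samples per second (assumed)
t = np.arange(fs) / fs        # one second of data
x = 2.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, 1 / fs)
amplitudes = 2.0 * np.abs(spectrum) / x.size     # single-sided amplitude

for k in sorted(np.argsort(amplitudes)[-2:]):
    print(f"{freqs[k]:.0f} Hz, amplitude {amplitudes[k]:.2f}")
# -> 50 Hz amplitude 2.00, 120 Hz amplitude 0.50

# The inverse transform reproduces the original waveform (within round-off).
assert np.allclose(np.fft.irfft(spectrum, n=x.size), x)
```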
The following Fourier Java applet, adapted with permission of Stanford University, allows the user to manipulate discrete time domain or frequency domain components and see the relationships between signals in time and frequency domains.
The top row (light blue color) represents the real and imaginary parts of the time domain. Normally the imaginary part of the time domain signal is identically zero.
The middle row (peach color) represents the real and imaginary parts of the frequency domain.
The bottom row (light green color) represents the magnitude (amplitude) and phase of the frequency domain signal. Magnitude is the square root of the sum of the squares of the real and imaginary components. Phase is the angular relationship of the real and imaginary components. Ultrasonic transducer manufacturers often provide plots of both time domain and frequency domain (magnitude) signals characteristic of each transducer. Use this applet to explore the relationship between time and frequency domains.
Exercise: Try replicating the time domain signal in the upper left box with a pattern similar to the image on the right. Note the resulting bandwidth in the frequency domain (magnitude) in the lower left box. Next try changing the magnitude, perhaps to more of a "mountain" shape tapering to zero. Note that "narrowing" the magnitude results in more cycles in the time domain signal.
Flaw Reconstruction Techniques
In nondestructive evaluation of structural material defects, the size, shape, and orientation of a flaw are important parameters in structural integrity assessment. To illustrate flaw reconstruction, a multiviewing ultrasonic transducer system is shown below. A single probe moved sequentially to achieve different perspectives would work equally well. The apparatus and the signal-processing algorithms were specifically designed at the Center for Nondestructive Evaluation to make use of the theoretical developments in elastic wave scattering in the long and intermediate wavelength regime.
Depicted schematically at the right is the multiprobe system consisting of a sparse array of seven unfocused immersion transducers. This system can be used to "focus" onto a target flaw in a solid by refraction at the surface. The six perimeter transducers are equally spaced on a 5.08 cm diameter ring, surrounding a center transducer. Each of the six perimeter transducers may be independently moved along its axis to allow an equalization of the propagation time for any pitch-catch or pulse-echo combinations. The system currently uses 0.25 in diameter transducers with a nominal center frequency of 10 MHz and a bandwidth extending from approximately 2 to 16 MHz. The axis of the aperture cone of the transducer assembly normally remains vertical and perpendicular to the part surface.
The flaw reconstruction algorithm normally makes use of 13 or 19 backscatter waveforms acquired in a conical pattern within the aperture. The data-acquisition and signal-processing protocol has four basic steps.
1. Step one involves the experimental setup, the location and focusing on a target flaw, and acquisition (in a predetermined pattern) of pitch-catch and pulse-echo backscatter waveforms.
2. Step two employs a measurement model to correct the backscatter waveforms for effects of attenuation, diffraction, interface losses, and transducer characteristics, thus resulting in absolute scattering amplitudes.
3. Step three employs a one-dimensional inverse Born approximation to extract a tangent plane to centroid radius estimate for each of the scattering amplitudes.
4. In step four the radius estimates and their corresponding look angles are used in a regression analysis program to determine the six ellipsoidal parameters, three semiaxes, and three Euler angles, defining an ellipsoid which best fits the data.
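To make step four concrete, here is an illustrative least-squares sketch, not the CNDE code: for an ellipsoid with semi-axes (a, b, c) and Euler-angle rotation R, the tangent-plane-to-centroid distance along a unit look direction u is sqrt(uᵀ·R·diag(a², b², c²)·Rᵀ·u), and the six parameters are fitted to the radius estimates. The look directions, noise level, and use of numpy/scipy are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Illustrative sketch of the ellipsoid regression (step 4), not the CNDE code.
# The tangent-plane-to-centroid distance of an ellipsoid along a unit look
# direction u is sqrt(u^T R diag(a^2, b^2, c^2) R^T u); the six parameters
# (three semi-axes, three Euler angles) are fitted to the radius estimates.

def support_distances(params, dirs):
    a, b, c, alpha, beta, gamma = params
    R = Rotation.from_euler("zyz", [alpha, beta, gamma]).as_matrix()
    M = R @ np.diag([a**2, b**2, c**2]) @ R.T
    return np.sqrt(np.einsum("ij,jk,ik->i", dirs, M, dirs))

rng = np.random.default_rng(1)
dirs = rng.normal(size=(19, 3))                     # 19 look directions
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

true = np.array([400.0, 400.0, 200.0, 0.3, 0.7, 0.1])   # microns / radians
radii = support_distances(true, dirs) * (1 + 0.02 * rng.normal(size=19))

fit = least_squares(lambda p: support_distances(p, dirs) - radii,
                    x0=[300.0, 300.0, 250.0, 0.2, 0.5, 0.2])
semi_axes = np.sort(np.abs(fit.x[:3]))[::-1]
print(np.round(semi_axes, 1))     # approximately [400, 400, 200]
```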
The inverse Born approximation sizes the flaw by computing the characteristic function of the flaw (defined as unity inside the flaw and zero outside the flaw) as a Fourier transform of the ultrasonic scattering amplitude. The one-dimensional inverse Born algorithm treats scattering data in each interrogation direction independently and has been shown to yield the size of ellipsoidal flaws (both voids and inclusions) in terms of the distance from the center of the flaw to the wavefront that is tangent to the front surface of the flaw. Using the multiprobe ultrasonic system, the 1-D inverse Born technique is used to reconstruct voids and inclusions that can be reasonably approximated by an equivalent ellipsoid. So far, the investigation has been confined to convex flaws with a center of inversion symmetry. The angular scan method described here is capable of locating the bisecting symmetry planes of a flaw. The utility of the multiprobe system is, therefore, expanded since two-dimensional elliptic reconstruction may now be made for the central slice. Additionally, the multiprobe system is well suited for the 3-D flaw reconstruction technique using 2-D slices.
The model-based reconstruction method has been previously applied to voids and inclusion flaws in solids. Since the least-squares regression analysis leading to the "best fit" ellipsoid is based on the tangent plane to centroid distances for the interrogation directions confined within a finite aperture, the success of reconstruction depends on the extent of the flaw surface "illuminated" by the various viewing directions. The extent of coverage of the flaw surface by the tangent planes is a function of the aperture size, flaw shape, and the flaw orientation. For example, a prolate spheroidal flaw with a large aspect ratio oriented along the axis of the aperture cone will only have one tip illuminated (i.e., covered by the tangent planes) and afford a low reconstruction reliability. For the same reason, orientation of the flaw also has a strong effect on the reconstruction accuracy.
The diagram on the right shows the difference in surface coverage of a tilted flaw and an untilted flaw subjected to the same insonification aperture. Both the experimental and simulation studies of the aperture effect reported before were conducted for oblate and prolate spheroids oriented essentially symmetrically with respect to the part surface and hence the aperture cone. From a flaw reconstruction standpoint, an oblate spheroid with its axis of rotational symmetry perpendicular to the part surface represents a high leverage situation. Likewise, a prolate spheroid with its symmetry axis parallel to the part surface also affords an easier reconstruction than a tilted prolate spheroid. In this CNDE project, we studied effects of flaw orientation on the reconstruction and derived a new data-acquisition approach that will improve reliability of the new reconstruction of arbitrarily oriented flaws.
The orientation of a flaw affects reconstruction results in the following ways.
1. For a given finite aperture, a change in flaw orientation will change the insonified surface area and hence change the "leverage" for reconstruction.
2. The scattering signal amplitude and the signal/noise ratio for any given interrogation direction depends on the flaw orientation.
3. Interference effects, such as those due to tip diffraction phenomena or flash points may be present at certain orientations. Of course, interdependencies exist in these effects, but for the sake of convenience they are discussed separately in the following.
Aperture
To assess the effects of finite aperture size on flaws of different orientation, computer simulations were performed for an oblate spheroid with semi-axes of 400, 400, and 200 µm in both tilted and untilted orientations with respect to the part surface. For each of the 13 scattering directions, the exact radius estimates Re (i.e., the tangent plane to centroid distances) were first computed, and a random error in sizing was then introduced to simulate the experimental situation. The radius estimate used was then taken to be
Re' = Re(1 + n)
where n is a randomly generated number between ±0.1. Using the Re' values for the various directions, a best fit ellipsoid is determined using a regression program. This process is repeated 100 times for each aperture angle, and the mean and standard deviation of the three semi-axes are expressed as a percentage of the expected values. The simulation was performed for the untilted case with the 400 x 400 µm plane parallel to the part surface and for a tilt angle of 40° from the normal of the part surface. The results are summarized in Table I.
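The perturbation model is simple to simulate; the sketch below (invented radius values, numpy assumed) only illustrates the error model and the scatter it produces, while the full study fed each perturbed set of estimates through the ellipsoid regression.

```python
import numpy as np

# Minimal sketch of the error model used in the simulation: each exact
# radius estimate Re is perturbed as Re' = Re(1 + n), n uniform in [-0.1, 0.1].
# The Re values are invented; the real study fed each perturbed set of 13
# estimates through the ellipsoid regression, 100 times per aperture angle.

rng = np.random.default_rng(2)
Re = np.array([400.0, 380.0, 350.0, 300.0, 250.0])   # illustrative radii, um
n = rng.uniform(-0.1, 0.1, size=(100, Re.size))      # 100 repetitions
Re_prime = Re * (1 + n)
print(Re_prime.std(axis=0) / Re * 100)               # roughly 5-6 % scatter
```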

The mean values for the ellipsoidal semi-axes converge to expected values, while the standard deviations converge to some asymptotic minimum. The values in Table I show that for a small aperture, the standard deviation as a percentage of expected value (an indication of the reconstruction error) is much higher for the oblate spheroid tilted at 40° with respect to the horizontal than for the 0° untilted case. As the aperture increases, the difference in reconstruction error approaches zero because surface illumination is sufficient to ensure a reliable reconstruction. Due to the combined effect of finite aperture and a priori unknown flaw orientation, a large aperture is desirable to increase reliability of reconstruction results.
Note that in this simulation only the aperture angle is increased, and the number of interrogation directions remains unchanged. The number of look directions is kept the same because the multiviewing system is intended for acquiring a sparse array of data based on speed considerations.
Signal/Noise Ratio
For a given scattering direction, the scattering amplitude and, therefore, the signal/noise ratio depend on the orientation of the flaw. In the short wavelength limit, the scattering amplitude is proportional to the square root of (R1 R2), with R1 and R2 being the principal radii of curvature of the flaw for the scattering direction used. This dependence is found to be important in the intermediate frequency regime as well. To illustrate this effect, the figure at the right shows the scattered signal amplitudes from a football-shaped prolate spheroidal void with two cusp-like tips in two directions: broadside and along the tips. The profile of the tips can increase the ratio of the two signal amplitudes to as large as 35.

To investigate the correlation between the accuracy of flaw sizing and signal/noise ratio of the flaw waveform at different scattering directions, a 400 x 400 x 200 µm oblate spheroidal void in titanium with its axis of rotational symmetry tilted at a 30° angle from normal to the part surface was reconstructed using the multiviewing transducer system. It was found that sizing results were generally more accurate for the scattering directions with a higher signal/noise ratio, as expected. Furthermore, the directions that gave the poorest signal/noise ratios were often ones closest to being in an edge-on perspective. The figure on the right shows the relationship between the percentage error of the radius estimate and signal/noise ratio of the flaw waveform. Reconstruction results of the oblate spheroid void tilted at 30° are listed in Table II.

The reconstruction results for both the semi-axes lengths and the tilt angle were improved by rejecting four data points with the lowest signal/noise ratios. Since the multiviewing transducer system provides a maximum of 19 independent look angles for a given tilt angle of the transducers, rejecting a small subset of the data points based on signal/noise considerations still leaves a sufficient number of data points for the ellipsoidal regression step, which requires a minimum of six data points.
Flash Point Interference
The multiview transducer system and associated signal-processing algorithms reconstruct a flaw based on a general ellipsoid model. For ellipsoids with a large aspect ratio and flaw shapes that approach those of a flat crack or a long needle, edge or tip diffractions due to points of stationary phase (flash points) governed by geometric acoustics become important. When such phenomena are present within the transducer bandwidth, the scattered signal frequency spectrum contains strong interference maxima and minima and renders radius estimates by the 1-D inverse Born difficult or impossible.
The figures below show a test flaw in the form of a copper wire segment embedded in a transparent thermoplastic disk and tilted at 45° with respect to the disk surface, and the frequency spectrum of the wire inclusion at a scattering angle of 21° from the wire axis. The strong interference pattern prevented the 1-D inverse Born algorithm from yielding a meaningful radius estimate. However, when the spectrum was analyzed on the assumption of flash point interference (without having to use the angle information), 321 µm was obtained for the path length difference of the stationary phase points in the scattering direction; this compared reasonably well with 374 µm for twice the tangent plane distance in this orientation.
Photomicrograph of copper wire segment tilted at 45° and embedded in thermoplastic. Each minor division of scale is 10 µm, and the wire segment is approximately a prolate spheroid with semi-axes Ax = 80 µm, Ay = 80 µm, and Az = 200 µm.
Spatial Data-acquisition Pattern For Arbitrarily Oriented Flaw
From the investigation described earlier, it is clear that reliable reconstruction of an arbitrarily oriented flaw generally requires a large aperture. However, a large viewing aperture perpendicular to the part surface may still contain scattering directions hampered by weak flaw signal amplitude (poor signal-to-noise ratio) and, in certain cases, flash point interference. A predetermined data-acquisition pattern that is relatively free from such disadvantages can improve reconstruction reliability. In this work we explored a method to predetermine a spatial pattern for data acquisition. This pattern affords a high leverage for reliable reconstruction for arbitrarily oriented flaws that can be approximated by the shape of a spheroid.
Consider a tilted prolate spheroid as shown on the right. We may define a vertical sagittal plane (VSP) as the plane that bisects the flaw and contains the z axis. We further define a perpendicular sagittal plane (PSP) as the plane bisecting the spheroid and perpendicular to the VSP. The intersection of the VSP and PSP (direction M in the diagram) then corresponds to a direction of maximum flaw signal amplitude. The orientation of the VSP can be located by a series of azimuthal scans at different polar angles. A maximum in the signal amplitude should be observed at the azimuthal angle of the VSP. This definition of the VSP and PSP and their relationship to backscattered flaw signal amplitude also holds true for an oblate spheroid. Shown below are the azimuthal scans at four different polar angles for the 2.5:1 prolate spheroid (wire segment) flaw. Once the azimuthal angle of the VSP is determined (30° in this case), a polar scan at the azimuthal angle of the VSP determines the tilt angle of the wire segment to be 41°, as compared to 45° from optical measurement.
Flaw signal amplitude as a function of azimuthal and polar angles.
The angular scans serve two very useful functions. First, they provide some information about the shape and orientation of the flaw. For example, a scan in the perpendicular sagittal plane can distinguish a prolate spheroid from an oblate spheroid by changing the polar angle and the azimuthal angle simultaneously. A scan in the PSP of the 2:1 oblate spheroid tilted at 30° showed a peak in flaw signal amplitude at the intersection of the VSP and the PSP (direction M), whereas a scan in the PSP of the tilted 2.5:1 prolate spheroid showed a constant flaw signal amplitude.
Second, it provides a basis for predetermining a spatial data-acquisition pattern that is equivalent to a tilted aperture cone centered at direction M. This data-acquisition pattern not only ensures good signal-to-noise ratio, avoids possible flash point interference due to end-on or edge-on perspectives, and provides a maximum illuminated area on the flaw surface, but also allows one to reconstruct the flaw with two mutually orthogonal elliptical cross sections in the VSP and PSP.
So far, the discussion of angular scans has been confined to flaws that are approximately spheroidal in shape. For a general ellipsoid with three unequal semi-axes and oriented arbitrarily in space, the angular scan results will be more complicated. For example, an azimuthal scan at different polar angles is not expected to show a peak at the same azimuthal angle. Shape and orientation information, in principle, can still be extracted from such data, and further investigations are underway for the general case.
Reconstruction Results
To verify the reconstruction method using the new spatial data-acquisition configuration experimentally, reconstructions were performed on two test specimens. The first flaw was the 400 µm long, 80 µm radius copper wire segment embedded in a thermoplastic disk. This flaw was used to approximate a prolate spheroid with a 2.5:1 aspect ratio. The axis of the wire segment was at a 45° angle relative to the part surface. The second flaw was a 400 x 200 µm oblate spheroidal void tilted at a 30° angle in a diffusion bonded titanium disk, as just described.
The flaw reconstruction procedure using an aperture cone perpendicular to the part surface was first carried out for the 2.5:1 prolate inclusion (copper wire) tilted at a 45° angle. Difficulties due to a poor signal-to-noise ratio and flash point interference associated with look directions close to the end-on perspective prevented a successful reconstruction; in fact, enough inconsistencies occurred in the tangent plane distance estimates that the regression step failed to converge.
Based on orientations of the sagittal planes determined in the angular scans, the new data-acquisition pattern equivalent to tilting the aperture axis to the direction of maximum signal strength was used. The ellipsoidal reconstruction gave a tilt angle of 42° and three semi-axes of 257, 87, and 81 µm. These results compared very favorably with the actual tilt angle of 45° and the actual semi-axes of 200, 80, and 80 µm.
The new data-acquisition pattern also allows one to reconstruct an arbitrarily tilted spheroidal flaw with the two mutually orthogonal elliptical cross-sectional cuts in the VSP and PSP. This was done for the copper wire inclusion. After identifying the vertical sagittal plane and the perpendicular sagittal plane, a series of tangent plane distance estimates were made for scattering directions confined in these two planes. Using these results, the two mutually orthogonal elliptical cross sections in the VSP and PSP were reconstructed using a similar regression program in 2-D. The two reconstructed ellipses were 266 x 83 µm and 80 x 75 µm, respectively, and the tilt angle was found to be 51°. Table III shows the results of the 3-D reconstruction using 19 look perspectives and the 2-D reconstruction of the ellipses in the VSP and PSP. Both reconstructions compared very favorably with the expected values. The greatest discrepancy is in the value of the semi-axis Ax; this is to be expected because the wire segment is approximately a prolate spheroid with two ends truncated.

The 2:1 oblate spheroidal void tilted at a 30° angle in a titanium disk was investigated, again following the procedure of predetermining a favorable data-acquisition pattern based on angular scan results. Table IV shows the reconstruction results using the new data-acquisition pattern equivalent to an aperture cone centered on the direction of maximum backscatter signal. As a comparison, reconstruction results using an aperture cone normal to the part surface (described earlier) are also shown. As can be seen, the improvement of the reconstruction by using the new data-acquisition pattern is not as dramatic as in the prolate inclusion case. This is consistent with the fact that the oblate spheroid has a smaller aspect ratio and a smaller tilt angle and is therefore not nearly as "low leverage" a flaw to reconstruct using the normal (untilted) data-acquisition pattern.

The reliability problem of reconstructing arbitrarily oriented flaws using the multiviewing transducer system and associated model-based algorithm has been studied. An arbitrarily oriented flaw may afford a low leverage for reconstructing the entire flaw based on limited surface area covered by the tangent planes in a finite aperture and, therefore, requires a greater aperture for a reliable reconstruction. However, the aperture size has practical limits in a single-side access inspection situation and a larger aperture does not necessarily alleviate such difficulties as poor signal-to-noise ratio and flash point interference associated with certain interrogation directions. In our study of reconstructing approximately spheroidal flaws oriented at some arbitrary angle, it was found beneficial to predetermine a spatial data-acquisition pattern based on angular dependence of the flaw signal amplitude. The new data-acquisition pattern is equivalent to tilting the interrogation aperture cone to compensate for the particular orientation of the flaw and restore the leverage for a more reliable reconstruction. This method worked well on two test cases.
Calibration Methods

Calibration refers to the act of evaluating and adjusting the precision and accuracy of measurement equipment. In ultrasonic testing, several forms of calibration must occur. First, the electronics of the equipment must be calibrated to ensure that they are performing as designed. This operation is usually performed by the equipment manufacturer and will not be discussed further in this material. It is also usually necessary for the operator to perform a "user calibration" of the equipment. This user calibration is necessary because most ultrasonic equipment can be reconfigured for use in a large variety of applications. The user must "calibrate" the system, which includes the equipment settings, the transducer, and the test setup, to validate that the desired level of precision and accuracy are achieved. The term calibration standard is usually only used when an absolute value is measured, and in many cases the standards are traceable back to standards at the National Institute of Standards and Technology.
In ultrasonic testing, there is also a need for reference standards. Reference standards are used to establish a general level of consistency in measurements and to help interpret and quantify the information contained in the received signal. Reference standards are used to validate that the equipment and the setup provide similar results from one day to the next and that similar results are produced by different systems. Reference standards also help the inspector to estimate the size of flaws. In a pulse-echo type setup, signal strength depends on both the size of the flaw and the distance between the flaw and the transducer. The inspector can use a reference standard with an artificially induced flaw of known size at approximately the same distance from the transducer to produce a signal. By comparing the signal from the reference standard to that received from the actual flaw, the inspector can estimate the flaw size.
This section will discuss some of the more common calibration and reference specimens that are used in ultrasonic inspection. Some of these specimens are shown in the figure above. Be aware that there are other standards available and that specially designed standards may be required for many applications. The information provided here is intended to serve as a general introduction to the standards and not as instruction on their proper use.
Introduction to the Common Standards
Calibration and reference standards for ultrasonic testing come in many shapes and sizes. The type of standard used is dependent on the NDE application and the form and shape of the object being evaluated. The material of the reference standard should be the same as the material being inspected and the artificially induced flaw should closely resemble that of the actual flaw. This second requirement is a major limitation of most standard reference samples. Most use drilled holes and notches that do not closely represent real flaws. In most cases the artificially induced defects in reference standards are better reflectors of sound energy (due to their flatter and smoother surfaces) and produce indications that are larger than those that a similar sized flaw would produce. Producing more "realistic" defects is cost prohibitive in most cases and, therefore, the inspector can only make an estimate of the flaw size. Computer programs that allow the inspector to create computer simulated models of the part and flaw may one day lessen this limitation.
The IIW Type Calibration Block

The standard shown in the above figure is commonly known in the US as an IIW type reference block. IIW is an acronym for the International Institute of Welding. It is referred to as an IIW "type" reference block because it was patterned after the "true" IIW block but does not conform to IIW requirements in IIS/IIW-23-59. "True" IIW blocks are only made out of steel (to be precise, killed, open hearth or electric furnace, low-carbon steel in the normalized condition with a grain size of McQuaid-Ehn #8) while IIW "type" blocks can be commercially obtained in a selection of materials. The dimensions of "true" IIW blocks are in metric units while IIW "type" blocks usually have English units. IIW "type" blocks may also include additional calibration and reference features such as notches, circular grooves, and scales that are not specified by IIW. There are two full-sized versions and a mini version of the IIW type blocks. The mini version is about one-half the size of the full-sized block and weighs only about one-fourth as much. The IIW type US-1 block was derived from the basic "true" IIW block and is shown below in the figure on the left. The IIW type US-2 block was developed for US Air Force application and is shown below in the center. The mini version is shown on the right.
IIW Type US-1

IIW Type US-2

IIW Type Mini

IIW type blocks are used to calibrate instruments for both angle beam and normal incident inspections. Some of their uses include setting metal-distance and sensitivity settings, determining the sound exit point and refracted angle of angle beam transducers, and evaluating depth resolution of normal beam inspection setups. Instructions on using the IIW type blocks can be found in the annex of American Society for Testing and Materials Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.
The Miniature Angle-Beam or ROMPAS Calibration Block

The miniature angle-beam block was designed for the US Air Force for use in the field for instrument calibration. The block is much smaller and lighter than the IIW block but performs many of the same functions. The miniature angle-beam block can be used to check the beam angle and exit point of the transducer. The block can also be used to make metal-distance and sensitivity calibrations for both angle and normal-beam inspection setups.
AWS Shearwave Distance/Sensitivity Calibration (DSC) Block

A block that closely resembles the miniature angle-beam block and is used in a similar way is the DSC AWS Block. This block is used to determine the beam exit point and refracted angle of angle-beam transducers and to calibrate distance and set the sensitivity for both normal and angle beam inspection setups. Instructions on using the DSC block can be found in the annex of American Society for Testing and Materials Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.
AWS Shearwave Distance Calibration (DC) Block

The DC AWS Block is a metal path distance and beam exit point calibration standard that conforms to the requirements of the American Welding Society (AWS) and the American Association of State Highway and Transportation Officials (AASHTO). Instructions on using the DC block can be found in the annex of American Society for Testing and Materials Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.
AWS Resolution Calibration (RC) Block

The RC Block is used to determine the resolution of angle beam transducers per the requirements of AWS and AASHTO. Engraved index markers are provided for 45, 60, and 70 degree refracted angle beams.
30 FBH Resolution Reference Block

The 30 FBH resolution reference block is used to evaluate the near-surface resolution and flaw size/depth sensitivity of a normal-beam setup. The block contains number 3 (3/64"), 5 (5/64"), and 8 (8/64") ASTM flat bottom holes at ten metal-distances ranging from 0.050 inch (1.27 mm) to 1.250 inch (31.75 mm).
Miniature Resolution Block

The miniature resolution block is used to evaluate the near-surface resolution and sensitivity of a normal-beam setup. It can be used to calibrate high-resolution thickness gages over the range of 0.015 inches (0.381 mm) to 0.125 inches (3.175 mm).
Step and Tapered Calibration Wedges

Step and tapered calibration wedges come in a large variety of sizes and configurations. Step wedges are typically manufactured with four or five steps, but custom wedges can be obtained with any number of steps. Tapered wedges have a constant taper over the desired thickness range.
Distance/Sensitivity (DS) Block

The DS test block is a calibration standard used to check the horizontal linearity and the dB accuracy per requirements of AWS and AASHTO.
Distance/Area-Amplitude Blocks
Distance/area amplitude correction blocks typically are purchased as a ten-block set, as shown above. Aluminum sets are manufactured per the requirements of ASTM E127 and steel sets per ASTM E428. Sets can also be purchased in titanium. Each block contains a single flat-bottomed, plugged hole. The hole sizes and metal path distances are as follows:
• 3/64" at 3"
• 5/64" at 1/8", 1/4", 1/2", 3/4", 11/2", 3", and 6"
• 8/64" at 3" and 6"
Sets are commonly sold in 4340 Vacuum melt Steel, 7075-T6 Aluminum, and Type 304 Corrosion Resistant Steel. Aluminum blocks are fabricated per the requirements of ASTM E127, Standard Practice for Fabricating and Checking Aluminum Alloy Ultrasonic Standard Reference Blocks. Steel blocks are fabricated per the requirements of ASTM E428, Standard Practice for Fabrication and Control of Steel Reference Blocks Used in Ultrasonic Inspection.
Area-Amplitude Blocks
Area-amplitude blocks are also usually purchased in an eight-block set and look very similar to Distance/Area-Amplitude Blocks. However, area-amplitude blocks have a constant 3-inch metal path distance and the hole sizes are varied from 1/64" to 8/64" in 1/64" steps. The blocks are used to determine the relationship between flaw size and signal amplitude by comparing signal responses for the different sized holes. Sets are commonly sold in 4340 Vacuum melt Steel, 7075-T6 Aluminum, and Type 304 Corrosion Resistant Steel. Aluminum blocks are fabricated per the requirements of ASTM E127, Standard Practice for Fabricating and Checking Aluminum Alloy Ultrasonic Standard Reference Blocks. Steel blocks are fabricated per the requirements of ASTM E428, Standard Practice for Fabrication and Control of Steel Reference Blocks Used in Ultrasonic Inspection.
Distance-Amplitude #3, #5, #8 FBH Blocks
Distance-amplitude blocks are also very similar to the distance/area-amplitude blocks pictured above. Nineteen-block sets with flat-bottom holes of a single size and varying metal path distances are also commercially available. Sets have either a #3 (3/64") FBH, a #5 (5/64") FBH, or a #8 (8/64") FBH. The metal path distances are 1/16", 1/8", 1/4", 3/8", 1/2", 5/8", 3/4", 7/8", 1", 1-1/4", 1-3/4", 2-1/4", 2-3/4", 3-1/4", 3-3/4", 4-1/4", 4-3/4", 5-1/4", and 5-3/4". The relationship between the metal path distance and the signal amplitude is determined by comparing signals from same-size flaws at different depths. Sets are commonly sold in 4340 Vacuum melt Steel, 7075-T6 Aluminum, and Type 304 Corrosion Resistant Steel. Aluminum blocks are fabricated per the requirements of ASTM E127, Standard Practice for Fabricating and Checking Aluminum Alloy Ultrasonic Standard Reference Blocks. Steel blocks are fabricated per the requirements of ASTM E428, Standard Practice for Fabrication and Control of Steel Reference Blocks Used in Ultrasonic Inspection.
Distance Amplitude Correction (DAC)
Acoustic signals from the same reflecting surface will have different amplitudes at different distances from the transducer. Distance amplitude correction (DAC) provides a means of establishing a graphic "reference level sensitivity" as a function of sweep distance on the A-scan display. The use of DAC allows signals reflected from similar discontinuities to be evaluated where signal attenuation as a function of depth has been correlated. Most often, DAC compensates graphically on the A-scan display for the loss in amplitude over material depth (time), but the correction can also be applied electronically by certain instruments. Because near field length and beam spread vary according to transducer size and frequency, and materials vary in attenuation and velocity, a DAC curve must be established for each different situation. DAC may be employed in both longitudinal and shear modes of operation, as well as in either contact or immersion inspection techniques.
A distance amplitude correction curve is constructed from the peak amplitude responses from reflectors of equal area at different distances in the same material. A-scan echoes are displayed at their non-electronically compensated height and the peak amplitude of each signal is marked on the flaw detector screen or, preferably, on a transparent plastic sheet attached to the screen. Reference standards which incorporate side drilled holes (SDH), flat bottom holes (FBH), or notches whereby the reflectors are located at varying depths are commonly used. It is important to recognize that regardless of the type of reflector used, the size and shape of the reflector must be constant. Commercially available reference standards for constructing DAC include ASTM Distance/Area Amplitude and ASTM E1158 Distance Amplitude blocks, NAVSHIPS Test block, and ASME Basic Calibration Blocks.
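As a rough illustration of how the marked peak amplitudes might be turned into a usable curve in software, the Python sketch below linearly interpolates between recorded calibration points. The amplitude and distance values are hypothetical, not taken from any standard:

    import numpy as np

    # Hypothetical peak amplitudes (% of full screen height) recorded from
    # identical reflectors at increasing metal path distances (inches).
    path = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    peak = np.array([80.0, 55.0, 38.0, 26.0, 18.0])

    def dac_level(distance):
        # Reference amplitude at a given sweep distance, interpolated
        # linearly between the recorded calibration points.
        return np.interp(distance, path, peak)

    # Evaluate an indication at a 2.5" metal path against the DAC level.
    print(f"DAC reference at 2.5 in: {dac_level(2.5):.1f} % FSH")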
The following applet shows a test block with a side drilled hole. The transducer was chosen so that the signal from the shortest pulse-echo path is in the far-field. The transducer may be moved to find signals at depth ratios of 1, 3, 5, and 7. Red points are "drawn" at the peaks of the signals and are used to form the distance amplitude correction curve, drawn in blue. Start by pressing the green "Test now!" button. After determining the amplitudes for the four path lengths, press "Draw DAC" and then press the green "Test now!" button again.
Thompson-Gray Measurement Model
The Thompson-Gray Measurement Model allows the approximate prediction of ultrasonic scattering measurements made through liquid-solid interfaces. Liquid-solid interfaces are common in physical inspection scenarios. The model allows us to make predictions about received ultrasonic signals scattered from various classes of defects. The model predicts an absolute scattering amplitude in the sense that amplitudes are correct and transducer and system characteristics are removed by deconvolution techniques.
Work begun in the early 1980s continues to be refined and has resulted in an increasingly valuable working tool for comparison of ultrasonic theory and experiment. The Thompson-Gray Measurement Model is at the heart of UTSIM (see section 5.4 Ultrasonic Simulation - UTSIM).
The validity of any model rests on how well its predictions agree with experiment. Shown below are three examples taken from the J. Acoust. Soc. Am., 74(4), October 1983 paper entitled "A model relating ultrasonic scattering measurements through liquid-solid interfaces to unbounded medium scattering amplitudes."

Comparison of theory and experimental magnitude of longitudinal pitch-catch scattering amplitude for a 114 µm radius tin-lead solder sphere in a Lucite cylindrical disk. Illumination was at normal incidence and reception at an 8° angle (15° in the solid).

Comparison of theory and experimental magnitude of longitudinal pitch-catch scattering amplitude for a 114 µm radius tin-lead solder sphere in a Lucite cylindrical disk. Illumination was at normal incidence and reception at a 15.7° angle (30° in the solid).

Comparison of theory and experimental magnitude of longitudinal pitch-catch scattering amplitude for a 114 µm radius tin-lead solder sphere in a Lucite cylindrical disk. Illumination was at normal incidence and reception at a 22.5° angle (45° in the solid).
The relationship between scattering data (obtained from ultrasonic experiments in which the waves are excited and detected in a finite measurement geometry) and unbounded-medium farfield scattering amplitudes forms the basis of an ultrasonic measurement model.

Geometry of theoretical scattering calculation
For a scatterer in a single fluid medium, a Green's function approach is used to develop an approximate but absolute relationship between these experimental and theoretical cases.
Electromechanical reciprocity relationships are then employed to generalize to a two medium case in which the scatterer is located in an elastic solid which, along with the ultrasonic transducer, is immersed in a fluid medium.
The scattering of elastic waves by a flaw in an unbounded solid (e.g., a crack, void, or inclusion) is often characterized by a scattering amplitude A, which defines the spherically spreading wave scattered into the farfield when the flaw is "illuminated" by a unit amplitude plane wave, as schematically illustrated in the above diagram. However, measurements of scattering are always made with transducers of finite aperture, at finite distances from the scatterer. Furthermore, the transducer is often immersed in a fluid medium, and the wave has passed through the liquid-solid interface twice during the measurement.
In principle, complete theoretical scattering solutions can be developed for this more complex scattering situation. However, even the introduction of the liquid-solid interface significantly complicates the elastic wave scattering and further introduction of finite beam effects in an exact manner would generally lead to computational complexity, which would severely restrict the use of the results in the routine interpretation of experiments.
An alternative point of view would be to view the unbounded medium scattering amplitude A as a canonical solution and to develop approximate expressions, which relate this to the solutions for the more complex measurement geometries. This point of view is routinely adopted in studies of the acoustic scattering (e.g. sonar) from various obstacles. In this case, the problem is greatly simplified by the fact that: (a) the fluid medium only supports a single wave type, (b) the waves do not pass through a refracting and mode converting interface, and (c) calibration experiments can be performed with arbitrary relative positions of transducers and reflecting surfaces to eliminate diffraction effects.
Ultrasonic Simulation - UTSIM
UTSIM is a user interface integrating a CAD model representing a part under inspection and an ultrasound beam model. The beam model can accurately predict fields for focused or planar beams, through curved or flat interfaces in a 3D space. Isotropic materials are required for the current implementation. Within UTSIM, the geometrical boundary conditions are automatically set up when the user points the virtual probe at the 3D solid model. It is fast enough to generate waveforms from flaws in real time.

Grain Noise Modeling
In recent years, a number of theoretical models have been developed at Iowa State University to predict the electrical voltage signals seen during ultrasonic inspections of metal components. For example, the Thompson-Gray measurement model can predict the absolute voltage of the echo from a small defect, given information about the host metal (information such as density, sound speeds, surface curvature, etc.), the defect (size, shape, location, etc.), and the inspection system (water path, transducer characteristics, reference echo from a calibration block, etc.). If an additional metal property which characterizes the inherent noisiness of the metal microstructure is known, the independent scatterer model can be used to predict the absolute root-mean-squared (rms) level of the ultrasonic grain noise seen during an inspection. By combining the two models, signal-to-noise (S/N) ratios can be calculated.
Accurate model calculations often require intensive computer calculations. However, by making a number of approximations in the formalism, it is possible to obtain rapid first-order estimates of noise levels and S/N ratios. These calculations are for normal-incidence pulse-echo inspections through flat or curved surfaces, and the flaw may be a flat crack or a spherical inclusion. The figure below shows the results of one of the calculations.
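The final step of combining the two model outputs is simple arithmetic; the sketch below shows only that step, with hypothetical model outputs (the measurement and noise models themselves are far more involved):

    import math

    # Hypothetical model outputs, in arbitrary but consistent voltage units:
    defect_signal = 0.25    # peak defect echo from the measurement model
    rms_grain_noise = 0.04  # rms grain noise from the independent scatterer model

    snr = defect_signal / rms_grain_noise
    print(f"S/N = {snr:.1f} ({20 * math.log10(snr):.1f} dB)")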

References & Standards
What are standards?
Standards are documented agreements containing technical specifications or other precise criteria to be used consistently as rules, guidelines, or definitions of characteristics, in order to ensure that materials, products, processes, and services are fit for their purpose.
For example, the format of the credit cards, phone cards, and "smart" cards that have become commonplace is derived from an ISO International Standard. Adhering to the standard, which defines such features as an optimal thickness (0.76 mm), means that the cards can be used worldwide.
An important source of practice codes, standards, and recommendations for NDT is the Annual Book of the American Society for Testing and Materials (ASTM). Volume 03.03, Nondestructive Testing, is revised annually, covering acoustic emission, eddy current, leak testing, liquid penetrants, magnetic particle, radiography, thermography, and ultrasonics.
There are many efforts on the part of the National Institute of Standards and Technology (NIST) and other standards organizations, both national and international, to work through technical issues and harmonize national and international standards.
History of Radiography
Approximate Radiographic Equivalence Factors
Based on Energy Level

Metal               100 kV   150 kV   200 kV   250 kV   400 kV   1 MeV   Ir-192   Co-60
Magnesium           0.05     0.05     0.08     -        -        -       -        -
Aluminum (pure)     0.08     0.12     0.18     -        -        -       0.35     0.35
Aluminum Alloy      0.10     0.14     0.18     -        -        -       0.35     0.35
Titanium            -        0.40     0.35     -        0.71     0.9     0.9      0.9
Iron/All Steels     1.0      1.0      1.0      1.0      1.0      1.0     1.0      1.0
Copper              1.5      1.6      1.4      1.4      1.4      1.1     1.1      1.1
Zinc                -        1.4      1.3      -        1.3      -       1.1      1.0
Brass*              -        1.9      1.8      1.6      1.3      1.2     1.1      1.0
Inconel X           -        1.4      1.3      -        1.3      1.3     1.3      1.3
Monel               1.7      -        1.2      -        -        -       -        -
Zirconium           2.4      2.3      2.0      1.7      1.5      1.0     1.2      1.0
Lead                14.0     14.0     12.0     -        -        5.0     4.0      2.3
Uranium             -        -        20.0     16.0     12.0     4.0     12.6     3.4
* The values for brass will change if tin or lead is included in the alloy.

X-rays were discovered in 1895 by Wilhelm Conrad Roentgen (1845-1923), who was a Professor at Wuerzburg University in Germany. Working with a cathode-ray tube in his laboratory, Roentgen observed a fluorescent glow of crystals on a table near his tube. The tube that Roentgen was working with consisted of a glass envelope (bulb) with positive and negative electrodes encapsulated in it. The air in the tube was evacuated, and when a high voltage was applied, the tube produced a fluorescent glow. Roentgen shielded the tube with heavy black paper and discovered a green-colored fluorescent light generated by a material located a few feet away from the tube.
Present State of Radiography
In many ways, radiography has changed little from the early days of its use. We still capture a shadow image on film using procedures and processes similar to those technicians were using in the late 1800s. Today, however, we are able to generate images of higher quality and greater sensitivity through the use of higher quality films with a larger variety of film grain sizes. Film processing has evolved to an automated state, producing more consistent film quality by removing manual processing variables. Electronics and computers now allow technicians to capture images digitally. The use of "filmless radiography" provides a means of capturing an image, digitally enhancing it, sending the image anywhere in the world, and archiving an image that will not deteriorate with time. Technological advances have provided industry with smaller, lighter, and very portable equipment that produces high quality X-rays. The use of linear accelerators provides a means of generating extremely short wavelength, highly penetrating radiation, a concept dreamed of only a few short years ago.
While the process has changed little, technology has evolved allowing radiography to be widely used in numerous areas of inspection. Radiography has seen expanded usage in industry to inspect not only welds and castings, but to radiographically inspect items such as airbags and canned food products. Radiography has found use in metallurgical material identification and security systems at airports and other facilities.
Gamma ray inspection has also changed considerably since the Curies' discovery of radium. Man-made isotopes of today are far stronger and offer the technician a wide range of energy levels and half-lives. The technician can select Co-60 which will effectively penetrate very thick materials, or select a lower energy isotope, such as Tm-170, which can be used to inspect plastics and very thin or low density materials. Today gamma rays find wide application in industries such as petrochemical, casting, welding, and aerospace.
Addressing Health Concerns
It was in the Manhattan District of the US Army Corps of Engineers that the name "health physics" was born and great advances were made in radiation safety. From the onset, the leaders of the Manhattan District recognized that a new and intense source of radiation and radioactivity would be created. In the summer of 1942, the leaders asked Ernest O. Wollan, a cosmic ray physicist at the University of Chicago, to form a group to study and control radiation hazards. Thus, Wollan was the first to bear the title of health physicist. He was soon joined by Carl G. Gamertsfelder, a recently graduated physics baccalaureate, and Herbert M. Parker, the noted British-American medical physicist. By mid 1943, six others had been added: Karl Z. Morgan, James C. Hart, Robert R. Coveyou, O.G. Landsverk, L.A. Pardue, and John E. Rose.
Within the Manhattan District, the name "health physicist" seems to have been derived in part from the need for secrecy (and hence a code name for radiation protection activities) and the fact that it was a group of mostly physicists working on health related problems. Activities included developing appropriate monitoring instruments, physical controls, administrative procedures, monitoring radiation areas, personnel monitoring, and radioactive waste disposal. It was in the Manhattan District that many of the modern concepts of protection were born, including the rem unit, which took into account the biological effectiveness of the radiation. It was in the Manhattan District that radiation protection concepts realized maturity and enforceability.
Future Direction of Radiographic Education
Although many of the methods and techniques developed over a century ago remain in use, computers are slowly becoming a part of radiographic inspection. The future of radiography will likely see many changes. As noted earlier, companies are performing many inspections without the aid of film.
Radiographers of the future will capture images in digitized form and e-mail them to the customer when the inspection has been completed. Film evaluation will likely be left to computers. Inspectors may capture a digitized image, feed it into a computer, and wait for a printout of the image with an accept/reject report. Systems will be able to scan a part and present a three-dimensional image to the radiographer, helping him or her to locate the defect within the part.
Inspectors in the future will be able to peel away layer after layer of a part to evaluate the material in much greater detail. Color images, much like computer generated ultrasonic C-scans of today, will make interpretation of indications much more reliable and less time consuming.
Educational techniques and materials will need to be revised and updated to keep pace with technology and meet the requirements of industry. These needs may well be met with computers. Computer programs can simulate radiographic inspections using a computer aided design (CAD) model of a part to produce physically accurate simulated x-ray radiographic images. Programs allow the operator to select different parts to inspect, adjust the placement and orientation of the part to obtain the proper equipment/part relationships, and adjust all the usual x-ray generator settings to arrive at the desired radiographic film exposure.
Computer simulation will likely have its greatest impact in the classroom, allowing the student to see results in almost real-time. Simulators and computers may well become the primary tool for instructors as well as students in the technical classroom.
Nature of Penetrating Radiation

The Electromagnetic Spectrum
X-rays and gamma rays differ only in their source of origin. X-rays are produced by an x-ray generator, and gamma radiation is the product of radioactive atoms. They are both part of the electromagnetic spectrum. They are waveforms, as are light rays, microwaves, and radio waves. X-rays and gamma rays cannot be seen, felt, or heard. They possess no charge and no mass and, therefore, are not influenced by electrical and magnetic fields and will generally travel in straight lines. However, they can be diffracted (bent) in a manner similar to light.
Both X-rays and gamma rays can be characterized by frequency, wavelength, and velocity. However, they also act somewhat like particles in that they occur as small "packets" of energy referred to as "photons." Due to their short wavelength, they have more energy than the other forms of energy in the electromagnetic spectrum and can penetrate matter more readily. As they pass through matter, they are scattered and absorbed, and the degree of penetration depends on the kind of matter and the energy of the rays.
Properties of X-Rays and Gamma Rays
• They are not detected by human senses (cannot be seen, heard, felt, etc.).
• They travel in straight lines at the speed of light.
• Their paths cannot be changed by electrical or magnetic fields.
• They can be diffracted to a small degree at interfaces between two different materials.
• They pass through matter until they have a chance encounter with an atomic particle.
• Their degree of penetration depends on their energy and the matter they are traveling through.
• They have enough energy to ionize matter and can damage or destroy living cells.
X-Radiation

X-rays are just like any other kind of electromagnetic radiation. They can be produced in parcels of energy called photons, just like light. There are two different atomic processes that can produce X-ray photons. One is called Bremsstrahlung and is a German term meaning "braking radiation." The other is called K-shell emission. They can both occur in the heavy atoms of tungsten. Tungsten is often the material chosen for the target or anode of the x-ray tube.
Both ways of making X-rays involve a change in the state of electrons. However, Bremsstrahlung is easier to understand using the classical idea that radiation is emitted when the velocity of the electron shot at the tungsten changes. The negatively charged electron slows down after swinging around the nucleus of a positively charged tungsten atom. This energy loss produces X-radiation. Electrons are scattered elastically and inelastically by the positively charged nucleus. The inelastically scattered electron loses energy, which appears as Bremsstrahlung. Elastically scattered electrons (which include backscattered electrons) are generally scattered through larger angles. In the interaction, many photons of different wavelengths are produced, but none of the photons have more energy than the electron had to begin with. After emitting the spectrum of X-ray radiation, the original electron is slowed down or stopped.
Bremsstrahlung Radiation
X-ray tubes produce x-ray photons by accelerating a stream of electrons to energies of several hundred kiloelectron-volts (velocities that are a substantial fraction of the speed of light) and colliding them into a heavy target material. The abrupt deceleration of the charged particles (electrons) produces Bremsstrahlung photons. X-ray radiation with a continuous spectrum of energies is produced, ranging from a few keV to a maximum equal to the energy of the electron beam. Target materials for industrial tubes are typically tungsten, which means that accurate spectrum calculations require the wave functions of the bound tungsten electrons. The inherent filtration of an X-ray tube must also be computed; it is controlled by how far the electrons penetrate into the surface of the target and by the type of vacuum window present.
The bremsstrahlung photons generated within the target material are attenuated as they pass through, typically, 50 microns of target material. The beam is further attenuated by the aluminum or beryllium vacuum window. The result is the elimination of the low energy photons, 1 keV through 15 keV, and a significant reduction in the portion of the spectrum from 15 keV through 50 keV. The spectrum from an x-ray tube is further modified by the filtration caused by the selection of filters used in the setup.
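One well-known consequence of this process is that no Bremsstrahlung photon can carry more energy than the incident electron, which fixes a minimum emitted wavelength (the Duane-Hunt limit). A minimal sketch of that calculation:

    # Duane-Hunt limit: the shortest Bremsstrahlung wavelength for a given
    # tube voltage, since no photon can exceed the electron's energy.
    H = 6.626e-34         # Planck's constant, J*s
    C = 2.998e8           # speed of light, m/s
    E_CHARGE = 1.602e-19  # electron charge, C

    def min_wavelength_m(tube_kv):
        # Minimum emitted wavelength (meters) for a tube voltage in kV.
        return H * C / (E_CHARGE * tube_kv * 1e3)

    for kv in (100, 200, 400):
        print(f"{kv} kV tube: lambda_min = {min_wavelength_m(kv) * 1e12:.1f} pm")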
The applet below allows the user to visualize an electron accelerating and interacting with a heavy target material. The graph keeps a record of the bremsstrahlung photon counts as a function of energy. After a few events, the "building up" of the graph may be automated by pressing the "automate" button.

K-shell Emission Radiation
Remember that atoms have their electrons arranged in closed "shells" of different energies. The K-shell is the lowest energy state of an atom. An incoming electron can give a K-shell electron enough energy to knock it out of its energy state. About 0.1% of the electrons produce K-shell vacancies; most produce heat. Then, a tungsten electron of higher energy (from an outer shell) can fall into the K-shell. The energy lost by the falling electron shows up in an emitted x-ray photon. Meanwhile, higher energy electrons fall into the vacated energy state in the outer shell, and so on. K-shell emission produces higher-intensity x-rays than Bremsstrahlung, and the x-ray photon comes out at a single wavelength.
When outer-shell electrons drop into inner shells, they emit a quantized photon "characteristic" of the element. The energies of the characteristic X-rays produced are only very weakly dependent on the chemical structure in which the atom is bound, indicating that the non-bonding shells of atoms are the X-ray source. The resulting characteristic spectrum is superimposed on the continuum as shown in the graphs below. An atom remains ionized for a very short time (about 10^-14 seconds), and thus an atom can be repeatedly ionized by the incident electrons, which arrive about every 10^-12 seconds.

Gamma Radiation
Gamma radiation is one of the three types of natural radioactivity. Gamma rays are electromagnetic radiation, like X-rays. The other two types of natural radioactivity are alpha and beta radiation, which are in the form of particles. Gamma rays are the most energetic form of electromagnetic radiation, with a very short wavelength of less than one-tenth of a nanometer.
Gamma radiation is the product of radioactive atoms. Depending upon the ratio of neutrons to protons within its nucleus, an isotope of a particular element may be stable or unstable. When the binding energy is not strong enough to hold the nucleus of an atom together, the atom is said to be unstable. Atoms with unstable nuclei are constantly changing as a result of the imbalance of energy within the nucleus. Over time, the nuclei of unstable isotopes spontaneously disintegrate, or transform, in a process known as radioactive decay. Various types of penetrating radiation may be emitted from the nucleus and/or its surrounding electrons. Nuclides which undergo radioactive decay are called radionuclides. Any material which contains measurable amounts of one or more radionuclides is a radioactive material.
Types of Radiation Produced by Radioactive Decay
When an atom undergoes radioactive decay, it emits one or more forms of radiation with sufficient energy to ionize the atoms with which it interacts. Ionizing radiation can consist of high speed subatomic particles ejected from the nucleus or electromagnetic radiation (gamma-rays) emitted by either the nucleus or orbital electrons.

Alpha Particles
Certain radionuclides of high atomic mass (Ra-226, U-238, Pu-239) decay by the emission of alpha particles. These alpha particles are tightly bound units of two neutrons and two protons each (He-4 nucleus) and have a positive charge. Emission of an alpha particle from the nucleus results in a decrease of two units of atomic number (Z) and four units of mass number (A). Alpha particles are emitted with discrete energies characteristic of the particular transformation from which they originate. All alpha particles from a particular radionuclide transformation will have identical energies.
Beta Particles
A nucleus with an unstable ratio of neutrons to protons may decay through the emission of a high speed electron called a beta particle. This results in a net change of one unit of atomic number (Z). Beta particles have a negative charge and the beta particles emitted by a specific radionuclide will range in energy from near zero up to a maximum value, which is characteristic of the particular transformation.
Gamma-rays
A nucleus which is in an excited state may emit one or more photons (packets of electromagnetic radiation) of discrete energies. The emission of gamma rays does not alter the number of protons or neutrons in the nucleus but instead has the effect of moving the nucleus from a higher to a lower energy state (unstable to stable). Gamma ray emission frequently follows beta decay, alpha decay, and other nuclear decay processes.
Activity (of Radionuclides)

The quantity which expresses the degree of radioactivity, or the radiation-producing potential of a given amount of radioactive material, is activity. The curie was originally defined as that amount of any radioactive material that disintegrates at the same rate as one gram of pure radium. The curie has since been defined more precisely as a quantity of radioactive material in which 3.7 x 10^10 atoms disintegrate per second. The International System (SI) unit for activity is the becquerel (Bq), which is that quantity of radioactive material in which one atom is transformed per second. The radioactivity of a given amount of radioactive material does not depend upon the mass of material present. For example, two one-curie sources of Cs-137 might have very different masses depending upon the relative proportion of non-radioactive atoms present in each source.
The concentration of radioactivity, or the relationship between the mass of radioactive material and the activity, is called "specific activity." Specific activity is expressed as the number of curies or becquerels per unit mass or volume. Each gram of Cobalt-60 will contain approximately 50 curies. Iridium-192 will contain 350 curies for every gram of material. The shorter the half-life, the less material is required to produce a given activity. The higher specific activity of Iridium results in physically smaller sources. This allows technicians to place the source in closer proximity to the film while maintaining geometric unsharpness requirements on the radiograph. These requirements might not be met if a source with a low specific activity were used at similar source-to-film distances.
Isotope Decay Rate (Half-Life)
Each radionuclide decays at its own unique rate, which cannot be altered by any chemical or physical process. A useful measure of this rate is the half-life of the radionuclide. Half-life is defined as the time required for the activity of any particular radionuclide to decrease to one-half of its initial value. In other words, one-half of the atoms have reverted to a more stable state. Half-lives of radionuclides range from microseconds to billions of years. The half-lives of two widely used industrial isotopes are 74 days for Iridium-192 and 5.3 years for Cobalt-60. More exact values can be calculated for these materials; however, these times are commonly used.
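The definition of half-life translates directly into the decay relationship A(t) = A0 * 0.5^(t / T-half). A minimal sketch using the Iridium-192 half-life quoted above:

    # Activity remaining after time t, given the half-life:
    # A(t) = A0 * 0.5 ** (t / half_life)
    def activity(a0_curies, t_days, half_life_days):
        return a0_curies * 0.5 ** (t_days / half_life_days)

    # A 100 Ci Ir-192 source (74-day half-life) after one and two half-lives:
    for t in (74, 148):
        print(f"Ir-192 after {t} days: {activity(100, t, 74):.1f} Ci")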
The applet below offers an interactive representation of radioactive decay series. The five series represented are Th-232, Ir-192, Co-60, Ga-75, and C-14. Use the radio buttons to select the series that you would like to study. Note that Carbon-14 is not used in radiography, but is one of many useful radioactive isotopes used to determine the age of fossils. If you are interested in learning more about Carbon-14 Dating, follow this link: Carbon-14 Dating.
The Sequence Info button displays a chart that depicts the path of the series with atomic numbers indicated on the vertical axis on the left, and the number of neutrons shown along the bottom. Colored arrows represent alpha and beta decays. To return to the main user interface, click the "Dismiss" button.
Initially, a selected series contains all parent material, and the amount is represented by a colored bar on a vertical logarithmic scale. Each line represents a factor of ten. In order to step forward through the sequence by a specified number of years, you may type the appropriate number into the "Time Step" field and hit "Enter." A negative time step will backtrack through the sequence.
You may choose a step interval in years and progress through each step by pressing the "Enter" key. The "Animate" button will automate the progress through the series. You can either choose a time step before you animate or leave it at zero. If the time step is left at zero, the system will choose time steps to optimize viewing performance.
Ionization
As penetrating radiation moves from point to point in matter, it loses its energy through various interactions with the atoms it encounters. The rate at which this energy loss occurs depends upon the type and energy of the radiation and the density and atomic composition of the matter through which it is passing.
The various types of penetrating radiation impart their energy to matter primarily through excitation and ionization of orbital electrons. The term "excitation" is used to describe an interaction where electrons acquire energy from a passing charged particle but are not removed completely from their atom. Excited electrons may subsequently emit energy in the form of x-rays during the process of returning to a lower energy state. The term "ionization" refers to the complete removal of an electron from an atom following the transfer of energy from a passing charged particle. In describing the intensity of ionization, the term "specific ionization" is often used. This is defined as the number of ion pairs formed per unit path length for a given type of radiation.
Because of their double charge and relatively slow velocity, alpha particles have a high specific ionization and a relatively short range in matter (a few centimeters in air and only fractions of a millimeter in tissue). Beta particles have a much lower specific ionization than alpha particles and, generally, a greater range. For example, the relatively energetic beta particles from P-32 have a maximum range of 7 meters in air and 8 millimeters in tissue. The low energy betas from H-3, on the other hand, are stopped by only 6 millimeters of air or 6 micrometers of tissue.
Gamma-rays, x-rays, and neutrons are referred to as indirectly ionizing radiation since, having no charge, they do not directly apply impulses to orbital electrons as do alpha and beta particles. Electromagnetic radiation proceeds through matter until there is a chance of interaction with a particle. If the particle is an electron, it may receive enough energy to be ionized, whereupon it causes further ionization by direct interactions with other electrons. As a result, indirectly ionizing radiation (e.g. gamma, x-rays, and neutrons) can cause the liberation of directly ionizing particles (electrons) deep inside a medium. Because these neutral radiations undergo only chance encounters with matter, they do not have finite ranges, but rather are attenuated in an exponential manner. In other words, a given gamma ray has a definite probability of passing through any medium of any depth.
Neutrons lose energy in matter by collisions which transfer kinetic energy. This process is called moderation and is most effective if the matter the neutrons collide with has about the same mass as the neutron. Once slowed down to the same average energy as the matter being interacted with (thermal energies), the neutrons have a much greater chance of interacting with a nucleus. Such interactions can result in material becoming radioactive or can cause radiation to be given off.
Newton's Inverse Square Law
Any point source which spreads its influence equally in all directions without a limit to its range will obey the inverse square law. This comes from strictly geometrical considerations. The intensity of the influence at any given radius (r) is the source strength divided by the area of the sphere. Being strictly geometric in its origin, the inverse square law applies to diverse phenomena. Point sources of gravitational force, electric field, light, sound, and radiation obey the inverse square law.
As one of the fields which obey the general inverse square law, a point radiation source can be characterized by the diagram above whether you are talking about Roentgens, rads, or rems. All measures of exposure will drop off by the inverse square law. For example, if the radiation exposure is 100 mR/hr at 1 inch from a source, the exposure will be 0.01 mR/hr at 100 inches.
The applet below shows a radioactive source. The distance to the green source is shown below. You can also drag the little person and his Geiger counter around to a distance of your choice. When the mouse button is released, a point is plotted on the graph. The dosage the person receives at the particular distance is shown numerically and graphically. The graph allows you to confirm Newton's Inverse Square Law.
If the distance is too small, the dosage will be too high and our brave technician will face severe medical effects. To clear the graph, select a new material, or the same one again. Moving the mouse from the white area to the gray will turn off the sound!

What dosage in mR/hr is considered safe? Better find out!
The red dosage lines represent 2, 5, and 100 mR/hr levels.
Exercise: Assume you are standing three feet from a 15 curie Cobalt-60 source. How many mR/hr are you receiving?
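One way to work the exercise numerically is to combine the inverse square law with a nominal exposure rate constant for Co-60 of about 1.32 R·m²/(hr·Ci), a commonly quoted value (exact figures vary slightly by reference), so treat the result as an estimate:

    # Estimate the exposure rate from a point Co-60 source using the
    # inverse square law. GAMMA_CO60 is a nominal exposure rate constant
    # in R*m^2/(hr*Ci); published values vary slightly.
    GAMMA_CO60 = 1.32

    def exposure_mr_per_hr(activity_ci, distance_m):
        return GAMMA_CO60 * activity_ci / distance_m ** 2 * 1000.0  # mR/hr

    d = 3 * 0.3048  # three feet, in meters
    print(f"15 Ci Co-60 at 3 ft: ~{exposure_mr_per_hr(15, d):,.0f} mR/hr")

Doubling the distance would cut this estimate by a factor of four, as the inverse square law requires.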
Interaction Between Penetrating Radiation and Matter
When x-rays or gamma rays are directed into an object, some of the photons interact with the particles of the matter and their energy can be absorbed or scattered. This absorption and scattering is called attenuation. Other photons travel completely through the object without interacting with any of the material's particles. The number of photons transmitted through a material depends on the thickness, density and atomic number of the material, and the energy of the individual photons.
Even when they have the same energy, photons travel different distances within a material simply based on the probability of their encounter with one or more of the particles of the matter and the type of encounter that occurs. Since the probability of an encounter increases with the distance traveled, the number of photons reaching a specific point within the matter decreases exponentially with distance traveled. As shown in the graphic to the right, if 1000 photons are aimed at ten 1 cm layers of a material and there is a 10% chance of a photon being attenuated in each layer, then 100 photons will be attenuated in the first layer. This leaves 900 photons to travel into the next layer, where 10% of these photons will be attenuated. By continuing this progression, the exponential shape of the curve becomes apparent.
The formula that describes this curve is:

I = I0 · e^(-µx)

The factor that indicates how much attenuation will take place per cm (10% in this example) is known as the linear attenuation coefficient, µ. The above equation and the linear attenuation coefficient will be discussed in more detail on the following page.
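The layer-by-layer progression described above is easy to reproduce; a minimal sketch:

    # Track 1000 photons through ten 1 cm layers, each attenuating 10%.
    photons = 1000.0
    for layer in range(1, 11):
        photons *= 0.9  # 10% of the photons are attenuated in this layer
        print(f"after layer {layer:2d}: {photons:6.1f} photons remain")
    # The remaining count falls off exponentially: 1000 * 0.9**x.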
Transmitted Intensity and Linear Attenuation Coefficient
For a narrow beam of mono-energetic photons, the change in x-ray beam intensity at some distance in a material can be expressed in the form of an equation as:

dI = -I · n · σ · dx

Where: dI = the change in intensity
I = the initial intensity
n = the number of atoms/cm³
σ = a proportionality constant that reflects the total probability of a photon being scattered or absorbed
dx = the incremental thickness of material traversed
When this equation is integrated, it becomes:

I = I0 · e^(-nσx)

The number of atoms/cm³ (n) and the proportionality constant (σ) are usually combined to yield the linear attenuation coefficient (µ). Therefore the equation becomes:

I = I0 · e^(-µx)

Where: I = the intensity of photons transmitted across some distance x
I0 = the initial intensity of photons
µ = the linear attenuation coefficient
x = distance traveled
The Linear Attenuation Coefficient (µ)
The linear attenuation coefficient (µ) describes the fraction of a beam of x-rays or gamma rays that is absorbed or scattered per unit thickness of the absorber. This value basically accounts for the number of atoms in a cubic cm volume of material and the probability of a photon being scattered or absorbed from the nucleus or an electron of one of these atoms. The linear attenuation coefficients for a variety of materials and x-ray energies are available in various reference books.
Using the transmitted intensity equation above, linear attenuation coefficients can be used to make a number of calculations (a short code sketch follows the list below). These include:
• the intensity of the energy transmitted through a material when the incident x-ray intensity, the material and the material thickness are known.
• the intensity of the incident x-ray energy when the transmitted x-ray intensity, material, and material thickness are known.
• the thickness of the material when the incident and transmitted intensity, and the material are known.
• the material composition can be estimated from the value of µ when the incident and transmitted intensities and the material thickness are known.
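As a sketch of the first and third calculations in the list above (the attenuation coefficient used here is an arbitrary illustrative value, not a reference number for any real material):

    import math

    MU = 0.6  # illustrative linear attenuation coefficient, 1/cm

    # Transmitted intensity through a known thickness:
    def transmitted(i0, x_cm, mu=MU):
        return i0 * math.exp(-mu * x_cm)

    # Thickness from measured incident and transmitted intensities:
    def thickness(i0, i, mu=MU):
        return math.log(i0 / i) / mu

    print(f"I/I0 through 2 cm: {transmitted(1.0, 2.0):.3f}")
    print(f"thickness for I/I0 = 0.30: {thickness(1.0, 0.30):.2f} cm")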
Half-Value Layer
The thickness of any given material where 50% of the incident energy has been attenuated is known as the half-value layer (HVL). The HVL is expressed in units of distance (mm or cm). Like the attenuation coefficient, it is photon energy dependent. Increasing the penetrating energy of a stream of photons will result in an increase in a material's HVL.
The HVL is inversely proportional to the attenuation coefficient. If an incident intensity of 1 and a transmitted intensity of 0.5 are plugged into the equation introduced on the preceding page, it can be seen that the HVL multiplied by µ must equal 0.693:

0.5 = 1 · e^(-µ·HVL)

If x is the HVL, then µ times HVL must equal 0.693 (since 0.693 is the value of the exponent that gives e^(-0.693) = 0.5).
Therefore, the HVL and µ are related as follows:

HVL = 0.693 / µ
The HVL is often used in radiography simply because it is easier to remember values and perform simple calculations. In a shielding calculation, such as illustrated to the right, it can be seen that if the thickness of one HVL is known, it is possible to quickly determine how much material is needed to reduce the intensity to less than 1%.
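Because each HVL halves the intensity, reducing the transmission to less than 1% takes seven half-value layers (0.5^7 ≈ 0.78%). A minimal sketch using the Co-60/lead HVL from the table below:

    import math

    # Number of half-value layers needed to reach a target transmission:
    def hvls_needed(target_fraction):
        return math.ceil(math.log(target_fraction, 0.5))

    n = hvls_needed(0.01)    # 7 HVLs for less than 1% transmission
    hvl_pb_co60_mm = 12.5    # lead HVL for Co-60, from the table below
    print(f"{n} HVLs -> {0.5 ** n:.2%} transmitted")
    print(f"lead required: {n * hvl_pb_co60_mm:.1f} mm")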
Approximate HVL for Various Materials when Radiation is from a Gamma Source
Half-Value Layer, mm (inch)
Source Concrete Steel Lead Tungsten Uranium
Iridium-192 44.5 (1.75) 12.7 (0.5) 4.8 (0.19) 3.3 (0.13) 2.8 (0.11)
Cobalt-60 60.5 (2.38) 21.6 (0.85) 12.5 (0.49) 7.9 (0.31) 6.9 (0.27)
Approximate Half-Value Layer for Various Materials when Radiation is from an X-ray Source
Half-Value Layer, mm (inch)
Peak Voltage (kVp) Lead Concrete
50 0.06 (0.002) 4.32 (0.170)
100 0.27 (0.010) 15.10 (0.595)
150 0.30 (0.012) 22.32 (0.879)
200 0.52 (0.021) 25.0 (0.984)
250 0.88 (0.035) 28.0 (1.102)
300 1.47 (0.055) 31.21 (1.229)
400 2.5 (0.098) 33.0 (1.299)
1000 7.9 (0.311) 44.45 (1.75)
Note: The values presented on this page are intended for educational purposes. Other sources of information should be consulted when designing shielding for radiation sources.


Sources of Attenuation
The attenuation that results due to the interaction between penetrating radiation and matter is not a simple process. A single interaction event between a primary x-ray photon and a particle of matter does not usually result in the photon changing to some other form of energy and effectively disappearing. Several interaction events are usually involved, and the total attenuation is the sum of the attenuation due to different types of interactions. These interactions include the photoelectric effect, scattering, and pair production. The figure below shows an approximation of the total absorption coefficient (µ), in red, for iron plotted as a function of radiation energy. The four radiation-matter interactions that contribute to the total absorption are shown in black. The four types of interactions are: photoelectric (PE), Compton scattering (C), pair production (PP), and Thomson or Rayleigh scattering (R). Since most industrial radiography is done in the 0.1 to 1.5 MeV range, it can be seen from the plot that photoelectric and Compton scattering account for the majority of attenuation encountered.

Summary of different mechanisms that cause attenuation of an incident x-ray beam
Photoelectric (PE) absorption of x-rays occurs when the x-ray photon is absorbed, resulting in the ejection of electrons from the outer shell of the atom, and hence the ionization of the atom. Subsequently, the ionized atom returns to the neutral state with the emission of an x-ray characteristic of the atom. This subsequent emission of lower energy photons is generally absorbed and does not contribute to (or hinder) the image-making process. Photoelectric absorption is the dominant process for x-ray absorption up to energies of about 500 keV. It is also dominant for atoms of high atomic number.
Compton scattering (C) occurs when the incident x-ray photon is deflected from its original path by an interaction with an electron. The electron gains energy and is ejected from its orbital position. The x-ray photon loses energy due to the interaction but continues to travel through the material along an altered path. Since the scattered x-ray photon has less energy, it, therefore, has a longer wavelength than the incident photon. The event is also known as incoherent scattering because the photon energy change resulting from an interaction is not always orderly and consistent. The energy shift depends on the angle of scattering and not on the nature of the scattering medium. Click here for more information on Compton scattering and the relationship between the scatter angle and photon energy.
Pair production (PP) can occur when the x-ray photon energy is greater than 1.02 MeV, but really only becomes significant at energies around 10 MeV. Pair production occurs when an electron and positron are created with the annihilation of the x-ray photon. Positrons are very short lived and disappear (positron annihilation) with the formation of two photons of 0.51 MeV energy. Pair production is of particular importance when high-energy photons pass through materials of a high atomic number.
Below are other interaction phenomena that can occur. Under special circumstances these may need to be considered, but they are generally negligible.
Thomson scattering (R), also known as Rayleigh, coherent, or classical scattering, occurs when the x-ray photon interacts with the whole atom so that the photon is scattered with no change in internal energy to the scattering atom, nor to the x-ray photon. Thomson scattering is never more than a minor contributor to the absorption coefficient. The scattering occurs without the loss of energy. Scattering is mainly in the forward direction.
Photodisintegration (PD) is the process by which the x-ray photon is captured by the nucleus of the atom with the ejection of a particle from the nucleus when all the energy of the x-ray is given to the nucleus. Because of the enormously high energies involved, this process may be neglected for the energies of x-rays used in radiography.
Effect of Photon Energy on Attenuation
Absorption characteristics will increase or decrease as the energy of the x-ray is increased or decreased. Since attenuation characteristics of materials are important in the development of contrast in a radiograph, an understanding of the relationship between material thickness, absorption properties, and photon energy is fundamental to producing a quality radiograph. A radiograph with higher contrast will provide greater probability of detection of a given discontinuity. An understanding of absorption is also necessary when designing x-ray and gamma ray shielding, cabinets, or exposure vaults.
The applet below can be used to investigate the effect that photon energy has on the type of interaction that the photon is likely to have with a particle of the material (shown in gray). Various materials and material thicknesses may be selected, and the x-ray energy can be set over a range from 1 to 199 keV. Notice as various experiments are run with the applet that low energy radiation produces predominantly photoelectric events and higher energy x-rays produce predominantly Compton scattering events. Also notice that if the energy is too low, none of the radiation penetrates the material.

This second applet is similar to the one above except that the voltage (kVp) for a typical generic x-ray tube source can be selected. The applet displays the spectrum of photon energies (without any filtering) that the x-ray source produces at the selected voltage. Pressing the "Emit X-ray" button will show the interaction that will occur from one photon with an energy within the spectrum. Pressing the "Auto" button will show the interactions from a large number of photons.
Compton Scattering
As mentioned on the previous page, Compton scattering occurs when the incident x-ray photon is deflected from its original path by an interaction with an electron. The electron is ejected from its orbital position, and the x-ray photon loses energy because of the interaction but continues to travel through the material along an altered path. Energy and momentum are conserved in this process. The energy shift depends on the angle of scattering and not on the nature of the scattering medium. Since the scattered x-ray photon has less energy, it has a longer wavelength and is less penetrating than the incident photon.
The Compton effect was first observed by Arthur Compton in 1923, and this discovery led to his award of the 1927 Nobel Prize in Physics. The discovery is important because it demonstrates that light cannot be explained purely as a wave phenomenon. Compton's work convinced the scientific community that light can behave as a stream of particles (photons) whose energy is proportional to the frequency.
The change in wavelength of the scattered photon is given by:

λ' - λ = (h / (me · c)) · (1 - cos θ)

Where: λ = wavelength of incident x-ray photon
λ' = wavelength of scattered x-ray photon
h = Planck's constant: the fundamental constant equal to the ratio of the energy E of a quantum of energy to its frequency ν: E = hν
me = the mass of an electron at rest
c = the speed of light
θ = the scattering angle of the scattered photon
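A minimal sketch evaluating the scattered photon energy from this relationship, using standard values for the physical constants:

    import math

    H = 6.626e-34    # Planck's constant, J*s
    ME = 9.109e-31   # electron rest mass, kg
    C = 2.998e8      # speed of light, m/s
    KEV = 1.602e-16  # joules per keV

    def compton_scattered_kev(incident_kev, theta_deg):
        # Scattered photon energy (keV): convert to wavelength, apply the
        # Compton shift, and convert back to energy.
        lam = H * C / (incident_kev * KEV)
        shift = (H / (ME * C)) * (1 - math.cos(math.radians(theta_deg)))
        return H * C / ((lam + shift) * KEV)

    # A 200 keV photon scattered through 90 degrees retains about 144 keV.
    print(f"scattered energy: {compton_scattered_kev(200, 90):.1f} keV")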
The applet below demonstrates Compton scattering as calculated with the Klein-Nishina formula, which provides an accurate prediction of the angular distribution of x-rays and gamma-rays that are incident upon a single electron. Before this formula was derived, the electron cross section had been classically derived by the British physicist and discoverer of the electron, J.J. Thomson. However, scattering experiments showed significant deviations from the results predicted by Thomson's model. The Klein-Nishina formula incorporates the Breit-Dirac recoil factor, R, also known as radiation pressure. The formula also corrects for relativistic quantum mechanics and takes into account the interaction of the spin and magnetic moment of the electron with electromagnetic radiation. (Quantum mechanics is a system of mechanics based on quantum theory, developed to provide a consistent explanation of both electromagnetic waves and atomic structure.)
The applet shows that when a photon of a given energy hits an atom, it is sometimes reflected in a different direction. At the same time, it loses energy to an electron that is ejected from the atom. Theta is the angle between the scattered photon direction and the path of the incident photon. Phi is the angle between the scattered electron direction and the path of the incident photon.
Geometric Unsharpness
Geometric unsharpness refers to the loss of definition that is the result of geometric factors of the radiographic equipment and setup. It occurs because the radiation does not originate from a single point but rather over an area. Consider the images below which show two sources of different sizes, the paths of the radiation from each edge of the source to each edge of the feature of the sample, the locations where this radiation will expose the film and the density profile across the film. In the first image, the radiation originates at a very small source. Since all of the radiation originates from basically the same point, very little geometric unsharpness is produced in the image. In the second image, the source size is larger and the different paths that the rays of radiation can take from their point of origin in the source causes the edges of the notch to be less defined.

The three factors controlling unsharpness are source size, source-to-object distance, and object-to-detector distance. The source size is obtained by referencing the manufacturer's specifications for a given X-ray or gamma ray source. Industrial x-ray tubes often have focal spots of about 1.5 mm square, but microfocus systems have spot sizes in the 30 micron range. As the source size decreases, the geometric unsharpness also decreases. For a given size source, the unsharpness can also be decreased by increasing the source-to-object distance, but this comes with a reduction in radiation intensity.
The object-to-detector distance is usually kept as small as possible to help minimize unsharpness. However, there are situations, such as when using geometric enlargement, when the object is separated from the detector, which will reduce the definition. The applet below allows the geometric unsharpness to be visualized as the source size, source-to-object distance, and source-to-detector distance are varied. The area of varying density at the edge of a feature that results due to geometric factors is called the penumbra. The penumbra is the gray area seen in the applet.

Codes and standards used in industrial radiography require that geometric unsharpness be limited. In general, the allowable amount is 1/100 of the material thickness up to a maximum of 0.040 inch. These values refer to the degree of penumbra shadow in a radiographic image. Since the penumbra is not nearly as well defined as shown in the image to the right, it is difficult to measure it in a radiograph. Therefore it is typically calculated. The source size must be obtained from the equipment manufacturer or measured. Then the unsharpness can be calculated using measurements made of the setup.
For the case, such as that shown to the right, where a sample of significant thickness is placed adjacent to the detector, the following formula is used to calculate the maximum amount of unsharpness due to specimen thickness:
Ug = f * b/a
f = source focal-spot size
a = distance from the source to front surface of the object
b = the thickness of the object
For the case when the detector is not placed next to the sample, such as when geometric magnification is being used, the calculation becomes:
Ug = f * b/a
f = source focal-spot size
a = distance from the x-ray source to the front surface of the material/object
b = distance from the front surface of the object to the detector
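Both cases reduce to the same arithmetic; a minimal sketch with hypothetical setup dimensions:

    # Geometric unsharpness: Ug = f * b / a
    def geometric_unsharpness(f, a, b):
        # f: focal spot size; a: source-to-object distance; b: object
        # thickness (or object-to-detector distance). Same units throughout.
        return f * b / a

    # Hypothetical setup: 3 mm focal spot, 600 mm source-to-object
    # distance, 50 mm thick specimen placed against the detector.
    print(f"Ug = {geometric_unsharpness(3.0, 600.0, 50.0):.3f} mm")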
Filters in Radiography
At x-ray energies, filters consist of material placed in the useful beam to preferentially absorb radiation based on energy level or to modify the spatial distribution of the beam. Filtration is required to absorb the lower-energy x-ray photons emitted by the tube before they reach the part being inspected. The use of filters produces a cleaner image by absorbing the lower energy x-ray photons that tend to scatter more.
The total filtration of the beam includes the inherent filtration (composed of part of the x-ray tube and tube housing) and the added filtration (thin sheets of a metal inserted in the x-ray beam). Filters are typically placed at or near the x-ray port in the direct path of the x-ray beam. Placing a thin sheet of copper between the part and the film cassette has also proven an effective method of filtration.
For industrial radiography, the filters added to the x-ray beam are most often constructed of high atomic number materials such as lead, copper, or brass. Filters for medical radiography are usually made of aluminum (Al). The amounts of both the inherent and the added filtration are stated in mm of Al or mm of Al equivalent. The amount of filtration of the x-ray beam is specified based on the voltage potential (kV) used to produce the beam. The thickness of filter materials depends on atomic number, kilovoltage setting, and the desired filtration factor.
Gamma radiography produces relatively high-energy, essentially monochromatic radiation; therefore, filtration is not a useful technique and is seldom used.
Secondary (Scatter) Radiation and Undercut Control
Secondary (Scatter) Radiation
Secondary or scatter radiation must often be taken into consideration when producing a radiograph. The scattered photons create a loss of contrast and definition. Often secondary radiation is thought of as radiation striking the film reflected from an object in the immediate area, such as a wall, or from the table or floor where the part is resting. Side scatter originates from walls, or objects on the source side of the film. Control of side scatter can be achieved by moving objects in the room away from the film, moving the x-ray tube to the center of the vault, or placing a collimator at the exit port, thus reducing the diverging radiation surrounding the central beam.
Scatter radiation reaching the film from objects behind it is often called backscatter. Industry codes and standards often require that a lead letter "B" be placed on the back of the cassette to verify the control of backscatter. If the letter "B" shows as a "ghost" image on the film, a significant amount of backscatter radiation is reaching the film. The image of the "B" is often very nondistinct, as shown in the image to the right. The arrow points to the area of backscatter radiation from the lead "B" located on the back side of the film. The control of backscatter radiation is achieved by backing the film in the cassette with a sheet of lead that is at least 0.010 inch thick. It is a common practice in industry to place a 0.005" lead screen in front of and a 0.010" screen behind the film.
Undercut
Another condition that must often be controlled when producing a radiograph is called undercut. Parts with holes, hollow areas, or abrupt thickness changes are likely to suffer from undercut if controls are not put in place. Undercut appears as a darkening of the radiograph in the area of the thickness transition. This results in a loss of resolution or blurring at the transition area. Undercut occurs due to scattering within the film. At the edges of a part or areas where the part transitions from thick to thin, the intensity of the radiation reaching the film is much greater than in the thicker areas of the part. The high level of radiation intensity reaching the film results in a high level of scattering within the film. It should also be noted that the faster the film speed, the more undercut is likely to occur. Scattering from within the walls of the part also contributes to undercut, but research has shown that scattering within the film is the primary cause. Masks are used to control undercut. Sheets of lead cut to fill holes or surround the part, as well as metallic shot and liquid absorbers, are often used as masks.
Radiation Safety
Ionizing radiation is an extremely important NDT tool, but it can pose a hazard to human health. For this reason, special precautions must be observed when using and working around ionizing radiation. The possession of radioactive materials and the use of radiation producing devices in the United States are governed by strict regulatory controls. The primary regulatory authority for most types and uses of radioactive materials is the federal Nuclear Regulatory Commission (NRC). However, more than half of the states in the US have entered into "agreement" with the NRC to assume regulatory control of radioactive material use within their borders. As part of the agreement process, the states must adopt and enforce regulations comparable to those found in Title 10 of the Code of Federal Regulations. Regulations for control of radioactive material used in Iowa are found in Chapter 136C of the Iowa Code.
For most situations, the types and maximum quantities of radioactive materials possessed, the manner in which they may be used, and the individuals authorized to use radioactive materials are stipulated in the form of a "specific" license from the appropriate regulatory authority. In Iowa, this authority is the Iowa Department of Public Health. However, for certain institutions which routinely use large quantities of numerous types of radioactive materials, the exact quantities of materials and details of use may not be specified in the license. Instead, the license grants the institution the authority and responsibility for setting the specific requirements for radioactive material use within its facilities. These licensees are termed "broadscope" and require a Radiation Safety Committee and usually a full-time Radiation Safety Officer.
More information on Radiation Safety

X-ray Generators
The major components of an X-ray generator are the tube, the high voltage generator, the control console, and the cooling system. As discussed earlier in this material, X-rays are generated by directing a stream of high speed electrons at a target material such as tungsten, which has a high atomic number. When the electrons are slowed or stopped by the interaction with the atomic particles of the target, X-radiation is produced. This is accomplished in an X-ray tube such as the one shown here. The X-ray tube is one of the components of an X-ray generator.
The tube cathode (filament) is heated with a low-voltage current of a few amps. The filament heats up and the electrons in the wire become loosely held. A large electrical potential is created between the cathode and the anode by the high-voltage generator. Electrons that break free of the cathode are strongly attracted to the anode target. The stream of electrons between the cathode and the anode is the tube current. The tube current is measured in milliamps and is controlled by regulating the low-voltage, heating current applied to the cathode. The higher the temperature of the filament, the larger the number of electrons that leave the cathode and travel to the anode. The milliamp or current setting on the control console regulates the filament temperature, which relates to the intensity of the X-ray output.
The high-voltage between the cathode and the anode affects the speed at which the electrons travel and strike the anode. The higher the kilovoltage, the more speed and, therefore, energy the electrons have when they strike the anode. Electrons striking with more energy results in X-rays with more penetrating power. The high-voltage potential is measured in kilovolts, and this is controlled with the voltage or kilovoltage control on the control console. An increase in the kilovoltage will also result in an increase in the intensity of the radiation.
A focusing cup is used to concentrate the stream of electrons to a small area of the target called the focal spot. The focal spot size is an important factor in the system's ability to produce a sharp image. See the information on image resolution and geometric unsharpness for more information on the effect of the focal spot size. Much of the energy applied to the tube is transformed into heat at the focal spot of the anode. As mentioned above, the anode target is commonly made from tungsten, which has a high melting point in addition to a high atomic number. However, cooling of the anode by active or passive means is necessary. Water or oil recirculating systems are often used to cool tubes. Some low power tubes are cooled simply with the use of thermally conductive materials and heat radiating fins.
It should also be noted that in order to prevent the cathode from burning up and to prevent arcing between the anode and the cathode, all of the oxygen is removed from the tube by pulling a vacuum. Some systems have external vacuum pumps to remove any oxygen that may have leaked into the tube. However, most industrial X-ray tubes simply require a warm-up procedure to be followed. This warm-up procedure carefully raises the tube current and voltage to slowly burn off any available oxygen before the tube is operated at high power.
The other important component of an X-ray generating system is the control console. Consoles typically have a keyed lock to prevent unauthorized use of the system. They will have a button to start the generation of X-rays and a button to manually stop the generation of X-rays. The three main adjustable controls regulate the tube voltage in kilovolts, the tube amperage in milliamps, and the exposure time in minutes and seconds. Some systems also have a switch to change the focal spot size of the tube.
X-ray Generator Options
Kilovoltage - X-ray generators come in a large variety of sizes and configurations. There are stationary units intended for use in lab or production environments and portable systems that can be easily moved to the job site. Systems are available in a wide range of energy levels. When inspecting large steel or heavy metal components, systems capable of producing millions of electron volts may be necessary to penetrate the full thickness of the material. Alternatively, small, lightweight components may require a system capable of producing only a few tens of kilovolts.
Focal Spot Size - Another important consideration is the focal spot size of the tube, since this factors into the geometric unsharpness of the image produced. Generally, the smaller the spot size the better. But as the electron stream is focused to a smaller area, the power of the tube must be reduced to prevent overheating at the tube anode. Therefore, the focal spot size becomes a tradeoff between resolving capability and power. Generators can be classified as conventional, minifocus, or microfocus systems. Conventional units have focal spots larger than about 0.5 mm, minifocus units have focal spots ranging from 50 microns to 500 microns (0.050 mm to 0.5 mm), and microfocus systems have focal spots smaller than 50 microns. Smaller spot sizes are especially advantageous in instances where magnification of an object or a region of an object is necessary. The cost of a system typically increases as the spot size decreases, and some microfocus tubes exceed $100,000. Some manufacturers combine two filaments of different sizes to make a dual-focus tube. This usually involves a conventional and a minifocus spot size and adds flexibility to the system.
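As a concrete illustration of these size classes, here is a minimal Python sketch (the function name is illustrative, with thresholds taken from the figures above):

def classify_generator(focal_spot_mm):
    # Thresholds from the text: microfocus below 50 microns (0.050 mm),
    # minifocus from 0.050 mm to 0.5 mm, conventional above about 0.5 mm.
    if focal_spot_mm < 0.050:
        return "microfocus"
    if focal_spot_mm <= 0.5:
        return "minifocus"
    return "conventional"

print(classify_generator(0.030))  # microfocus
print(classify_generator(0.300))  # minifocus
print(classify_generator(1.0))    # conventional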
AC and Constant Potential Systems - AC X-ray systems supply the tube with sinusoidally varying alternating current. They produce X-rays only during one half of the 1/60th second cycle, which results in bursts of radiation rather than a constant stream. Additionally, the voltage changes over the cycle, so the X-ray energy varies as the voltage ramps up and then back down. Only a portion of the radiation is usable, and low energy radiation must usually be filtered out. Constant potential generators rectify the AC wall current and supply the tube with DC current, which results in a constant stream of relatively consistent radiation. Most newer systems now use constant potential generators.
Flash X-Ray Generators
Flash X-ray generators produce short, intense bursts of radiation. These systems are useful when examining objects in rapid motion or when studying transient events such as the tripping of an electrical breaker. In these types of situations, high-speed video is used to rapidly capture images from an image intensifier or other real-time detector. Since the exposure time for each image is very short, a high level of radiation intensity is needed in order to get a usable output from the detector. To prevent the imaging system from becoming saturated by continuous exposure to high intensity radiation, the generator supplies microsecond bursts of radiation. The tubes of these X-ray generators do not have a heated filament; instead, electrons are pulled from the cathode by the strong electrical potential between the cathode and the anode. This process is known as field emission or cold emission, and it is capable of producing electron currents in the thousands of amperes.
Radio Isotope (Gamma) Sources
Manmade radioactive sources are produced by introducing an extra neutron into atoms of the source material. As the unstable atoms return to a stable state, energy is released in the form of gamma rays. Two of the more common gamma-ray sources for industrial radiography are iridium-192 and cobalt-60. These isotopes emit radiation at a few discrete energies. Cobalt-60 emits 1.33 and 1.17 MeV gamma rays, and iridium-192 emits 0.31, 0.47, and 0.60 MeV gamma rays. In comparison to an X-ray generator, cobalt-60 produces energies comparable to a 1.25 MeV X-ray system and iridium-192 to a 460 keV X-ray system. These high energies make it possible to penetrate thick materials with a relatively short exposure time. This, and the fact that the sources are very portable, are the main reasons that gamma sources are widely used for field radiography. Of course, the disadvantage of a radioactive source is that it can never be turned off, and safely managing the source is a constant responsibility.
The physical size of isotope materials varies between manufacturers, but generally an isotope material is a pellet that measures 1.5 mm x 1.5 mm. Depending on the level of activity desired, a pellet or pellets are loaded into a stainless steel capsule and sealed by welding. The capsule is attached to a short flexible cable called a pigtail.



The source capsule and the pigtail are housed in a shielding device referred to as an exposure device or camera. Depleted uranium is often used as a shielding material for sources. The exposure devices for iridium-192 and cobalt-60 sources contain about 45 pounds and 500 pounds of shielding material, respectively. Cobalt cameras are often fixed to a trailer and transported to and from inspection sites. When the source is not being used to make an exposure, it is locked inside the exposure device.

To make a radiographic exposure, a crank-out mechanism and a guide tube are attached to opposite ends of the exposure device. The guide tube often has a collimator at the end to shield the radiation except in the direction necessary to make the exposure. The end of the guide tube is secured in the location where the radiation source needs to be to produce the radiograph. The crank-out cable is stretched as far as possible to put as much distance as possible between the exposure device and the radiographer. To make the exposure, the radiographer quickly cranks the source out of the exposure device and into position in the collimator at the end of the guide tube. At the end of the exposure time, the source is cranked back into the exposure device. There is a series of safety procedures, which include several radiation surveys, that must be accomplished when making an exposure with a gamma source. See the radiation safety material for more information.



Radiographic Film
X-ray films for general radiography consist of an emulsion-gelatin containing radiation sensitive silver halide crystals, such as silver bromide or silver chloride, and a flexible, transparent, blue-tinted base. The emulsion is different from those used in other types of photography films to account for the distinct characteristics of gamma rays and x-rays, but x-ray films remain sensitive to visible light. Usually, the emulsion is coated on both sides of the base in layers about 0.0005 inch thick. Putting emulsion on both sides of the base doubles the amount of radiation-sensitive silver halide, and thus increases the film speed. The emulsion layers are thin enough so developing, fixing, and drying can be accomplished in a reasonable time. A few of the films used for radiography have emulsion on only one side, which produces the greatest detail in the image.
When x-rays, gamma rays, or light strike the grains of the sensitive silver halide in the emulsion, some of the Br- ions are liberated and captured by the Ag+ ions. This change is of such a small nature that it cannot be detected by ordinary physical methods and is called a "latent (hidden) image." However, the exposed grains are now more sensitive to the reduction process when exposed to a chemical solution (developer), and the reaction results in the formation of black, metallic silver. It is this silver, suspended in the gelatin on both sides of the base, that creates an image. See the page on film processing for additional information.
Film Selection
The selection of a film when radiographing any particular component depends on a number of different factors. Listed below are some of the factors that must be considered when selecting a film and developing a radiographic technique.
1. Composition, shape, and size of the part being examined and, in some cases, its weight and location.
2. Type of radiation used, whether x-rays from an x-ray generator or gamma rays from a radioactive source.
3. Kilovoltages available with the x-ray equipment or the intensity of the gamma radiation.
4. Relative importance of high radiographic detail or quick and economical results.
Selecting the proper film and developing the optimal radiographic technique usually involves striking a balance between a number of opposing factors. For example, if high resolution and contrast sensitivity are of primary importance, a slower, finer grained film should be used in place of a faster film.
Film Packaging
Radiographic film can be purchased in a number of different packaging options. The most basic form is as individual sheets in a box. In preparation for use, each sheet must be loaded into a cassette or film holder in the darkroom to protect it from exposure to light. The sheets are available in a variety of sizes and can be purchased with or without interleaving paper. Interleaved packages have a layer of paper that separates each piece of film. The interleaving paper is removed before the film is loaded into the film holder. Many users find the interleaving paper useful for separating the sheets of film, and it offers some protection against scratches and dirt during handling.
Industrial x-ray films are also available in a form in which each sheet is enclosed in a light-tight envelope. The film can be exposed from either side without removing it from the protective packaging. A rip strip makes it easy to remove the film in the darkroom for processing. This form of packaging has the advantage of eliminating the process of loading the film holders in the darkroom. The film is completely protected from finger marks and dirt until the time the film is removed from the envelope for processing.
Packaged film is also available in rolls, which allows the radiographer to cut the film to any length. The ends of the packaging are sealed with electrical tape in the darkroom. In applications such as the radiography of circumferential welds and the examination of long joints on an aircraft fuselage, long lengths of film offer great economic advantage. The film is wrapped around the outside of a structure and the radiation source is positioned on axis inside, allowing for examination of a large area with a single exposure.
Envelope packaged film can be purchased with the film sandwiched between two lead oxide screens. The screens function to reduce scatter radiation at energy levels below 150 keV and act as intensification screens above 150 keV.
Film Handling
X-ray film should always be handled carefully to avoid physical strains, such as pressure, creasing, buckling, friction, etc. Whenever films are loaded in semi-flexible holders and external clamping devices are used, care should be taken to be sure pressure is uniform. If a film holder bears against a few high spots, such as on an un-ground weld, the pressure may be great enough to produce desensitized areas in the radiograph. This precaution is particularly important when using envelope-packed films.
Marks resulting from contact with fingers that are moist or contaminated with processing chemicals, as well as crimp marks, are avoided if large films are always grasped by the edges and allowed to hang free. A supply of clean towels should be kept close at hand as an incentive to dry the hands often and well. Use of envelope-packed films avoids many of these problems until the envelope is opened for processing.
Another important precaution is to avoid drawing film rapidly from cartons, exposure holders, or cassettes. Such care will help to eliminate the circular or treelike black markings in the radiograph that sometimes result from static electric discharges.
Exposure Vaults & Cabinets
Exposure vaults and cabinets allow personnel to work safely in the area while exposures are taking place. Exposure vaults tend to be larger, walk-in rooms with shielding provided by high-density concrete block and lead.
Exposure cabinets are often self-contained units with integrated x-ray equipment and are typically shielded with steel and lead to absorb x-ray radiation.
Exposure vaults and cabinets are equipped with protective interlocks that disable the system if anything interrupts the integrity of the enclosure. Additionally, walk-in vaults are equipped with emergency "kill buttons" that allow radiographers to shut down the system if it should accidentally be started while they are in the vault.
Image Considerations
The usual objective in radiography is to produce an image showing the highest amount of detail possible. This requires careful control of a number of different variables that can affect image quality. Radiographic sensitivity is a measure of the quality of an image in terms of the smallest detail or discontinuity that may be detected. Radiographic sensitivity is dependent on the combined effects of two independent sets of variables. One set of variables affects the contrast and the other set affects the definition of the image.


Radiographic contrast is the degree of density difference between two areas on a radiograph. Contrast makes it easier to distinguish features of interest, such as defects, from the surrounding area. The image to the right shows two radiographs of the same stepwedge. The upper radiograph has a high level of contrast and the lower radiograph has a lower level of contrast. While they are both imaging the same change in thickness, the high contrast image uses a larger change in radiographic density to show this change. In each of the two radiographs, there is a small circle, which is of equal density in both radiographs. It is much easier to see in the high contrast radiograph. The factors affecting contrast will be discussed in more detail on the following page.
Radiographic definition is the abruptness of change in going from one area of a given radiographic density to another. Like contrast, definition also makes it easier to see features of interest, such as defects, but in a totally different way. In the image to the right, the upper radiograph has a high level of definition and the lower radiograph has a lower level of definition. In the high definition radiograph it can be seen that a change in the thickness of the stepwedge translates to an abrupt change in radiographic density. It can be seen that the details, particularly the small circle, are much easier to see in the high definition radiograph. It can be said that the detail portrayed in the radiograph is equivalent to the physical change present in the stepwedge. In other words, a faithful visual reproduction of the stepwedge was produced. In the lower image, the radiographic setup did not produce a faithful visual reproduction. The edge line between the steps is blurred. This is evidenced by the gradual transition between the high and low density areas on the radiograph. The factors affecting definition will be discussed in more detail on a following page.
Since radiographic contrast and definition are not dependent upon the same set of factors, it is possible to produce radiographs with the following qualities:
• Low contrast and poor definition
• High contrast and poor definition
• Low contrast and good definition
• High contrast and good definition
Radiographic Contrast
As mentioned on the previous page, radiographic contrast describes the differences in photographic density in a radiograph. The contrast between different parts of the image is what forms the image, and the greater the contrast, the more visible features become. Radiographic contrast has two main contributors: subject contrast and detector (film) contrast.
Subject Contrast
Subject contrast is the ratio of radiation intensities transmitted through different areas of the component being evaluated. It is dependent on the absorption differences in the component, the wavelength of the primary radiation, and the intensity and distribution of secondary radiation due to scattering.
It should be no surprise that absorption differences within the subject will affect the level of contrast in a radiograph. The larger the difference in thickness or density between two areas of the subject, the larger the difference in radiographic density or contrast. However, it is also possible to radiograph a particular subject and produce two radiographs having entirely different contrast levels. Generating x-rays using a low kilovoltage will generally result in a radiograph with high contrast. This occurs because low energy radiation is more easily attenuated. Therefore, the ratio of photons that are transmitted through a thick and thin area will be greater with low energy radiation. This in turn will result in the film being exposed to a greater and lesser degree in the two areas.
There is a tradeoff, however. Generally, as contrast sensitivity increases, the latitude of the radiograph decreases. Radiographic latitude refers to the range of material thickness that can be imaged; the greater the latitude, the more areas of different thickness will be visible in the image. Therefore, the goal is to balance radiographic contrast and latitude so that there is enough contrast to identify the features of interest, while keeping the latitude great enough that all areas of interest can be inspected with one radiograph. In thick parts with a large range of thicknesses, multiple radiographs will likely be necessary to get the necessary density levels in all areas.
Film Contrast
Film contrast refers to density differences that result due to the type of film used, how it was exposed, and how it was processed. Since there are other detectors besides film, this could be called detector contrast, but the focus here will be on film. Exposing a film to produce higher film densities will generally increase the contrast in the radiograph.
A typical film characteristic curve, which shows how a film responds to different amounts of radiation exposure, is shown to the right. (More information on film characteristic curves is presented later in this section.) From the shape of the curves, it can be seen that when the film has not seen many photon interactions (which will result in a low film density) the slope of the curve is low. In this region of the curve, it takes a large change in exposure to produce a small change in film density. Therefore, the sensitivity of the film is relatively low. It can be seen that changing the log of the relative exposure from 0.75 to 1.4 only changes the film density from 0.20 to about 0.30. However, at film densities above 2.0, the slope of the characteristic curve for most films is at its maximum. In this region of the curve, a relatively small change in exposure will result in a relatively large change in film density. For example, changing the log of relative exposure from 2.4 to 2.6 would change the film density from 1.75 to 2.75. Therefore, the sensitivity of the film is high in this region of the curve. In general, the highest overall film density that can be conveniently viewed or digitized will have the highest level of contrast and contain the most useful information.
Lead screens in the thickness range of 0.004 to 0.015 inch typically reduce scatter radiation at energy levels below 150 kV. Above this point they emit electrons that provide additional exposure of the film to ionizing radiation, thus increasing the density and contrast of the radiograph. Fluorescent screens produce visible light when exposed to radiation, and this light further exposes the film and increases contrast.
Definition
As mentioned previously, radiographic definition is the abruptness of change from one density to another. Both geometric factors of the equipment and the radiographic setup and film and screen factors have an effect on definition. Geometric factors include the size of the area of origin of the radiation, the source-to-detector (film) distance, the specimen-to-detector (film) distance, movement of the source, specimen, or detector during exposure, the angle between the source and some feature, and the abruptness of change in specimen thickness or density.
Geometric Factors
The effects of source size, source-to-film distance, and specimen-to-detector distance were covered in detail on the geometric unsharpness page. But briefly, to produce the highest level of definition, the focal spot or source size should be as close to a point source as possible, the source-to-detector distance should be as great as practical, and the specimen-to-detector distance should be as small as practical. This is shown graphically in the images below.
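These relationships are commonly summarized by the geometric unsharpness formula Ug = f x b / a, where f is the focal spot size, a is the source-to-object distance, and b is the object-to-detector distance. A minimal Python sketch with hypothetical dimensions:

def geometric_unsharpness(f_mm, a_mm, b_mm):
    # Ug = f * b / a: a smaller spot, a longer source-to-object distance,
    # and a shorter object-to-detector distance all improve definition.
    return f_mm * b_mm / a_mm

print(geometric_unsharpness(0.5, 600.0, 20.0))   # about 0.017 mm
print(geometric_unsharpness(0.5, 1200.0, 20.0))  # about 0.008 mm -- doubling a halves Ug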

The angle between the radiation and some features will also have an effect on definition. If the radiation is parallel to an edge or linear discontinuity, a sharp distinct boundary will be seen in the image. However, if the radiation is not parallel with the discontinuity, the feature will appear distorted, out of position and less defined in the image.
Abrupt changes in thickness and/or density will appear more defined in a radiograph than will areas of gradual change. For example, consider a circle: its largest dimension is a chord that passes through its center. As the chord is moved away from the center, the thickness gradually decreases. It is sometimes difficult to locate the edge of a void due to this gradual change in thickness.
Lastly, any movement of the specimen, source or detector during the exposure will reduce definition. Similar to photography, any movement will result in blurring of the image. Vibration from nearby equipment may be an issue in some inspection situations.
Film and Screen Factors
The last set of factors concern the film and the use of fluorescent screens. A fine grain film is capable of producing an image with a higher level of definition than is a coarse grain film. Wavelength of the radiation will influence apparent graininess. As the wavelength shortens and penetration increases, the apparent graininess of the film will increase. Also, increased development of the film will increase the apparent graininess of the radiograph.
The use of fluorescent screens also results in lower definition. This occurs for a couple of different reasons. The reason that fluorescent screens are sometimes used is because incident radiation causes them to give off light that helps to expose the film. However, the light they produce spreads in all directions, exposing the film in adjacent areas, as well as in the areas which are in direct contact with the incident radiation. Fluorescent screens also produce screen mottle on radiographs. Screen mottle is associated with the statistical variation in the numbers of photons that interact with the screen from one area to the next.
Radiographic Density
Photographic, radiographic, or film density is a measure of the degree of film darkening. Technically it should be called "transmitted density" when associated with transparent-base film, since it is a measure of the light transmitted through the film. Density is a logarithmic unit that describes a ratio of two measurements. Specifically, it is the log of the ratio of the intensity of light incident on the film (I0) to the intensity of light transmitted through the film (It).
D = log10(I0/It)
Similar to the decibel, using the log of the ratio allows ratios of various sizes to be described using easy to work with numbers. The following table shows the relationship between the amount of transmitted light and the calculated film density.
Transmittance (It/I0)    Percent Transmittance    Film Density, log10(I0/It)
1.0                      100%                     0
0.1                      10%                      1
0.01                     1%                       2
0.001                    0.1%                     3
0.0001                   0.01%                    4
0.00001                  0.001%                   5
0.000001                 0.0001%                  6
0.0000001                0.00001%                 7
From this table, it can be seen that a density reading of 2.0 is the result of only one percent of the incident light making it through the film. At a density of 4.0, only 0.01% of the incident light reaches the far side of the film. Industrial codes and standards typically require a radiograph to have a density between 2.0 and 4.0 for acceptable viewing with common film viewers. Above 4.0, an extremely bright viewing light is necessary for evaluation. Contrast within a film increases with increasing density, so in general the higher the density the better. When radiographs will be digitized, densities above 4.0 are often used, since digitization systems can capture and redisplay for easy viewing information from densities up to 6.0.
Film density is measured with a densitometer. A densitometer has a photoelectric sensor that measures the amount of light transmitted through a piece of film. The film is placed between the light source and the sensor, and a density reading is produced by the instrument.
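Because density is simply a base-10 logarithm of the intensity ratio, the conversion between transmitted light and film density is a one-line calculation. A small Python sketch of both directions:

import math

def film_density(incident, transmitted):
    # D = log10(I0 / It)
    return math.log10(incident / transmitted)

def transmittance(density):
    # Fraction of the incident light that makes it through the film.
    return 10.0 ** (-density)

print(film_density(100.0, 1.0))  # 2.0 -- one percent transmission
print(transmittance(4.0))        # 0.0001, i.e. 0.01%, as in the table above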
Film Characteristic Curves
In film radiography, the number of photons reaching the film determines how dense the film will become when other factors such as the developing time are held constant. The number of photons reaching the film is a function of the intensity of the radiation and the time that the film is exposed to the radiation. The term used to describe the control of the number of photons reaching the film is “exposure.”
Different types of radiographic film respond differently to a given amount of exposure. Film manufacturers commonly characterize their film to determine the relationship between the applied exposure and the resulting film density. This relationship commonly varies over a range of film densities, so the data is presented in the form of a curve such as the one for Kodak AA400 shown to the right. The plot is called a film characteristic curve, sensitometric curve, density curve, or H and D curve (named for developers Hurter and Driffield). "Sensitometry" is the science of measuring the response of photographic emulsions to light or radiation.
A log scale is used, or the values are reported in log units on a linear scale, to compress the x-axis. Also, relative exposure values (unitless) are often used. Relative exposure is the ratio of two exposures. For example, if one film is exposed at 100 keV for 6 mA·min and a second film is exposed at the same energy for 3 mA·min, then the relative exposure would be 2. The image directly to the right shows three film characteristic curves with the relative exposure plotted on a log scale, while the image below and to the right shows the log relative exposure plotted on a linear scale.
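For instance, the relative exposure from the example above works out as follows (a trivial Python sketch, but it shows how the log scale compresses the axis):

import math

exposure_film_1 = 6.0  # mA-min at 100 keV
exposure_film_2 = 3.0  # mA-min at the same energy
relative_exposure = exposure_film_1 / exposure_film_2   # 2.0
log_relative_exposure = math.log10(relative_exposure)   # about 0.30 on the log axis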
Use of the logarithm of the relative exposure scale makes it easy to compare two sets of values, which is the primary use of the curves. Film characteristic curves can be used to adjust an exposure that produces a radiograph of a certain density into an exposure that will produce a second radiograph of higher or lower film density. The curves can also be used to relate the exposure used with one type of film to the exposure needed to produce a radiograph of the same density with a second type of film.
Adjusting the Exposure to Produce a Different Film Density
Suppose Film B was exposed with 140 keV at 1 mA for 10 seconds and the resulting radiograph had a density in the region of interest of 1.0. Specifications typically require the density to be above 2.0 for the reasons discussed on the film density page. From the film characteristic curve, the relative exposures for the actual density and the desired density are determined, and the ratio of these two values is used to adjust the actual exposure. In this first example, a plot with log relative exposure and a linear x-axis will be used.
From the graph, first determine the difference between the relative exposures of the actual and the desired densities. A target density of 2.5 is used to ensure that the exposure produces a density above the 2.0 minimum requirement. The log relative exposure of a density of 1.0 is 1.62 and the log of the relative exposure when the density of the film is 2.5 is 2.12. The difference between the two values is 0.5. Take the anti-log of this value to change it from log relative exposure to simply the relative exposure and this value is 3.16. Therefore, the exposure used to produce the initial radiograph with a 1.0 density needs to be multiplied by 3.16 to produce a radiograph with the desired density of 2.5. The exposure of the original x-ray was 10 mAs, so the new exposure must be 10 mAs x 3.16 or 31.6 mAs at 140 keV.
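The arithmetic in this worked example is easy to wrap in a small helper. A Python sketch (the function name is illustrative) that takes the two log relative exposures read from the characteristic curve:

def adjust_exposure(exposure_mAs, log_e_actual, log_e_desired):
    # The anti-log of the difference in log relative exposure is the multiplier.
    return exposure_mAs * 10.0 ** (log_e_desired - log_e_actual)

# Values from the example above: 10 mAs at a density of 1.0, target density 2.5.
print(adjust_exposure(10.0, 1.62, 2.12))  # about 31.6 mAs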
Adjusting the Exposure to Allow Use of a Different Film Type
Another use of film characteristic curves is to adjust the exposure when switching film types. The location of a film's characteristic curve along the x-axis relates to its speed: the farther to the right a curve sits on the chart, the slower the film. It must be noted that the two curves being used must have been produced with the same radiation energy. The shape of the characteristic curve is largely independent of the wavelength of the x-ray or gamma radiation, but the location of the curve along the x-axis, with respect to the curve of another film, does depend on radiation quality.
Suppose an acceptable radiograph with a density of 2.5 was produced by exposing Film A for 30 seconds at 1 mA and 130 keV. Now, it is necessary to inspect the part using Film B. The exposure can be adjusted by following the above method, as long as the two film characteristic curves were produced with roughly the same radiation quality. For this example, the characteristic curves for Film A and Film B are shown on a chart presenting relative exposure on a log scale. The relative exposure that produced a density of 2.5 on Film A is found to be 68. The relative exposure that should produce a density of 2.5 on Film B is found to be 140. The relative exposure for Film B is about twice that of Film A (140/68 ≈ 2.06). Therefore, to produce a 2.5 density radiograph with Film B, the exposure should be 30 mAs times 2.06, or about 62 mAs.
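When the curves are plotted with relative exposure on a log scale, as in this example, the multiplier is simply the ratio of the two relative exposures read from the curves:

# Relative exposures read from the curves for a density of 2.5:
film_a_exposure = 68.0
film_b_exposure = 140.0
new_exposure_mAs = 30.0 * (film_b_exposure / film_a_exposure)  # about 62 mAs, as above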
Exposure Calculations
Properly exposing a radiograph is often a trial and error process, as there are many variables that affect the final radiograph. Some of the variables that affect the density of the radiograph include:
• The spectrum of radiation produced by the x-ray generator.
• The voltage potential used to generate the x-rays (kV).
• The amperage used to generate the x-rays (mA).
• The exposure time.
• The distance between the radiation source and the film.
• The material of the component being radiographed.
• The thickness of the material that the radiation must travel through.
• The amount of scattered radiation reaching the film.
• The film being used.
• The concentration of the film processing chemicals and the contact time.
The current industrial practice is to develop, by trial, a procedure that produces an acceptable density for each specific x-ray generator. This process may begin with published exposure charts used to determine a starting exposure, which usually requires some refinement.
However, it is possible to calculate the density of a radiograph to a fair degree of accuracy when the spectrum of an x-ray generator has been characterized. The calculation cannot completely account for scattering but, otherwise, the relationship between many of the variables and their effect on film density is known. Therefore, the change in film density can be estimated for any given variable change. For example, from the inverse square law, it is known that the intensity of the radiation varies inversely with the square of the distance from the source. It is also known that the intensity of the radiation transmitted through a material varies exponentially with the linear attenuation coefficient (μ) and the thickness of the material.
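A minimal Python sketch of that estimate for a monoenergetic beam follows. It combines the inverse square falloff with exponential attenuation; the source strength, distances, and attenuation coefficient used here are hypothetical, and a real generator's spectrum and scatter are ignored:

import math

def intensity_at_film(i0, distance_m, mu_per_cm, thickness_cm):
    # Inverse square law: intensity falls off with the square of the distance
    # (i0 is taken as the intensity at 1 meter from the source).
    geometric = i0 / (distance_m ** 2)
    # Exponential attenuation through the part: exp(-mu * x).
    return geometric * math.exp(-mu_per_cm * thickness_cm)

# Hypothetical values: mu = 0.5 per cm, 6.5 cm of material, film at 0.5 m.
print(intensity_at_film(1000.0, 0.5, 0.5, 6.5))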
A number of radiographic modeling programs are available that make this calculation. These programs can provide a fair representation of the radiograph that will be produced with a specific setup and parameters. The applet below is a very simple radiographic density calculator. The applet allows the density of a radiograph to be estimated based on material, thickness, geometry, energy (voltage), current, and time. The effects of the energy and the physical setup are shown by looking at the film density after exposure. Since the calculation uses a generic (and fixed characteristic) x-ray source, fixed film type, and fixed development, the applet results will differ considerably from industrial x-ray configurations. The applet is designed simply to demonstrate the effects of the variables on the resulting film density.
How To Use This Applet
First, choose a material. Each material has an attenuation coefficient, mu. Next, the voltage of the x-ray source needs to be set. Continue to fill in numbers for the rest of the variables. The current is the number of milliamps supplied to the source. After the distance, time, and thickness have been set, press the "Calculate" button.
Note that the I0 field has a number in it. This is the initial intensity of the x-ray beam. For large numbers, it may be necessary to use the mouse to see the entire number: click on the number and move the mouse as if selecting it. The cyan pointer indicates the density on the resultant radiograph. The two other pointers represent under- and over-exposure by a factor of four. These may be used to judge the degree of contrast in the resultant radiograph.
Try the following example: material: aluminum, kV: 120, mA: 5, distance: 0.5 meter, time: 90 seconds, thickness: 6.5 cm. The resultant density will be 2.959. As can be noted on the stepwedge, reducing the exposure by a factor of four will change the density to a value of 1.0, and increasing the exposure by a factor of four will result in a density of 5.0. Reduce the time from 90 seconds to 22.5 seconds (a factor of four) and note the results.
Change the material to iron and press "Calculate". Note that not enough radiation is received to generate an image. Change the following: kV: 320, mA: 10, time: 900 seconds, thickness: 1.25 cm, and then click "Calculate". Note the resulting center density of 0.561. With aluminum, the time was altered by a factor of four to change the density. With the iron, current (mA) must be increased by a factor of four to produce an increase in density. Change the current from 10 to 40 and calculate the results.

Controlling Radiographic Quality
One of the methods of controlling the quality of a radiograph is through the use of image quality indicators (IQIs). IQIs, which are also referred to as penetrameters, provide a means of visually informing the film interpreter of the contrast sensitivity and definition of the radiograph. The IQI indicates that a specified amount of change in material thickness will be detectable in the radiograph, and that the radiograph has a certain level of definition so that the density changes are not lost due to unsharpness. Without such a reference point, consistency and quality could not be maintained and defects could go undetected.
Image quality indicators take many shapes and forms due to the various codes or standards that invoke their use. In the United States, two IQI styles are prevalent: the placard or hole-type IQI, and the wire IQI. IQIs come in a variety of material types so that one with radiation absorption characteristics similar to the material being radiographed can be used.

Hole-Type IQIs
ASTM Standard E1025 gives detailed requirements for the design and material group classification of hole-type image quality indicators. E1025 designates eight groups of shims based on their radiation absorption characteristics. A notching system is incorporated into the requirements, which allows the radiographer to easily determine whether the IQI is the correct material type for the product. The notches in the IQI to the right indicate that it is made of aluminum. The thickness in thousandths of an inch is noted on each penetrameter by one or more lead numbers. The IQI to the right is 0.005 inch thick. IQIs may also be manufactured to a military or other industry specification, and the material type and thickness may be indicated differently. For example, the IQI on the left in the image above uses lead letters to indicate the material. The numbers on this same IQI indicate the sample thickness that the IQI would typically be placed on when attempting to achieve two percent contrast sensitivity.
Image quality levels are typically designated using a two-part expression such as 2-2T. The first term refers to the IQI thickness expressed as a percentage of the thickness of the region of interest of the part being inspected. The second term refers to the diameter of the hole that must be revealed, expressed as a multiple of the IQI thickness. Therefore, a 2-2T call-out means that the shim thickness should be two percent of the material thickness and that a hole twice the IQI thickness must be detectable on the radiograph. The appearance of the 2T hole of a 2-2T IQI in the radiograph verifies that the radiographic technique is capable of showing a material loss of 2% in the area of interest.
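Since the call-out is defined entirely in terms of the part thickness, the required shim thickness and hole size follow directly. A small Python sketch (names are illustrative):

def iqi_2_2t(part_thickness_in):
    # First term: shim thickness is 2 percent of the part thickness.
    shim = 0.02 * part_thickness_in
    # Second term: hole diameter is 2 times the IQI thickness.
    hole = 2.0 * shim
    return shim, hole

# A 0.250 inch part calls for a 0.005 inch shim with a 0.010 inch hole,
# which matches the 0.005 inch IQI described above.
print(iqi_2_2t(0.250))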
It should be noted that even if 2-2T sensitivity is indicated on a radiograph, a defect of the same diameter and material loss may not be visible. The holes in the IQI represent sharp boundaries and a small thickness change, while discontinuities within the part may involve gradual changes and are often less visible. The IQI is used to indicate the quality of the radiographic technique; it is not intended to be used as a measure of the size of a cavity that can be located on the radiograph.
Wire IQIs
ASTM Standard E747 covers the radiographic examination of materials using wire IQIs to control image quality. Wire IQIs consist of a set of six wires arranged in order of increasing diameter and encapsulated between two sheets of clear plastic. E747 specifies four wire IQI sets, which control the wire diameters. The set letter (A, B, C or D) is shown in the lower right corner of the IQI. The number in the lower left corner indicates the material group. The same image quality levels and expressions (i.e. 2-2T) used for hole-type IQIs are typically also used for wire IQIs. The wire sizes that correspond to various hole-type quality levels can be found in a table in E747 or can be calculated using the following formula.
Where:
F = 0.79 (constant form factor for wire)
d = wire diameter (mm or inch)
l = 7.6 mm or 0.3 inch (effective length of wire)
T = Hole-type IQI thickness (mm or inch)
H = Hole-type IQI hole diameter (mm or inch)
Placement of IQIs
IQIs should be placed on the source side of the part over a section with a material thickness equivalent to the region of interest. If this is not possible, the IQI may be placed on a block of similar material and thickness to the region of interest. When a block is used, the IQI should be the same distance from the film as it would be if placed directly on the part in the region of interest. The IQI should also be placed slightly away from the edge of the part so that at least three of its edges are visible in the radiograph.

Film Processing
As mentioned previously, radiographic film consists of a transparent, blue-tinted base coated on both sides with an emulsion. The emulsion consists of gelatin containing microscopic, radiation sensitive silver halide crystals, such as silver bromide and silver chloride. When x-rays, gamma rays, or light rays strike the crystals or grains, some of the Br- ions are liberated and captured by the Ag+ ions. In this condition, the radiograph is said to contain a latent (hidden) image because the change in the grains is virtually undetectable, but the exposed grains are now more sensitive to reaction with the developer.
When the film is processed, it is exposed to several different chemical solutions for controlled periods of time. Processing film basically involves the following five steps.
• Development - The developing agent gives up electrons to convert the silver halide grains to metallic silver. Grains that have been exposed to the radiation develop more rapidly, but given enough time the developer will convert all the silver ions into silver metal. Proper temperature and time control are needed to convert the exposed grains to pure silver while keeping the unexposed grains as silver halide crystals.
• Stopping the development - The stop bath simply stops the development process by diluting and washing the developer away with water.
• Fixing - Unexposed silver halide crystals are removed by the fixing bath. The fixer dissolves only silver halide crystals, leaving the silver metal behind.
• Washing - The film is washed with water to remove all the processing chemicals.
• Drying - The film is dried for viewing.
Processing film is a strict science governed by rigid rules of chemical concentration, temperature, time, and physical movement. Whether processing is done by hand or automatically by machine, excellent radiographs require a high degree of consistency and quality control.
Manual Processing & Darkrooms
Manual processing begins with the darkroom. The darkroom should be located in a central location, adjacent to the reading room and a reasonable distance from the exposure area. For portability, darkrooms are often mounted on pickups or trailers.
Film should be stored in a light-tight compartment, which is most often a metal bin. An area next to the film bin that is dry and free of dust and dirt should be used to load and unload the film. Another area, the wet side, should be used to process the film. This separation protects the film from any water or chemicals that may be present on the surface of the wet side.
Each step in film processing must be executed properly to develop the image, wash out residual processing chemicals, and provide adequate shelf life of the radiograph. The objective of processing is twofold: first, to produce a radiograph adequate for viewing, and second, to prepare the radiograph for archival storage. Radiographs are often stored for 20 years or more as a record of the inspection.
Automatic Processor Evaluation
The automatic processor is an essential piece of equipment in every x-ray department. Compared to manual development, an automatic processor reduces film processing time by a factor of four. To monitor the performance of a processor, apart from optimum temperature and mechanical checks, chemical and sensitometric checks should be performed for the developer and fixer. Chemical checks involve measuring the pH values of the developer and fixer as well as both replenishers; the specific gravity and fixer silver levels must also be measured. Ideally, pH should be measured daily, and it is important to record these measurements, as regular logging provides very useful information. The daily pH measurements for the developer and fixer can then be plotted and compared against the normal pH operating levels to observe trends and identify problems.
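A sketch of the kind of daily trend logging described here; the nominal pH value and the ±0.2 tolerance below are illustrative assumptions, not requirements from any code or manufacturer:

def flag_ph_drift(readings, nominal_ph, tolerance=0.2):
    # readings: list of (day, measured_pH) tuples from the daily log.
    # Returns the readings that drift outside nominal +/- tolerance.
    return [(day, ph) for day, ph in readings if abs(ph - nominal_ph) > tolerance]

developer_log = [("Mon", 10.2), ("Tue", 10.3), ("Wed", 10.6)]
print(flag_ph_drift(developer_log, nominal_ph=10.2))  # [('Wed', 10.6)]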
Sensitometric checks may be carried out to evaluate whether the performance of films in the automatic processor is being maximized. These checks involve measurement of the basic fog level, speed, and average gradient, made at 1 °C intervals of temperature. The range of temperature measurement depends on the type of chemistry in use, whether cold or hot developer. These three measurements (fog level, speed, and average gradient) should then be plotted against temperature and compared with the manufacturer's supplied figures.
Radiograph Interpretation - Welds
In addition to producing high quality radiographs, the radiographer must also be skilled in radiographic interpretation. Interpretation of radiographs takes place in three basic steps: (1) detection, (2) interpretation, and (3) evaluation. All of these steps make use of the radiographer's visual acuity, which is the ability to resolve a spatial pattern in an image. The ability of an individual to detect discontinuities in radiography is also affected by the lighting conditions in the place of viewing and by the individual's experience in recognizing various features in the image. The following material was developed to help students develop an understanding of the types of defects found in weldments and how they appear in a radiograph.
Discontinuities
Discontinuities are interruptions in the typical structure of a material. These interruptions may occur in the base metal, weld material or "heat affected" zones. Discontinuities, which do not meet the requirements of the codes or specifications used to invoke and control an inspection, are referred to as defects.
General Welding Discontinuities
The following discontinuities are typical of all types of welding.
Cold lap is a condition where the weld filler metal does not properly fuse with the base metal or the previous weld pass material (interpass cold lap). The arc does not melt the base metal sufficiently and causes the slightly molten puddle to flow into the base material without bonding.

Porosity is the result of gas entrapment in the solidifying metal. Porosity can take many shapes on a radiograph but often appears as dark round or irregular spots or specks appearing singularly, in clusters, or in rows. Sometimes, porosity is elongated and may appear to have a tail. This is the result of gas attempting to escape while the metal is still in a liquid state and is called wormhole porosity. All porosity is a void in the material and it will have a higher radiographic density than the surrounding area.
Cluster porosity is caused when flux-coated electrodes are contaminated with moisture. The moisture turns into a gas when heated and becomes trapped in the weld during the welding process. Cluster porosity appears just like regular porosity in the radiograph, but the indications will be grouped close together.

Slag inclusions are nonmetallic solid material entrapped in weld metal or between weld and base metal. In a radiograph, dark, jagged asymmetrical shapes within the weld or along the weld joint areas are indicative of slag inclusions.

Incomplete penetration (IP) or lack of penetration (LOP) occurs when the weld metal fails to penetrate the joint. It is one of the most objectionable weld discontinuities. Lack of penetration allows a natural stress riser from which a crack may propagate. The appearance on a radiograph is a dark area with well-defined, straight edges that follows the land or root face down the center of the weldment.

Incomplete fusion is a condition where the weld filler metal does not properly fuse with the base metal. Appearance on radiograph: usually appears as a dark line or lines oriented in the direction of the weld seam along the weld preparation or joining area.

Internal concavity or suck back is a condition where the weld metal has contracted as it cools and has been drawn up into the root of the weld. On a radiograph it looks similar to a lack of penetration but the line has irregular edges and it is often quite wide in the center of the weld image.

Internal or root undercut is an erosion of the base metal next to the root of the weld. In the radiographic image it appears as a dark irregular line offset from the centerline of the weldment. Undercutting is not as straight edged as LOP because it does not follow a ground edge.

External or crown undercut is an erosion of the base metal next to the crown of the weld. In the radiograph, it appears as a dark irregular line along the outside edge of the weld area.

Offset or mismatch are terms associated with a condition where two pieces being welded together are not properly aligned. The radiographic image shows a noticeable difference in density between the two pieces. The difference in density is caused by the difference in material thickness. The dark, straight line is caused by the failure of the weld metal to fuse with the land area.

Inadequate weld reinforcement is an area of a weld where the thickness of weld metal deposited is less than the thickness of the base material. It is very easy to determine by radiograph if the weld has inadequate reinforcement, because the image density in the area of suspected inadequacy will be higher (darker) than the image density of the surrounding base material.

Excess weld reinforcement is an area of a weld that has weld metal added in excess of that specified by engineering drawings and codes. The appearance on a radiograph is a localized, lighter area in the weld. A visual inspection will easily determine if the weld reinforcement is in excess of that specified by the engineering requirements.

Cracks can be detected in a radiograph only when they are propagating in a direction that produces a change in thickness that is parallel to the x-ray beam. Cracks will appear as jagged and often very faint irregular lines. Cracks can sometimes appear as "tails" on inclusions or porosity.


Discontinuities in TIG welds
The following discontinuities are unique to the TIG welding process. These discontinuities occur in most metals welded by the process, including aluminum and stainless steel. The TIG method of welding produces a clean, homogeneous weld which, when radiographed, is easily interpreted.
Tungsten inclusions. Tungsten is a brittle and inherently dense material used in the electrode in tungsten inert gas welding. If improper welding procedures are used, tungsten may be entrapped in the weld. Radiographically, tungsten is denser than aluminum or steel; therefore, it shows up as a lighter area with a distinct outline on the radiograph.

Oxide inclusions are usually visible on the surface of material being welded (especially aluminum). Oxide inclusions are less dense than the surrounding material and, therefore, appear as dark irregularly shaped discontinuities in the radiograph.



Discontinuities in Gas Metal Arc Welds (GMAW)
The following discontinuities are most commonly found in GMAW welds.
Whiskers are short lengths of weld electrode wire, visible on the top or bottom surface of the weld or contained within the weld. On a radiograph they appear as light, "wire-like" indications.
Burn-Through results when too much heat causes excessive weld metal to penetrate the weld zone. Often lumps of metal sag through the weld, creating a thick globular condition on the back of the weld. These globs of metal are referred to as icicles. On a radiograph, burn-through appears as dark spots, which are often surrounded by light globular areas (icicles).

Radiograph Interpretation - Castings
The major objective of radiographic testing of castings is the disclosure of defects that adversely affect the strength of the product. Castings are a product form that often receives radiographic inspection, since many of the defects produced by the casting process are volumetric in nature and thus relatively easy to detect with this method. These discontinuities, of course, are related to casting process deficiencies which, if properly understood, can lead to accurate accept-reject decisions as well as to suitable corrective measures. Since different types and sizes of defects have different effects on the performance of the casting, it is important that the radiographer be able to identify the type and size of the defects. ASTM E155, Standard for Radiographs of Castings, has been produced to help the radiographer make a better assessment of the defects found in components. The castings used to produce the standard radiographs have been destructively analyzed to confirm the size and type of discontinuities present. The following is a brief description of the most common discontinuity types included in existing reference radiograph documents (in graded types or as single illustrations).
RADIOGRAPHIC INDICATIONS FOR CASTINGS
Gas porosity or blow holes are caused by accumulated gas or air which is trapped by the metal. These discontinuities are usually smooth-walled rounded cavities of a spherical, elongated or flattened shape. If the sprue is not high enough to provide the necessary heat transfer needed to force the gas or air out of the mold, the gas or air will be trapped as the molten metal begins to solidify. Blows can also be caused by sand that is too fine, too wet, or by sand that has a low permeability so that gas cannot escape. Too high a moisture content in the sand makes it difficult to carry the excessive volumes of water vapor away from the casting. Another cause of blows can be attributed to using green ladles, rusty or damp chills and chaplets.
Sand inclusions and dross are nonmetallic oxides, which appear on the radiograph as irregular, dark blotches. These come from disintegrated portions of mold or core walls and/or from oxides (formed in the melt) which have not been skimmed off prior to the introduction of the metal into the mold gates. Careful control of the melt, proper holding time in the ladle and skimming of the melt during pouring will minimize or obviate this source of trouble.
Shrinkage is a form of discontinuity that appears as dark spots on the radiograph. Shrinkage assumes various forms, but in all cases it occurs because molten metal shrinks as it solidifies, and it can appear in any portion of the final casting. Shrinkage is avoided by making sure that the volume of the casting is adequately fed by risers which sacrificially retain the shrinkage. Shrinkage in its various forms can be recognized by a number of characteristics on radiographs. There are at least four types of shrinkage: (1) cavity; (2) dendritic; (3) filamentary; and (4) sponge types. Some documents designate these types by numbers, without actual names, to avoid possible misunderstanding.
Cavity shrinkage appears as areas with distinct jagged boundaries. It may be produced when metal solidifies between two original streams of melt coming from opposite directions to join a common front. Cavity shrinkage usually occurs at a time when the melt has almost reached solidification temperature and there is no source of supplementary liquid to feed possible cavities.


Dendritic shrinkage is a distribution of very fine lines or small elongated cavities that may vary in density and are usually unconnected.
Filamentary shrinkage usually occurs as a continuous structure of connected lines or branches of variable length, width and density, or occasionally as a network.


Sponge shrinkage shows itself as areas of lacy texture with diffuse outlines, generally toward the mid-thickness of heavier casting sections. Sponge shrinkage may be dendritic or filamentary in form. Filamentary sponge shrinkage appears more blurred because it is projected through the relatively thick casting section between the discontinuities and the film surface.


Cracks are thin (straight or jagged) linearly disposed discontinuities that occur after the melt has solidified. They generally appear singly and originate at casting surfaces.
Cold shuts generally appear on or near a surface of cast metal as a result of two streams of liquid meeting and failing to unite. They may appear on a radiograph as cracks or seams with smooth or rounded edges.



Inclusions are nonmetallic materials in an otherwise solid metallic matrix. They may be less or more dense than the matrix alloy and will appear on the radiograph, respectively, as darker or lighter indications. The latter type is more common in light metal castings.




Core shift shows itself as a variation in section thickness, usually on radiographic views representing diametrically opposite portions of cylindrical casting portions.






Hot tears are linearly disposed indications that represent fractures formed in a metal during solidification because of hindered contraction. The latter may occur due to overly hard (completely unyielding) mold or core walls. The effect of hot tears as a stress concentration is similar to that of an ordinary crack, and hot tears are usually systematic flaws. If flaws are identified as hot tears in larger runs of a casting type, explicit improvements in the casting technique will be required.
Misruns appear on the radiograph as prominent dense areas of variable dimensions with a definite smooth outline. They are mostly random in occurrence and not readily eliminated by specific remedial actions in the process.
Mottling is a radiographic indication that appears as an indistinct area of more or less dense images. The condition is a diffraction effect that occurs on relatively vague, thin-section radiographs, most often with austenitic stainless steel. Mottling is caused by interaction of the object's grain boundary material with low-energy X-rays (300 kV or lower). Inexperienced interpreters may incorrectly consider mottling as indications of unacceptable casting flaws. Even experienced interpreters often have to check the condition by re-radiography from slightly different source-film angles. Shifts in mottling are then very pronounced, while true casting discontinuities change only slightly in appearance.

Radiographic Indications for Casting Repair Welds
Most common alloy castings require welding either in upgrading from defective conditions or in joining to other system parts. It is mainly for reasons of casting repair that these descriptions of the more common weld defects are provided here. The terms appear as indication types in ASTM E390. For additional information, see the Nondestructive Testing Handbook, Volume 3, Section 9 on the "Radiographic Control of Welds."
Slag is nonmetallic solid material entrapped in weld metal or between weld material and base metal. Radiographically, slag may appear in various shapes, from long narrow indications to short wide indications, and in various densities, from gray to very dark.
Porosity is a series of rounded gas pockets or voids in the weld metal, and is generally cylindrical or elliptical in shape.
Undercut is a groove melted in the base metal at the edge of a weld and left unfilled by weld metal. It represents a stress concentration that often must be corrected, and appears as a dark indication at the toe of a weld.
Incomplete penetration, as the name implies, is a lack of weld penetration through the thickness of the joint (or penetration which is less than specified). It is located at the center of a weld and is a wide, linear indication.
Incomplete fusion is lack of complete fusion of some portions of the metal in a weld joint with adjacent metal (either base or previously deposited weld metal). On a radiograph, this appears as a long, sharp linear indication, occurring at the centerline of the weld joint or at the fusion line.
Melt-through is a convex or concave irregularity (on the surface of backing ring, strip, fused root or adjacent base metal) resulting from the complete melting of a localized region but without the development of a void or open hole. On a radiograph, melt-through generally appears as a round or elliptical indication.
Burn-through is a void or open hole in a backing ring, strip, fused root or adjacent base metal.
Arc strike is an indication from a localized heat-affected zone or a change in surface contour of a finished weld or adjacent base metal. Arc strikes are caused by the heat generated when electrical energy passes between the surfaces of the finished weld or base metal and the current source.
Weld spatter occurs in arc or gas welding as metal particles which are expelled during welding. These particles do not form part of the actual weld. Weld spatter appears as many small, light cylindrical indications on a radiograph.
Tungsten inclusions are particles of tungsten, which is denser than the surrounding weld metal; they appear as very light indications on radiographic images. Accept/reject decisions for this defect are generally based on the slag criteria.
Oxidation is the condition of a surface which is heated during welding, resulting in oxide formation on the surface, due to partial or complete lack of purge of the weld atmosphere. The condition is also called sugaring.
Root edge condition shows the penetration of weld metal into the backing ring or into the clearance between the backing ring or strip and the base metal. It appears in radiographs as a sharply defined film density transition.
Root undercut appears as an intermittent or continuous groove in the internal surface of the base metal, backing ring or strip along the edge of the weld root.

Real-time Radiography
Real-time radiography (RTR), or real-time radioscopy, is a nondestructive test (NDT) method whereby an image is produced electronically, rather than on film, so that very little lag time occurs between the item being exposed to radiation and the resulting image. In most instances, the electronic image that is viewed results from the radiation passing through the object being inspected and interacting with a screen of material that fluoresces or gives off light when the interaction occurs. The fluorescent elements of the screen form the image much as the grains of silver form the image in film radiography. The image formed is a "positive image" since brighter areas on the image indicate where higher levels of transmitted radiation reached the screen. This image is the opposite of the negative image produced in film radiography. In other words, with RTR, the lighter, brighter areas represent thinner sections or less dense sections of the test object.
Real-time radiography is a well-established method of NDT, having applications in the automotive, aerospace, pressure vessel, electronic, and munitions industries, among others. The use of RTR is increasing due to a reduction in the cost of the equipment and the resolution of issues such as protecting and storing digital images. Since RTR is being used increasingly often, these educational materials were developed by the North Central Collaboration for NDT Education (NCCE) to introduce RTR to NDT technician students.

Computed Tomography
Computed Tomography (CT) is a powerful nondestructive evaluation (NDE) technique for producing 2-D and 3-D cross-sectional images of an object from flat X-ray images. Characteristics of the internal structure of an object such as dimensions, shape, internal defects, and density are readily available from CT images. Shown below is a schematic of a CT system.

The test component is placed on a turntable stage that is between a radiation source and an imaging system. The turntable and the imaging system are connected to a computer so that the x-ray images collected can be correlated to the position of the test component. The imaging system produces a 2-dimensional shadowgraph image of the specimen just like a film radiograph. Specialized computer software makes it possible to produce cross-sectional images of the test component as if it were being sliced.
How a CT System Works
The imaging system provides a shadowgraph of an object, with the 3-D structure compressed onto a 2-D plane. The density data along one horizontal line of the image is uncompressed and stretched out over an area. This information by itself is not very useful, but when the test component is rotated and similar data for the same linear slice is collected and overlaid, an image of the cross-sectional density of the component begins to develop. To help comprehend how this works, look at the animation below.

In the animation, a single line of density data was collected when a component was at the starting position and then when it was rotated 90 degrees. Use the pull-ring to stretch out the density data in the vertical direction. It can be seen that the lighter area is stretched across the whole region. This lighter area would indicate an area of less density in the component because imaging systems typically glow brighter when they are struck with an increased amount of radiation. When the information from the second line of data is stretched across and averaged with the first set of stretched data, it becomes apparent that there is a less dense area in the upper right quadrant of the component's cross-section. Data collected at more angles of rotation and merged together will further define this feature. In the movie below, a CT image of a casting is produced. It can be seen that the cross-section of the casting becomes more defined as the casting is rotated, X-rayed and the stretched density information is added to the image.
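The stretch-and-average operation described in the animation is known as back-projection. The minimal sketch below, assuming NumPy and SciPy are available, shows how summing many rotated "smears" of one-dimensional density data recovers a cross-section; the toy phantom and angular step are illustrative choices, not the algorithm of any particular CT system.

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(projections, angles_deg, size):
    """Unfiltered back-projection: smear each 1-D line of density data
    across the image plane at its acquisition angle, then average."""
    image = np.zeros((size, size))
    for line, angle in zip(projections, angles_deg):
        smear = np.tile(line, (size, 1))               # "stretch" the line over the plane
        image += rotate(smear, angle, reshape=False, order=1)
    return image / len(angles_deg)

# Toy phantom: uniform square with a low-density region in one quadrant.
size = 64
phantom = np.ones((size, size))
phantom[10:24, 40:54] = 0.2
angles = np.arange(0, 180, 2)                          # one projection every 2 degrees
projections = [rotate(phantom, -a, reshape=False, order=1).sum(axis=0) for a in angles]
recon = backproject(projections, angles, size)         # feature sharpens as angles accumulate
```

With only two projection angles the low-density region is barely localized; as more angles are accumulated, the reconstructed cross-section becomes progressively better defined, just as in the casting movie.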

In the image below left is a set of cast aluminum tensile specimens. A radiographic image of several of these specimens is shown below right.


CT slices through several locations of a specimen are shown in the set of images below.

A number of slices through the object can be reconstructed to provide a 3-D view of internal and external structural details. As shown below, the 3-D image can then be manipulated and sliced in various ways to provide thorough understanding of the structure.

X-Ray Inspection Simulation
One of the most significant recent advances in NDT has been the development and use of computer modeling that allows inspection variables to be scientifically and mathematically evaluated. In a few cases, these models have been combined with a graphical user interface to produce inspection simulation programs that allow engineers and technicians to evaluate the inspectability of a component in a virtual computer environment. One such program, XRSIM, was designed and developed at Iowa State University's Center for Nondestructive Evaluation. The program simulates radiographic inspections using a computer aided design (CAD) model of a part to produce physically accurate simulated radiographic images. XRSIM allows the operator to select a part, input the material properties, and specify the size, location, and properties of a defect. The operator then selects the size and type of film and adjusts the part location and orientation in relationship to the x-ray source. The x-ray generator settings are then specified to generate a desired radiographic film exposure. Exposure variables are quickly and easily revised, allowing the operator to change defect size, material, and part or defect orientation and immediately see the results.
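At the heart of any such simulation is the attenuation of X-rays along each ray through the part. The sketch below illustrates the general idea with a single-energy Beer-Lambert calculation over a slab containing a void; the attenuation coefficient, dimensions, and variable names are illustrative assumptions, not XRSIM's actual model, which accounts for the full tube spectrum and scattering.

```python
import numpy as np

# Beer-Lambert attenuation along each ray: I = I0 * exp(-mu * t).
mu_steel = 0.46                    # 1/cm, illustrative value for steel at ~200 keV
I0 = 1.0                           # incident intensity (relative units)

thickness = np.full(200, 2.54)     # 200 ray paths through a 1-inch steel slab
thickness[90:110] -= 0.3           # a 3 mm void shortens the metal path for some rays

transmitted = I0 * np.exp(-mu_steel * thickness)
# Rays through the void transmit more radiation, exposing the film more and
# producing a darker (higher density) spot, just as in a real radiograph.
```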
The almost instantaneous results produced by simulation programs make them especially valuable in education and training settings. Successful radiography depends on numerous variables that affect the outcome and quality of an image. Many of these variables have a substantial effect on image quality and others have little effect. Using inspection simulation programs, inspections can be modified and the resulting images viewed and evaluated to assess the impact these variables have on the image. Many inspection scenarios can be rapidly modeled since the shot setup and exposure can be quickly accomplished and the film-developing step is eliminated. Not only can a greater number and variety of problems be explored, but also the effects of variables can be learned and self-discovered through experimentation, which is one of the most effective modes of learning. Results are not complicated by unnecessary variables such as film processing variables and artifacts. Distractions unrelated to the primary learning exercise are eliminated. Through the use of simulation programs a more effective understanding of the scientific concepts associated with radiography will be developed.
Another important aspect of the program is that it does not require a real part for the inspections. Inspections can be simulated that would otherwise be impossible or too costly to perform outside the computer environment. Flaws of various shapes, sizes, and materials can be easily introduced into the CAD model to produce a sample set for probability of detection exercises.
It should be noted that densities produced in the simulated images may not match exactly the images produced in the laboratory using similar equipment settings. The differences between the actual and simulated radiographs are due to variations in the X-ray spectrum of various tubes and approximations made in the scattering model used to keep the computation times reasonable. As scattering effects become more dominant, the predicted density will agree less with the actual density on the radiograph. For example, when a one-inch steel sample is radiographed at 250 keV, over half of the total flux reaching the detector is due to scattering.
For more information on how the XRSIM program operates, the users manual is available here for downloading. The educational version of the program is available commercially.
Download the XRSIM Users Manual
Ten X-ray inspection exercises have been developed by the Collaboration for NDT Education that make use of the XRSIM program. Educators can download these lessons from this site. More information on the XRSIM lessons.
Introduction to Acoustic Emission Testing

Acoustic Emission (AE) refers to the generation of transient elastic waves produced by a sudden redistribution of stress in a material. When a structure is subjected to an external stimulus (change in pressure, load, or temperature), localized sources trigger the release of energy, in the form of stress waves, which propagate to the surface and are recorded by sensors. With the right equipment and setup, motions on the order of picometers (10^-12 m) can be identified. Sources of AE vary from natural events like earthquakes and rockbursts to the initiation and growth of cracks, slip and dislocation movements, melting, twinning, and phase transformations in metals. In composites, matrix cracking and fiber breakage and debonding contribute to acoustic emissions. AEs have also been measured and recorded in polymers, wood, and concrete, among other materials.
Detection and analysis of AE signals can supply valuable information regarding the origin and importance of a discontinuity in a material. Because of the versatility of Acoustic Emission Testing (AET), it has many industrial applications (e.g. assessing structural integrity, detecting flaws, testing for leaks, or monitoring weld quality) and is used extensively as a research tool.
Acoustic Emission is unlike most other nondestructive testing (NDT) techniques in two regards. The first difference pertains to the origin of the signal. Instead of supplying energy to the object under examination, AET simply listens for the energy released by the object. AE tests are often performed on structures while in operation, as this provides adequate loading for propagating defects and triggering acoustic emissions.
The second difference is that AET deals with dynamic processes, or changes, in a material. This is particularly meaningful because only active features (e.g. crack growth) are highlighted. The ability to discern between developing and stagnant defects is significant. However, it is possible for flaws to go undetected altogether if the loading is not high enough to cause an acoustic event. Furthermore, AE testing usually provides an immediate indication relating to the strength or risk of failure of a component. Other advantages of AET include fast and complete volumetric inspection using multiple sensors, permanent sensor mounting for process control, and no need to disassemble and clean a specimen.
Unfortunately, AE systems can only qualitatively gauge how much damage is contained in a structure. In order to obtain quantitative results about size, depth, and overall acceptability of a part, other NDT methods (often ultrasonic testing) are necessary. Another drawback of AE stems from loud service environments which contribute extraneous noise to the signals. For successful applications, signal discrimination and noise reduction are crucial.
Acoustic Properties for Metals in Solid Form

Columns: Metal | Longitudinal Velocity (cm/µs, in/µs) | Shear Velocity (cm/µs, in/µs) | Surface Velocity (cm/µs, in/µs) | Density (g/cm3) | Acoustic Impedance (g/cm2-sec x 10^5)
Aluminum 0.632 0.2488 0.313 0.1232 N/A N/A 2.70 17.10
AL 1100-0 (2SO) 0.635 0.25 0.310 0.122 0.290 0.114 2.71 17.20
AL 2014 (14S) 0.632 0.2488 0.307 0.1209 N/A N/A 2.80 17.80
AL 2024 T4 (24ST) 0.637 0.2508 0.316 0.1244 0.295 0.116 2.77 17.60
AL 2117 T4 (17ST) 0.650 0.2559 0.312 0.1228 N/A N/A 2.80 18.20

Babbitt Bearing 0.230 0.0906 N/A N/A N/A N/A 7.4-11.0 23.20
Beryllium 1.29 0.5079 0.888 0.3496 0.787 0.310 1.82 23.50
Bismuth 0.218 0.0858 0.110 0.0433 N/A N/A 9.80 21.40
Brass 0.428 0.1685 0.230 0.0906 N/A N/A 8.56 36.70
Brass, Half Hard 0.383 0.1508 0.205 0.0807 N/A N/A 8.10 31.02
Brass, Naval 0.443 0.1744 0.212 0.0835 0.195 0.0770 8.42 37.3
Bronze, Phospho 0.353 0.139 0.223 0.0878 0.201 0.0790 8.86 31.28

Cadmium 0.278 0.1094 0.150 0.0591 N/A N/A 8.64 24.02
Cesium (28.5°C) 0.0967 0.0381 N/A N/A N/A N/A 1.88 1.82
Columbium 0.492 0.1937 0.210 0.0827 N/A N/A 8.57 42.16
Constantan 0.524 0.2063 0.104 0.0409 N/A N/A 8.88 46.53
Copper 0.466 0.1835 0.233 0.0890 0.193 0.0760 8.93 41.61

Gallium 0.274 0.1079 N/A N/A N/A N/A 5.95 16.3
Germanium 0.541 0.213 N/A N/A N/A N/A 5.47 29.59
Gold 0.324 0.1276 0.120 0.0472 N/A N/A 19.32 62.6
Hafnium 0.384 0.1512 N/A N/A N/A N/A N/A N/A
Inconel 0.572 0.2252 N/A N/A 0.279 0.110 8.25 47.19
Indium (156°C) 0.222 0.0874 N/A N/A N/A N/A 7.30 16.21
Iron 0.590 0.2323 0.323 0.1272 0.279 0.110 7.70 45.43
Iron, Cast 0.480 0.189 0.240 0.0945 N/A N/A 7.80 37.44

Lead 0.216 0.085 0.070 0.0276 0.0630 0.0248 11.4 24.62
Lead 5% Antimony 0.217 0.0854 0.081 0.0319 0.0740 0.0291 10.9 23.65
Magnesium 0.631 0.2484 N/A N/A N/A N/A 1.74 10.98
Magnesium (AM-35) 0.579 0.228 0.310 0.122 0.287 0.113 1.74 10.07
Magnesium (FS-1) 0.547 0.2154 0.303 0.1193 N/A N/A 1.69 9.24
Magnesium (J-1) 0.567 0.2232 0.301 0.1185 N/A N/A 1.70 9.64
Magnesium (M) 0.576 0.2268 0.309 0.1217 N/A N/A 1.75 10.08
Magnesium (O-1) 0.580 0.2283 0.304 0.1197 N/A N/A 1.82 10.56
Magnesium (ZK-60A-TS) 0.571 0.2248 0.305 0.1201 N/A N/A 1.83 10.45
Manganese 0.466 0.1835 0.235 0.0925 N/A N/A 7.39 34.44
Molybdenum 0.629 0.2476 0.335 0.1319 0.311 0.122 10.2 64.16
Monel 0.602 0.237 0.272 0.1071 0.196 0.0772 8.83 53.16

Nickel 0.563 0.2217 0.296 0.1165 0.264 0.104 8.88 49.99
Platinum 0.396 0.1559 0.167 N/A N/A N/A 21.4 84.74
Plutonium 0.179 0.0705 N/A N/A N/A N/A N/A 28.2
Plutonium (1% Gallium) 0.182 0.0717 N/A N/A N/A N/A N/A 28.6
Potassium (100°C) 0.182 0.0717 N/A N/A N/A N/A 0.83 1.51
Radium 0.0822 0.0324 0.111 0.0437 0.103 0.0404 5.0 4.11

Rubidium 0.126 0.0496 N/A N/A N/A N/A 1.53 1.93
Silver 0.360 0.1417 0.159 0.0626 N/A N/A 10.5 37.8
Silver, Nickel 0.462 0.1819 0.232 0.0913 0.169 0.0665 8.75 40.43
Silver, German 0.476 0.1874 N/A N/A N/A N/A 8.70 41.41
Steel, 302 Cres 0.566 0.2228 0.312 0.1228 0.312 0.123 8.03 45.45
Steel, 347 Cres 0.574 0.226 0.309 0.1217 N/A N/A 7.91 45.4
Steel, 410 Cres 0.539 0.212 0.299 0.118 0.216 0.085 7.67 56.68
Steel, 1020 0.589 0.2319 0.324 0.1276 N/A N/A 7.71 45.41
Steel, 1095 0.590 0.2323 0.319 0.1256 N/A N/A 7.80 46.02
Steel, 4150, Rc14 0.586 0.2307 0.279 0.1098 N/A N/A 7.84 45.94
Steel, 4150, Rc18 0.589 0.2319 0.318 0.1252 N/A N/A 7.82 46.06
Steel, 4150, Rc43 0.587 0.2311 0.320 0.126 N/A N/A 7.81 45.84
Steel, 4150, Rc64 0.582 0.2291 0.277 0.1091 N/A N/A 7.80 45.4

Steel, 4340 0.585 0.2303 0.128 0.0504 N/A N/A 7.80 45.63
Tantalum 0.410 0.1614 0.114 0.0449 N/A N/A 16.6 68.06
Thallium (302°C) 0.162 0.0638 N/A N/A N/A N/A 11.9 19.28
Thorium 0.240 0.0945 0.156 0.0614 N/A N/A 11.3 27.12
Tin 0.332 0.1307 0.167 0.0657 N/A N/A 7.29 24.2
Titanium 0.607 0.239 0.331 0.1303 N/A N/A 4.50 27.32

Titanium Carbide 0.827 0.3256 0.516 0.2031 N/A N/A 5.15 42.59
Tungsten 0.518 0.2039 0.287 0.113 0.265 0.104 19.25 99.72
Uranium 0.338 0.1331 0.196 0.0772 N/A N/A 18.9 63.88
Uranium Dioxide 0.518 0.2039 N/A N/A N/A N/A 6.03 31.24
Vanadium 0.600 0.2362 0.278 0.1094 N/A N/A 6.03 36.18
Zinc 0.417 0.1642 0.241 0.0948 N/A N/A 7.10 29.61
Zircaloy 0.472 0.1858 0.236 0.093 N/A N/A 9.03 42.6
Zirconium 0.465 0.1831 0.222 0.0874 N/A N/A 6.48 30.1

A Brief History of AE Testing
Although acoustic emissions can be created in a controlled environment, they also occur naturally, so the origin of AE as a deliberate means of quality control is hard to pinpoint. As early as 6,500 BC, potters were known to listen for audible sounds during the cooling of their ceramics, signifying structural failure. In metal working, the term "tin cry" (audible emissions produced by the mechanical twinning of pure tin during plastic deformation) was coined around 3,700 BC by tin smelters in Asia Minor. The first documented observations of AE appear to have been made in the 8th century by the Arabian alchemist Jabir ibn Hayyan, who wrote that Jupiter (tin) gives off a ‘harsh sound’ when worked, while Mars (iron) ‘sounds much’ during forging.
Many texts in the late 19th century referred to the audible emissions made by materials such as tin, iron, cadmium and zinc. One noteworthy correlation between different metals and their acoustic emissions came from Czochralski, who witnessed the relationship between tin and zinc cry and twinning. Later, Albert Portevin and Francois Le Chatelier observed AE emissions from a stressed Al-Cu-Mn (Aluminum-Copper-Manganese) alloy.

Modern Tensile Testing Machine (H. Cross Company)
The next 20 years brought further verification with the work of Robert Anderson (tensile testing of an aluminum alloy beyond its yield point), Erich Scheil (linked the formation of martensite in steel to audible noise), and Friedrich Forster, who with Scheil related an audible noise to the formation of martensite in high-nickel steel. Experimentation continued throughout the mid-1900s, culminating in the PhD thesis written by Joseph Kaiser entitled "Results and Conclusions from Measurements of Sound in Metallic Materials under Tensile Stress." Soon after becoming aware of Kaiser’s efforts, Bradford Schofield initiated the first research program in the United States to look at the materials engineering applications of AE. Fittingly, Kaiser’s research is generally recognized as the beginning of modern day acoustic emission testing.


Theory - AE Sources
As mentioned in the Introduction, acoustic emissions can result from the initiation and growth of cracks, slip and dislocation movements, twinning, or phase transformations in metals. In any case, AEs originate with stress. When a stress is exerted on a material, a strain is induced in the material as well. Depending on the magnitude of the stress and the properties of the material, an object may return to its original dimensions or be permanently deformed after the stress is removed. These two conditions are known as elastic and plastic deformation, respectively.
The most detectible acoustic emissions take place when a loaded material undergoes plastic deformation or when a material is loaded at or near its yield stress. On the microscopic level, as plastic deformation occurs, atomic planes slip past each other through the movement of dislocations. These atomic-scale deformations release energy in the form of elastic waves which “can be thought of as naturally generated ultrasound” traveling through the object. When cracks exist in a metal, the stress levels present in front of the crack tip can be several times higher than the surrounding area. Therefore, AE activity will also be observed when the material ahead of the crack tip undergoes plastic deformation (micro-yielding).
Two sources of fatigue cracks also cause AEs. The first source is emissive particles (e.g. nonmetallic inclusions) at the origin of the crack tip. Since these particles are less ductile than the surrounding material, they tend to break more easily when the metal is strained, resulting in an AE signal. The second source is the propagation of the crack tip that occurs through the movement of dislocations and small-scale cleavage produced by triaxial stresses.
The amount of energy released by an acoustic emission and the amplitude of the waveform are related to the magnitude and velocity of the source event. The amplitude of the emission is proportional to the velocity of crack propagation and the amount of surface area created. Large, discrete crack jumps will produce larger AE signals than cracks that propagate slowly over the same distance.
Detection and conversion of these elastic waves to electrical signals is the basis of AE testing. Analysis of these signals yields valuable information regarding the origin and importance of a discontinuity in a material. As discussed in the following section, specialized equipment is necessary to detect the wave energy and decipher which signals are meaningful.
Activity of AE Sources in Structural Loading

Basic AE history plot showing Kaiser effect (BCB), Felicity effect (DEF), and emission during hold (GH)
AE signals generated under different loading patterns can provide valuable information concerning the structural integrity of a material. Load levels that have been previously exerted on a material do not produce AE activity. In other words, discontinuities created in a material do not expand or move until that former stress is exceeded. This phenomenon, known as the Kaiser Effect, can be seen in the load versus AE plot to the right. As the object is loaded, acoustic emission events accumulate (segment AB). When the load is removed and reapplied (segment BCB), AE events do not occur again until the load at point B is exceeded. As the load exerted on the material is increased again (BD), AEs are generated and stop when the load is removed. However, at point F, the applied load is high enough to cause significant emissions even though the previous maximum load (D) was not reached. This phenomenon is known as the Felicity Effect. This effect can be quantified using the Felicity Ratio, which is the load where considerable AE resumes, divided by the maximum applied load (F/D).
Knowledge of the Kaiser Effect and Felicity Effect can be used to determine if major structural defects are present. This can be achieved by applying constant loads (relative to the design loads exerted on the material) and “listening” to see if emissions continue to occur while the load is held. As shown in the figure, if AE signals continue to be detected during the holding of these loads (GH), it is likely that substantial structural defects are present. In addition, a material may contain critical defects if an identical load is reapplied and AE signals continue to be detected. Another guideline governing AE’s is the Dunegan corollary, which states that if acoustic emissions are observed prior to a previous maximum load, some type of new damage must have occurred. (Note: Time dependent processes like corrosion and hydrogen embrittlement tend to render the Kaiser Effect useless)
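As a simple illustration, the Felicity Ratio could be computed from logged load and hit-count data along the following lines; the arrays, onset threshold, and function name here are hypothetical, not part of any standard.

```python
import numpy as np

def felicity_ratio(loads, hits, previous_max_load, onset_hits=5):
    """Load at which 'considerable' AE resumes on reloading, divided by the
    previous maximum load. A ratio below 1 indicates the Felicity effect."""
    cumulative = np.cumsum(hits)
    onset = np.argmax(cumulative >= onset_hits)    # first load step with enough AE
    return loads[onset] / previous_max_load

# Hypothetical reload cycle: load steps (kN) and AE hits recorded at each step.
loads = np.linspace(0, 90, 10)
hits = np.array([0, 0, 0, 0, 1, 4, 6, 9, 12, 15])
ratio = felicity_ratio(loads, hits, previous_max_load=100.0)   # 0.5 -> damage suspected
```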
Noise
The sensitivity of an acoustic emission system is often limited by the amount of background noise nearby. Noise in AE testing refers to any undesirable signals detected by the sensors. Examples of these signals include frictional sources (e.g. loose bolts or movable connectors that shift when exposed to wind loads) and impact sources (e.g. rain, flying objects or wind-driven dust) in bridges. Sources of noise may also be present in applications where the area being tested may be disturbed by mechanical vibrations (e.g. pumps).
To compensate for the effects of background noise, various procedures can be implemented. Some possible approaches involve fabricating special sensors with electronic gates for noise blocking, taking precautions to place sensors as far away as possible from noise sources, and electronic filtering (either using signal arrival times or differences in the spectral content of true AE signals and background noise).
Pseudo Sources
In addition to the AE source mechanisms described above, pseudo source mechanisms produce AE signals that are detected by AE equipment. Examples include liquefaction and solidification, friction in rotating bearings, solid-solid phase transformations, leaks, cavitation, and the realignment or growth of magnetic domains (See Barkhausen Effect).
Theory - Acoustic Waves

Primitive AE wave released at a source. The primitive wave is essentially a stress pulse corresponding to a permanent displacement of the material. The ordinate quantities refer to a point in the material.

Angular dependence of acoustic emission radiated from a growing microcrack. Most of the energy is directed in the 90° and 270° directions, perpendicular to the crack surfaces.
Wave Propagation
A primitive wave released at the AE source is illustrated in the figure to the right. The displacement waveform is a step-like function corresponding to the permanent change associated with the source process. The analogous velocity and stress waveforms are essentially pulse-like. The width and height of the primitive pulse depend on the dynamics of the source process. Source processes such as microscopic crack jumps and precipitate fractures are usually completed in a fraction of a microsecond or a few microseconds, which explains why the pulse is short in duration. The amplitude and energy of the primitive pulse vary over an enormous range, from submicroscopic dislocation movements to gross crack jumps.
Waves radiate from the source in all directions, often with a strong directionality that depends on the nature of the source process, as shown in the second figure. Rapid movement is necessary if a sizeable amount of the elastic energy liberated during deformation is to appear as an acoustic emission.
As these primitive waves travel through a material, their form is changed considerably. Elastic wave source and elastic wave motion theories are being investigated to determine the complicated relationship between the AE source pulse and the corresponding movement at the detection site. The ultimate goal of studies of the interaction between elastic waves and material structure is to accurately develop a description of the source event from the output signal of a distant sensor.
However, most materials-oriented researchers and NDT inspectors are not concerned with the intricate knowledge of each source event. Instead, they are primarily interested in the broader, statistical aspects of AE. Because of this, they prefer to use narrow band (resonant) sensors which detect only a small portion of the broadband of frequencies emitted by an AE. These sensors are capable of measuring hundreds of signals each second, in contrast to the more expensive high-fidelity sensors used in source function analysis. More information on sensors will be discussed later in the Equipment section.
The signal that is detected by a sensor is a combination of many parts of the waveform initially emitted. Acoustic emission source motion is completed in a few millionths of a second. As the AE leaves the source, the waveform travels in a spherically spreading pattern and is reflected off the boundaries of the object. Signals that are in phase with each other as they reach the sensor produce constructive interference, which usually results in the highest peak of the waveform being detected. The typical time interval from when an AE wave reflects around the test piece (repeatedly exciting the sensor) until it decays ranges from the order of 100 microseconds in a highly damped, nonmetallic material to tens of milliseconds in a lightly damped metallic material.
Attenuation
The intensity of an AE signal detected by a sensor is considerably lower than the intensity that would have been observed in the close proximity of the source. This is due to attenuation. There are three main causes of attenuation, beginning with geometric spreading. As an AE spreads from its source in a plate-like material, its amplitude decays by 30% every time it doubles its distance from the source. In three-dimensional structures, the signal decays on the order of 50%. This can be traced back to the simple conservation of energy. Another cause of attenuation is material damping, as alluded to in the previous paragraph. While an AE wave passes through a material, its elastic and kinetic energies are absorbed and converted into heat. The third cause of attenuation is wave scattering. Geometric discontinuities (e.g. twin boundaries, nonmetallic inclusions, or grain boundaries) and structural boundaries both reflect some of the wave energy that was initially transmitted.
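These spreading figures follow directly from conservation of energy: in a plate the wavefront spreads as an expanding circle, so amplitude falls as 1/√r (a factor of about 0.7, or 30% decay, per doubling of distance), while in a three-dimensional solid it spreads as a sphere and falls as 1/r (50% per doubling). A quick numerical check, with an illustrative exponential term standing in for material damping:

```python
import numpy as np

r = np.array([1.0, 2.0, 4.0, 8.0])      # distance from the source (arbitrary units)

plate = r ** -0.5                       # 2-D spreading: ~0.707 per doubling (30% decay)
solid = r ** -1.0                       # 3-D spreading: 0.5 per doubling (50% decay)

alpha = 0.05                            # illustrative damping coefficient per unit distance
damped = plate * np.exp(-alpha * r)     # geometric spreading plus material absorption
```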
Measurements of the effects of attenuation on an AE signal can be performed with a simple apparatus known as a Hsu-Nielsen source. This consists of a mechanical pencil with either 0.3 or 0.5 mm 2H lead that is passed through a cone-shaped Teflon shoe designed to place the lead in contact with the surface of a material at a 30 degree angle. When the pencil lead is pressed and broken against the material, it creates a small, local deformation that is relieved in the form of a stress wave, similar to the type of AE signal produced by a crack. By using this method, simulated AE sources can be created at various sites on a structure to determine the optimal position for the placement of sensors and to ensure that all areas of interest are within the detection range of the sensor or sensors.
Wave Mode and Velocity
As mentioned earlier, using AE inspection in conjunction with other NDE techniques can be an effective method in gauging the location and nature of defects. Since source locations are determined by the time required for the wave to travel through the material to a sensor, it is important that the velocity of the propagating waves be accurately calculated. This is not an easy task since wave propagation depends on the material in question and the wave mode being detected. For many applications, Lamb waves are of primary concern because they are able to give the best indication of wave propagation from a source whose distance from the sensor is larger than the thickness of the material. For additional information on Lamb waves, see the wave mode page in the Ultrasonic Inspection section.
Equipment



Acoustic emission testing can be performed in the field with portable instruments or in a stationary laboratory setting. Typically, systems contain a sensor, preamplifier, filter, and amplifier, along with measurement, display, and storage equipment (e.g. oscilloscopes, voltmeters, and personal computers). Acoustic emission sensors respond to dynamic motion that is caused by an AE event. This is achieved through transducers which convert mechanical movement into an electrical voltage signal. The transducer element in an AE sensor is almost always a piezoelectric crystal, which is commonly made from a ceramic such as lead zirconate titanate (PZT). Transducers are selected based on operating frequency, sensitivity and environmental characteristics, and are grouped into two classes: resonant and broadband. The majority of AE equipment is responsive to movement in its typical operating frequency range of 30 kHz to 1 MHz. For materials with high attenuation (e.g. plastic composites), lower frequencies may be used to better distinguish AE signals. The opposite holds true as well.
Ideally, the AE signal that reaches the mainframe will be free of background noise and electromagnetic interference. Unfortunately, this is not realistic. However, sensors and preamplifiers are designed to help eliminate unwanted signals. First, the preamplifier boosts the voltage to provide gain and cable drive capability. To minimize interference, a preamplifier is placed close to the transducer; in fact, many transducers today are equipped with integrated preamplifiers. Next, the signal is relayed to a bandpass filter for elimination of low frequencies (common to background noise) and high frequencies. Following completion of this process, the signal travels to the acoustic system mainframe and eventually to a computer or similar device for analysis and storage. Depending on noise conditions, further filtering or amplification at the mainframe may still be necessary.
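A minimal sketch of the bandpass stage is shown below using SciPy; the 100-300 kHz passband, sample rate, and synthetic burst are illustrative assumptions rather than the settings of any particular instrument.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 2_000_000                                        # 2 MHz sampling rate (illustrative)
sos = butter(4, [100e3, 300e3], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 0.001, 1 / fs)
raw = (np.sin(2 * np.pi * 150e3 * t) * np.exp(-t / 2e-4)   # decaying 150 kHz AE burst
       + 0.5 * np.sin(2 * np.pi * 5e3 * t)                 # low-frequency mechanical noise
       + 0.05 * np.random.randn(t.size))                   # broadband electronic noise

clean = sosfiltfilt(sos, raw)   # burst passes; out-of-band noise is rejected
```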

Schematic Diagram of a Basic Four-channel Acoustic Emission Testing System
After passing the AE system mainframe, the signal comes to a detection/measurement circuit as shown in the figure directly above. Note that multiple-measurement circuits can be used in multiple sensor/channel systems for source location purposes (to be described later). At the measurement circuitry, the shape of the conditioned signal is compared with a threshold voltage value that has been programmed by the operator. Signals are either continuous (analogous to Gaussian, random noise with amplitudes varying according to the magnitude of the AE events) or burst-type. Each time the threshold voltage is exceeded, the measurement circuit releases a digital pulse. The first pulse is used to signify the beginning of a hit. (A hit is used to describe the AE event that is detected by a particular sensor. One AE event can cause a system with numerous channels to record multiple hits.) Pulses will continue to be generated while the signal exceeds the threshold voltage. Once this process has stopped for a predetermined amount of time, the hit is finished (as far as the circuitry is concerned). The data from the hit is then read into a microcomputer and the measurement circuit is reset.
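The hit-framing logic described above can be sketched in a few lines: a hit opens at the first threshold crossing and closes once no crossing has occurred for a preset hit-definition time. The function below is a hypothetical software analogue of the measurement circuit, with an assumed 1 ms timeout.

```python
import numpy as np

def find_hits(signal, fs, threshold, hit_def_time=1e-3):
    """Return (start, end) sample indices of hits: a hit opens at the first
    threshold crossing and closes after hit_def_time with no new crossings."""
    crossings = np.flatnonzero(np.abs(signal) > threshold)
    if crossings.size == 0:
        return []
    gap = int(hit_def_time * fs)       # longest quiet spell allowed inside one hit
    hits, start, last = [], crossings[0], crossings[0]
    for idx in crossings[1:]:
        if idx - last > gap:           # quiet period exceeded: close the current hit
            hits.append((start, last))
            start = idx
        last = idx
    hits.append((start, last))
    return hits
```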
Hit Driven AE Systems and Measurement of Signal Features
Although several AE system designs are available (combining various options, sensitivity, and cost), most AE systems use a hit-driven architecture. The hit-driven design is able to efficiently measure all detected signals and record digital descriptions for each individual feature (detailed later in this section). During periods of inactivity, the system lies dormant. Once a new signal is detected, the system records the hit or hits, and the data is logged for present and/or future display.
Also common to most AE systems is the ability to perform routine tasks that are valuable for AE inspection. These tasks include quantitative signal measurements with corresponding time and/or load readings, discrimination between real and false signals (noise), and the collection of statistical information about the parameters of each signal.
AE Signal Features
With the equipment configured and setup complete, AE testing may begin. The sensor is coupled to the test surface and held in place with tape or adhesive. An operator then monitors the signals which are excited by the induced stresses in the object. When a useful transient, or burst signal, is correctly obtained, parameters like amplitude, counts, measured area under the rectified signal envelope (MARSE), duration, and rise time can be gathered. Each of the AE signal features shown in the image is described below, and a short sketch of how they might be computed follows the list.
Amplitude, A, is the greatest measured voltage in a waveform and is measured in decibels (dB). This is an important parameter in acoustic emission inspection because it determines the detectability of the signal. Signals with amplitudes below the operator-defined, minimum threshold will not be recorded.
Rise time, R, is the time interval between the first threshold crossing and the signal peak. This parameter is related to the propagation of the wave between the source of the acoustic emission event and the sensor. Therefore, rise time is used for qualification of signals and as a criterion for noise filtering.
Duration, D, is the time difference between the first and last threshold crossings. Duration can be used to identify different types of sources and to filter out noise. Like counts (N), this parameter relies upon the magnitude of the signal and the acoustics of the material.
MARSE, E, sometimes referred to as energy counts, is the measure of the area under the envelope of the rectified linear voltage time signal from the transducer. This can be thought of as the relative signal amplitude and is useful because the energy of the emission can be determined. MARSE is also sensitive to the duration and amplitude of the signal, but does not use counts or user defined thresholds and operating frequencies. MARSE is regularly used in the measurements of acoustic emissions.
Counts, N, refers to the number of pulses emitted by the measurement circuitry if the signal amplitude is greater than the threshold. Depending on the magnitude of the AE event and the characteristics of the material, one hit may produce one or many counts. While this is a relatively simple parameter to collect, it usually needs to be combined with amplitude and/or duration measurements to provide quality information about the shape of a signal.
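A sketch of how these five features might be extracted from one digitized hit is given below. The dB-re-1-µV amplitude convention is common in AE work but is an assumption here, and the rectified signal is used directly in place of a true envelope for brevity.

```python
import numpy as np

def signal_features(hit, fs, threshold):
    """Classic AE features from one digitized hit (samples in volts)."""
    rect = np.abs(hit)                               # rectified signal
    above = np.flatnonzero(rect > threshold)
    first, last, peak = above[0], above[-1], np.argmax(rect)

    amplitude = 20 * np.log10(rect[peak] / 1e-6)     # dB relative to 1 microvolt
    rise_time = (peak - first) / fs                  # first crossing to signal peak
    duration = (last - first) / fs                   # first to last crossing
    counts = np.count_nonzero((rect[:-1] <= threshold) & (rect[1:] > threshold))
    marse = np.trapz(rect[first:last + 1], dx=1 / fs)   # area under rectified signal
    return amplitude, rise_time, duration, counts, marse
```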
Data Display
Software-based AE systems are able to generate graphical displays for analysis of the signals recorded during AE inspection. These displays provide valuable information about the detected events and can be classified into four categories: location, activity, intensity, and data quality (crossplots).
Location displays identify the origin of the detected AE events. These can be graphed by X coordinates, X-Y coordinates, or by channel for linear computed-source location, planar computed-source location, and zone location techniques. Examples of each graph are shown to the right.
Activity displays show AE activity as a function of time on an X-Y plot (figure below left). Each bar on the graphs represents a specified amount of time. For example, a one-hour test could be divided into 100 time increments. All activity measured within a given 36 second interval would be displayed in a given histogram bar. Either axis may be displayed logarithmically in the event of high AE activity or long testing periods. In addition to showing measured activity over a single time period, cumulative activity displays (figure below right) can be created to show the total amount of activity detected during a test. This display is valuable for measuring the total emission quantity and the average rate of emission.

Intensity displays are used to give statistical information concerning the magnitude of the detected signals. As can be seen in the amplitude distribution graph to the near right, the number of hits is plotted at each amplitude increment (expressed in dB) beyond the user-defined threshold. These graphs can be used to determine whether a few large signals or many small ones created the detected AE signal energy. In addition, if the Y-axis is plotted logarithmically, the shape of the amplitude distribution can be interpreted to determine the activity of a crack (e.g. a linear distribution indicates growth).
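For instance, the slope of that semi-log amplitude distribution can be estimated with a simple line fit; the histogram below is synthetic, generated to be log-linear by construction.

```python
import numpy as np

# Synthetic amplitude histogram: hits per 1 dB bin above a 40 dB threshold.
amp_db = np.arange(40, 81)
hits = np.round(1e4 * 10 ** (-0.05 * amp_db))        # log-linear by construction

mask = hits > 0
slope, intercept = np.polyfit(amp_db[mask], np.log10(hits[mask]), 1)
# A constant slope on the semi-log plot (here about -0.05 per dB) is the
# straight-line distribution that, as noted above, indicates active crack growth.
```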
The fourth category of AE displays, crossplots, is used for evaluating the quality of the data collected. Counts versus amplitude, duration versus amplitude, and counts versus duration are frequently used crossplots. As shown in the final figure, each hit is marked as a single point, indicating the correlation between the two signal features. The recognized signals from AE events typically form a diagonal band since larger signals usually generate higher counts. Because noise signals caused by electromagnetic interference do not have as many threshold-crossing pulses as typical AE source events, the hits are located below the main band. Conversely, signals caused by friction or leaks have more threshold-crossing pulses than typical AE source events and are subsequently located above the main band. In the case of ambiguous data, expertise is necessary in separating desirable and unwanted hits.
AE Source Location Techniques
Multi-Channel Source Location Techniques:
Locating the source of significant acoustic emissions is often the main goal of an inspection. Although the magnitude of the damage may be unknown after AE analysis, follow-up testing at source locations can provide these answers. As previously mentioned, many AE systems are capable of using multiple sensors/channels during testing, allowing them to record hits from a single AE event at several locations. These AE systems can be used to determine the location of an event source. As hits are recorded by each sensor/channel, the source can be located by knowing the velocity of the wave in the material and the difference in hit arrival times among the sensors, as measured by hardware circuitry or computer software. By properly spacing the sensors in this manner, it is possible to inspect an entire structure with relatively few sensors.
Source location techniques assume that AE waves travel at a constant velocity in a material. However, various effects may alter the expected velocity of the AE waves (e.g. reflections and multiple wave modes) and can affect the accuracy of this technique. Therefore, the geometric effects of the structure being tested and the operating frequency of the AE system must be considered when determining whether a particular source location technique is feasible for a given test structure.
Linear Location Technique
Several source location techniques have been developed based on this method. One of the commonly used computed-source location techniques is the linear location principle shown to the right. Linear location is often used to evaluate struts on truss bridges. When the source is located at the midpoint, the time of arrival difference for the wave at the two sensors is zero. If the source is closer to one of the sensors, a difference in arrival times is measured. To calculate the distance of the source location from the midpoint, the difference in arrival times is multiplied by the wave velocity and divided by two. Whether the location lies to the right or left of the midpoint is determined by which sensor first records the hit. This is a linear relationship and applies to any event sources between the sensors.
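A worked sketch of this calculation follows; the wave speed, sensor spacing, and arrival times are hypothetical.

```python
def linear_source_location(t1, t2, velocity, spacing):
    """Distance of the AE source from the midpoint between two sensors.
    t1, t2: arrival times (s); velocity: wave speed (m/s); spacing: sensor
    separation (m). A positive result lies toward the first-hit sensor."""
    d = (t2 - t1) * velocity / 2.0      # half the arrival-time difference x velocity
    assert abs(d) <= spacing / 2.0, "source lies outside the sensor pair"
    return d

# Example: sensors 2 m apart on a strut, assumed wave speed 3000 m/s.
offset = linear_source_location(t1=1.00e-3, t2=1.25e-3, velocity=3000.0, spacing=2.0)
# offset = 0.375 m from the midpoint, on the side of sensor 1 (it was hit first)
```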
Because the above scenario implicitly assumes that the source is on a line passing through the two sensors, it is only valid for a linear problem. When using AE to identify a source location in a planar material, three or more sensors are used, and the optimal position of the source is between the sensors. Two categories of source location analysis are used for this situation: zonal location and point location.


Zonal Location Technique
As the name implies, zonal location aims to trace the waves to a specific zone or region around a sensor. This method is used in anisotropic materials or in other structures where sensors are spaced relatively far apart or when high material attenuation affects the quality of signals at multiple sensors. Zones can be lengths, areas or volumes depending on the dimensions of the array. A planar sensor array with detection by one sensor is shown in the upper right figure. The source can be assumed to be within the region and less than halfway between sensors.
When additional sensors are applied, arrival times and amplitudes help pinpoint the source zone. The ordered pair in lower right figure represents the two sensors detecting the signal in the zone and the order of signal arrival at each sensor. When relating signal strength to peak amplitude, the largest peak amplitude is assumed to come from the nearest sensor, second largest from the next closest sensor and so forth.
Point Location
In order for point location to be justified, signals must be detected in a minimum number of sensors: two for linear, three for planar, four for volumetric. Accurate arrival times must also be available. Arrival times are often found by using peak amplitude or the first threshold crossing. The velocity of wave propagation and exact position of the sensors are necessary criteria as well. Equations can then be derived using sensor array geometry or more complex algebra to locate more specific points of interest.
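As a sketch, planar point location can be posed as a small least-squares problem: solve for the source coordinates and emission time that best reproduce the measured arrival times. The array geometry, wave speed, and arrival times below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def planar_location(sensors, arrival_times, velocity):
    """Solve for source position (x, y) and emission time t0 that best
    reproduce the measured arrival times at three or more sensors."""
    sensors = np.asarray(sensors, dtype=float)
    times = np.asarray(arrival_times, dtype=float)

    def residuals(p):
        x, y, t0 = p
        dist = np.hypot(sensors[:, 0] - x, sensors[:, 1] - y)
        return times - (t0 + dist / velocity)

    guess = [sensors[:, 0].mean(), sensors[:, 1].mean(), times.min()]
    return least_squares(residuals, guess).x

# Hypothetical 1 m square array and 3000 m/s wave speed; source near (0.8, 0.2).
sensors = [(0, 0), (1, 0), (0, 1), (1, 1)]
times = [2.75e-4, 0.94e-4, 3.77e-4, 2.75e-4]
x, y, t0 = planar_location(sensors, times, velocity=3000.0)
```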

AE Barkhausen Techniques
Barkhausen Effect
The Barkhausen effect refers to the sudden change in size of ferromagnetic domains that occurs during magnetization or demagnetization. During magnetization, favorably oriented domains develop at the cost of less favorably oriented domains. This process results in minute jumps of magnetization when a ferromagnetic sample (e.g. iron) is exposed to an increasing magnetic field (see figure). Domain wall motion itself is determined by many factors like microstructure, grain boundaries, inclusions, and stress and strain. By the same token, the Barkhausen effect is also a function of stress and strain.
Barkhausen Noise
Barkhausen noise can be heard if a coil of wire is wrapped around the sample undergoing magnetization. Abrupt movements in the magnetic field produce spiking current pulses in the coil. When amplified, the clicks can be compared to the sound of Rice Krispies or the crumpling of a candy wrapper. The amount of Barkhausen noise is influenced by material imperfections and dislocations and is likewise dependent on the mechanical properties of a material. Currently, materials exposed to high-energy particles (e.g. in nuclear reactors) or cyclic mechanical stresses (e.g. pipelines) can be nondestructively evaluated using Barkhausen noise, one of the many branches of AE testing.
Applications
Acoustic emission is a very versatile, non-invasive way to gather information about a material or structure. Acoustic Emission testing (AET) can be applied to inspect and monitor pipelines, pressure vessels, storage tanks, bridges, aircraft, bucket trucks, and a variety of composite and ceramic components. It is also used in process control applications such as monitoring welding processes. A few examples of AET applications follow.
Weld Monitoring
During the welding process, temperature changes induce stresses between the weld and the base metal. These stresses are often relieved by heat treating the weld. However, in some cases tempering the weld is not possible and minor cracking occurs. Amazingly, cracking can continue for up to 10 days after the weld has been completed. Using stainless steel welds with known inclusions, and accelerometers for detection and background noise monitoring, W. D. Jolly (1969) found that low-level signals and more sizeable bursts were related to the growth of microfissures and larger cracks, respectively. ASTM E 749-96 is a standard practice for AE monitoring of continuous welding.
Bucket Truck (Cherry Pickers) Integrity Evaluation
Accidents, overloads and fatigue can all occur when operating bucket trucks or other aerial equipment. If a mechanical or structural defect is ignored, serious injury or fatality can result. In 1976, the Georgia Power Company pioneered the aerial manlift device inspection. Testing by independent labs and electrical utilities followed. Although originally intended to examine only the boom sections, the method is now used for inspecting the pedestal, pins, and various other components. Normally, the AE tests are second in a chain of inspections which start with visual checks. If necessary, follow-up tests take the form of magnetic particle, dye penetrant, or ultrasonic inspections. Experienced personnel can perform five to ten tests per day, saving valuable time and money along the way. ASTM F914 governs the procedures for examining insulated aerial personnel devices.
Gas Trailer Tubes
Acoustic emission testing on pressurized jumbo tube trailers was authorized by the Department of Transportation in 1983. Instead of using hydrostatic retesting, where tubes must be removed from service and disassembled, AET allows for in situ testing. A 10% over-pressurization is performed at a normal filling station, with AE sensors attached to the tubes at each end. A multichannel acoustic system is used to detect and map source locations. Suspect locations are further evaluated using ultrasonic inspection, and when defects are confirmed the tube is removed from use. AET can detect subcritical flaws, whereas hydrostatic testing cannot detect cracks until they cause rupture of the tube. Because of the high stresses in the circumferential direction of the tubes, tests are geared toward finding longitudinal fatigue cracks.
Bridges
Bridges contain many welds, joints and connections, and a combination of load and environmental factors heavily influence damage mechanisms such as fatigue cracking and metal thinning due to corrosion. Bridges receive a visual inspection about every two years and when damage is detected, the bridge is either shut down, its weight capacity is lowered, or it is singled out for more frequent monitoring. Acoustic Emission is increasingly being used for bridge monitoring applications because it can continuously gather data and detect changes that may be due to damage without requiring lane closures or bridge shutdown. In fact, traffic flow is commonly used to load or stress the bridge for the AE testing.
Aerospace Structures
Most aerospace structures consist of complex assemblies of components that have been designed to carry significant loads while being as light as possible. This combination of requirements leads to many parts that can tolerate only a minor amount of damage before failing. This fact makes detection of damage extremely important, but components are often packed tightly together, making access for inspections difficult. AET has found applications in monitoring the health of aerospace structures because sensors can be attached in easily accessed areas that are remotely located from damage-prone sites. AET has been used in laboratory structural tests, as well as in flight test applications. NASA's Wing Leading Edge Impact Detection System is partially based on AE technology. The image to the right shows a technician applying AE transducers on the inside of the Space Shuttle Discovery wing structure. The impact detection system was developed to alert NASA officials to events such as the sprayed-on-foam insulation impact that damaged the Space Shuttle Columbia's wing leading edge during launch and led to its breakup on reentry into the Earth's atmosphere.
Others
• Fiber-reinforced polymer-matrix composites, in particular glass-fiber reinforced parts or structures (e.g. fan blades)
• Material research (e.g. investigation of material properties, breakdown mechanisms, and damage behavior)
• Inspection and quality assurance (e.g. wood drying processes, scratch tests)
• Real-time leakage test and location within various components (small valves, steam lines, tank bottoms)
• Detection and location of high-voltage partial discharges in transformers
• Railroad tank car and rocket motor testing
There are a number of standards and guidelines that describe AE testing and application procedures, as supplied by the American Society for Testing and Materials (ASTM). Examples are ASTM E1932 for the AE examination of small parts and ASTM E1419-00 for the method of examining seamless, gas-filled pressure vessels.
Remote Field Testing (RFT)
Remote Field Testing or "RFT" is one of several electromagnetic testing methods commonly employed in the field of nondestructive testing. Other electromagnetic inspection methods include magnetic flux leakage, conventional eddy current and alternating current field measurement testing. Remote field testing is associated with eddy current testing and the term "Remote Field Eddy Current Testing" is often used when describing remote field testing. However, there are several major differences between eddy current testing and remote field testing which will be noted in this section.
RFT is primarily used to inspect ferromagnetic tubing since conventional eddy current techniques have difficulty inspecting the full thickness of the tube wall due to the strong skin effect in ferromagnetic materials. For example, using conventional eddy current bobbin probes to inspect a steel pipe 10 mm thick (such as might be found in heat exchangers) would require frequencies around 30 Hz to achieve adequate I.D.-to-O.D. penetration through the tube wall. The use of such a low frequency results in very low sensitivity of flaw detection. The degree of penetration can, in principle, be increased by the use of partial saturation eddy current probes, magnetically biased probes, and pulsed saturation probes. However, because of the large volume of metal present, as well as potential permeability variations within the product, these specialized eddy current probes are still limited in their inspection capabilities.
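The frequency figure above can be checked with the standard skin-depth formula, δ = 1/√(πfμσ). The sketch below computes one standard depth of penetration using assumed, typical-order carbon steel properties; both permeability and conductivity vary widely in practice, so this is illustrative only.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def skin_depth_m(freq_hz, relative_permeability, conductivity_s_per_m):
    """Standard depth of penetration for a plane electromagnetic wave."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU_0
                           * relative_permeability * conductivity_s_per_m)

# Assumed carbon steel values: relative permeability ~100, conductivity
# ~5e6 S/m.
for f_hz in (30, 1000):
    print(f"{f_hz:>5} Hz: one skin depth ~ "
          f"{skin_depth_m(f_hz, 100, 5e6) * 1000:.1f} mm")
```

At 30 Hz one skin depth works out to roughly 4 mm, so a 10 mm wall is reachable only at the very low end of the frequency range, while at ordinary eddy current frequencies of 1 kHz and above the field barely penetrates beyond the inner surface.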
The difficulties encountered in the testing of ferromagnetic tubes can be greatly alleviated with the use of the remote field testing method. The RFT method has the advantage of allowing nearly equal sensitivities of detection at both the inner and outer surfaces of a ferromagnetic tube. The method is highly sensitive to variations in wall thickness and tends to be less sensitive to fill-factor changes between the coil and tube. RFT can be used to inspect any conducting tubular product, but it is generally considered to be less sensitive than conventional eddy current techniques when inspecting nonferromagnetic materials.
RFT Theory of Operation
A probe consisting of an exciter coil and one or more detectors is pulled through the tube. The exciter coil and the detector coil(s) are rigidly fixed at an axial distance of two tube diameters or more between them. The exciter coil is driven with a relatively low frequency sinusoidal current to produce a magnetic field.

This changing magnetic field induces strong circumferential eddy currents which extend axially, as well as radially in the tube wall.

These eddy currents, in turn, produce their own magnetic field, which opposes the magnetic field from the exciter coil. Due to resistance in the tube wall and imperfect inductive coupling, the magnetic field from the eddy currents does not fully counterbalance the magnetic exciting field. However, since the eddy current field is more spread out than the exciter field, the magnetic field from the eddy currents extends farther along the tube axis. The interaction between the two fields is fairly complex but the simple fact is that the exciter field is dominant near the exciter coil and the eddy current field becomes dominant at some distance away from the exciter coil.

The receiving coils are positioned at a distance where the magnetic field from the eddy currents is dominant. In other words, they are placed at a distance where they are unaffected by the magnetic field from the exciter coil but can still adequately measure the field strength of the secondary magnetic field. Electromagnetic induction occurs as the changing magnetic field cuts across the pick-up coil array. By monitoring the consistency of the voltage induced in the pick-up coils, one can detect changes in the test specimen. The strength of the magnetic field at this distance from the excitation coil is fairly weak, but it is sensitive to changes in the pipe wall from the I.D. to the O.D.

The Zones

Direct Couple Zone
The region where the magnetic field from the exciter coil interacts with the tube wall to produce a concentrated field of eddy currents is called the direct field or direct coupled zone. This zone does not contribute much useful data to the RFT inspection because the intense, varying magnetic field from the excitation coil produces rather high noise levels.
Transition Zone
The region just outside the direct couple zone is known as the transition zone. In this zone there is a great deal of interaction between the magnetic flux from the exciter coil and the flux induced by the eddy currents. As can be seen in the graph, the interaction of the two opposing fields is strongest near the ID of the tube and fairly subtle at the OD of the tube. The resultant field strength (the sum of the two fields) in this region tends to change abruptly on the ID because of the differing directional characteristics of the two fields.

The receiver coil's signal phase, with respect to the exciter coil, as a function of distance between the two coils is also shown in the graph. When the two coils are directly coupled and there is no interference from a secondary field, their currents are in phase, as seen at location zero. In the transition zone, the phase shifts swiftly, indicating the location where the magnetic field from the eddy currents becomes dominant and the remote field begins.
Remote Field Zone
The remote field zone is the region in which direct coupling between the exciter coil and the receiver coil(s) is negligible. Coupling takes place indirectly through the generation of eddy currents and their resulting magnetic field. The remote field zone starts to occur at approximately two tube diameters away from the exciter coil. The amplitude of the field strength on the OD actually exceeds that of the ID after an axial distance of approximately 1.65 tube diameters. Therefore, RFT is sensitive to changes in material that occur at the outside diameter as well as the inside diameter of the tube.
RFT Probes
Probes for inspection of pipe and tubing are typically of the bobbin (ID) variety. These probes use either a single or dual excitation coil to develop an electromagnetic field through the pipe or tube. The excitation coils are driven by alternating current. The sensing coil or coils are located a few tube diameters away in the remote field zone. Probes can be used in differential or absolute modes for detection of general discontinuities, pitting, and variations from the I.D. in ferromagnetic tubing. To ensure maximum sensitivity, each probe is specifically designed for the inside diameter, composition, and wall thickness of a particular tube.

RFT Instrumentation
Instruments used for RFT inspection are often dual use eddy current / RFT instruments employing multi-frequency technology. The excitation current from these instruments is passed on to the probe that contains an exciter coil, sometimes referred to as the driver coil. The receiving coil voltage is typically in the microvolt range, so an amplifier is required to boost the signal strength.
Certain systems incorporate a probe excitation method known as multiplexing, which uses extremely high-speed switching to excite the probe at more than one frequency in sequence. Another method of coil excitation is simultaneous injection, in which the exciter coil is driven with multiple frequencies at the same time and filter schemes are used to subtract aspects of the acquired data. The instrument monitors the pickup coils and passes the data to the display section of the instrument. Some systems can also record the data to a storage device for later review.
RFT Signal Interpretation
The signals obtained with RFT are very similar to those obtained with conventional eddy current testing. When all the proper conditions are met, changes in the phase of the receiver signal with respect to the phase of the exciter voltage are directly proportional to the total wall thickness within the inspection area. Localized changes in wall thickness result in phase and amplitude changes, which can be indicative of defects such as cracks, corrosion pitting or corrosion/erosion thinning.
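A commonly cited simplification behind this proportionality is that the through-wall field crosses the wall twice, once leaving the exciter region and once returning at the detector, accumulating roughly one radian of phase lag per skin depth traversed. The sketch below inverts that relation for wall thickness; the material properties, frequency, and phase readings are all assumed for illustration, and a real inspection would rely on reference standards as discussed below.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space (H/m)

def wall_thickness_m(phase_lag_rad, freq_hz, rel_perm, conductivity_s_per_m):
    """Invert the approximation phase lag ~ 2 * t / (skin depth) for t."""
    inverse_skin_depth = math.sqrt(math.pi * freq_hz * MU_0
                                   * rel_perm * conductivity_s_per_m)
    return phase_lag_rad / (2.0 * inverse_skin_depth)

# Assumed steel (relative permeability 100, conductivity 5e6 S/m) at 100 Hz:
nominal = wall_thickness_m(4.44, 100, 100, 5e6)   # ~5.0 mm of wall
thinned = wall_thickness_m(3.60, 100, 100, 5e6)   # ~4.1 mm of wall
print(f"estimated wall loss: {(1 - thinned / nominal) * 100:.0f}%")
```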

RFT Reference Standards
Reference standards for the RFT inspection of tubular products come in many variations. In order to produce reliable and consistent test results, the material used for manufacturing calibration standards must closely match the physical and chemical properties of the inspection specimen. Some of the important properties that must be considered include conductivity, permeability and alloy content. In addition, tube dimensions including I.D., O.D. and wall thickness must also be controlled.
The type of damage mechanisms expected to be encountered must also be carefully considered when developing or selecting a reference standard. In order to get accurate quantitative data, artificial discontinuities are typically machined into the standards to closely match the conditions that may be found in the tubing bundle.

Introduction to Thermal Testing
(AKA Thermal Inspection, Thermography, Thermal Imaging, Thermal Wave Imaging and Infrared Testing)

(Photo courtesy of NASA/JPL-Caltech/IPAC)
Thermal NDT methods involve the measurement or mapping of surface temperatures as heat flows to, from and/or through an object. The simplest thermal measurements involve making point measurements with a thermocouple. This type of measurement might be useful in locating hot spots, such as a bearing that is wearing out and starting to heat up due to an increase in friction.
In its more advanced form, the use of thermal imaging systems allows thermal information to be collected very rapidly over a wide area and in a non-contact mode. Thermal imaging systems are instruments that create pictures of heat flow rather than of light. Thermal imaging is a fast, cost-effective way to perform detailed thermal analysis. The image above is a heat map of the space shuttle as it lands.
Thermal measurement methods have a wide range of uses. They are used by the police and military for night vision, surveillance, and navigation aids; by firemen and emergency rescue personnel for fire assessment, and for search and rescue; by the medical profession as a diagnostic tool; and by industry for energy audits, preventative maintenance, process control and nondestructive testing. The basic premise of thermographic NDT is that the flow of heat from the surface of a solid is affected by internal flaws such as disbonds, voids or inclusions. The use of thermal imaging systems for industrial NDT applications will be the focus of this material.
Partial History of Thermal Testing
The detection of thermal energy is not a problem for the human body. Some sources say that the nerve endings in human skin respond to temperature changes as small as 0.009°C (0.0162°F). While humans have always had the ability to detect thermal energy, they had no way to quantify temperature until a few hundred years ago. A few of the more significant thermal measurement advances are discussed in the following paragraphs.
The Thermometer
The ancient Greeks knew that air expands when heated. This knowledge was eventually used to develop the thermoscope, which traps air in a bulb so that the size of the bulb changes as the air expands or contracts in response to a temperature increase or decrease. The image on the right shows the first published sketch of a thermoscope, drawn by the Italian inventor Santorio Santorii. The next step in making a thermometer was to apply a scale to measure the expansion and relate it to heat. Some references say that Galileo Galilei invented a rudimentary water thermometer in 1593, but there is no surviving documentation to support this. Therefore, Santorii is regarded as the inventor of the thermometer, for he published the earliest account of it in 1612. Gabriel Fahrenheit invented the first mercury thermometer in 1714.
Infrared Energy
Sir William Herschel, an astronomer, is credited with the discovery of infrared energy in 1800. Knowing that sunlight was made up of all the colors of the spectrum, Herschel wanted to explore the colors and their relationship to heat. He devised an experiment using a prism to spread the light into the color spectrum and thermometers with blackened bulbs to measure the temperatures of the different colors. Herschel observed an increase in temperature from violet to red and observed that the hottest temperature was actually beyond red light. Herschel termed the radiation causing the heating beyond the visible red range "calorific rays." Today, it is called "infrared" energy.
The Seebeck Effect (Thermocouples)
In 1821, Thomas Johann Seebeck found that a circuit made from two dissimilar metals, with junctions at different temperatures, would deflect a compass needle. He initially believed this was due to magnetism induced by a temperature difference, but soon realized that it was an electrical current that was induced. More specifically, the temperature difference produces an electric potential (voltage) which can drive electric current in a closed circuit. Today, this is known as the Seebeck effect.
The voltage difference, ΔV, produced across the terminals of an open circuit made from a pair of dissimilar metals, A and B, whose two junctions are held at different temperatures, is directly proportional to the difference between the hot and cold junction temperatures, Th - Tc. The Seebeck voltage does not depend on the distribution of temperature along the metals between the junctions. This is the physical basis for a thermocouple, which was invented by Nobili in 1829.
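As a toy illustration of the relation above, the snippet below applies a single constant Seebeck coefficient. The coefficient is an assumed, type-K-like value; real thermocouples are nonlinear and are evaluated with published reference tables rather than one constant.

```python
S_AB = 41e-6  # assumed effective Seebeck coefficient, V per degC (type-K-like)

def seebeck_voltage(t_hot_c, t_cold_c, s_ab=S_AB):
    """Open-circuit voltage of a two-metal circuit: V = S_AB * (Th - Tc)."""
    return s_ab * (t_hot_c - t_cold_c)

# Hot junction at 100 degC, reference junction at 0 degC -> about 4.1 mV.
print(f"{seebeck_voltage(100.0, 0.0) * 1000:.2f} mV")
```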
Noncontact Thermal Detectors
Melloni soon used thermocouple technology to produce a device called the thermopile. A thermopile is made of thermocouple junction pairs connected electrically in series. The absorption of thermal radiation by one of the thermocouple junctions, called the active junction, increases its temperature. The differential temperature between the active junction and a reference junction kept at a fixed temperature produces an electromotive force directly proportional to the differential temperature created. This effect is called a thermoelectric effect. Melloni was able to show that a person 30 feet away could be detected by focusing his or her thermal energy on the thermopile. Thermopile detectors are used today in spectrometers, process temperature monitoring, fire and flame detection, presence monitoring, and a number of other non-contact temperature measurement devices. A device similar to the thermopile measures a change in electrical resistance rather than a voltage change. This device was named the bolometer, and in 1880 it was shown that it could detect a cow over 1000 feet away.
During World War I, Case became the first to experiment with photoconducting detectors. These thallium sulfide detectors produced signals due to the direct interaction of infrared photons and were faster and much more sensitive than other thermal detectors that functioned from being heated. During World War II, photoconductive or quantum detectors were further refined and this resulted in a number of military applications, such as target locating, tracking, weapons guiding and intelligence gathering.
Imaging Systems
Application areas expanded to surveillance and intrusion during the Vietnam era. Shortly thereafter space-based applications for natural resource and pollution monitoring and astronomy were developed. IR imaging technology developed for the military spilled over into commercial markets in the 1960s. Initial applications were in laboratory level R&D, preventative maintenance applications, and surveillance. The first portable systems suitable for NDT applications were produced in the 1970s. These systems utilized a cooled scanned detector and the image quality was poor by today's standards. However, infrared imaging systems were soon being widely used for a variety of industrial and medical applications.
In the late 1980s, the US military released focal plane array (FPA) technology into the commercial marketplace. The FPA uses a large array of tiny IR-sensitive semiconductor detectors, similar to those used in charge-coupled device (CCD) cameras. This resulted in a dramatic increase in image quality. Concurrently, advances in computer technology and image processing programs helped to simplify data collection and improve data interpretation.
Current State
In 1992, the American Society for Nondestructive Testing officially adopted infrared testing as a standard test method. Today, a wide variety of thermal measurement equipment is commercially available and the technology is heavily used by industry. Researchers continue to improve systems and explore new applications.
Scientific Principles of Thermal Testing
Thermal Energy
Energy can come in many forms, and it can change from one form to another but can never be lost. This is the First Law of Thermodynamics. A byproduct of nearly all energy conversion is heat, which is also known as thermal energy. When there is a temperature difference between two objects or two areas within the same object, heat transfer occurs. Heat energy transfers from the warmer areas to the cooler areas until thermal equilibrium is reached. This is the Second Law of Thermodynamics. When the temperature of an object is the same as the surrounding environment, it is said to be at ambient temperature.
Heat Transfer Mechanisms
Thermal energy transfer occurs through three mechanisms: conduction, convection, and radiation. Conduction occurs primarily in solids, and to a lesser degree in fluids, as warmer, more energetic molecules transfer their energy to cooler adjacent molecules. Convection occurs in liquids and gases and involves the mass movement of molecules, such as when a fluid is stirred or mixed.
The third way that heat is transferred is through electromagnetic radiation of energy. Radiation needs no medium to flow through and, therefore, can occur even in a vacuum. Electromagnetic radiation is produced when electrons lose energy and fall to a lower energy state. Both the wavelength and intensity of the radiation are directly related to the temperature of the surface molecules or atoms.
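For reference, the three mechanisms follow well-known textbook rate laws (standard heat-transfer results, not stated in the source text), where q is the heat flow, A the area, k the thermal conductivity, h the convection coefficient, ε the emissivity and σ the Stefan-Boltzmann constant:

```latex
\begin{align*}
  q_{\mathrm{cond}} &= -k A \,\frac{dT}{dx}
      && \text{(Fourier's law: conduction)} \\
  q_{\mathrm{conv}} &= h A \,(T_s - T_\infty)
      && \text{(Newton's law of cooling: convection)} \\
  q_{\mathrm{rad}}  &= \varepsilon \sigma A \,(T_s^4 - T_{\mathrm{surr}}^4)
      && \text{(Stefan--Boltzmann law: radiation)}
\end{align*}
```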
Wavelength of Thermal Energy

The wavelength of thermal radiation extends from 0.1 microns to several hundred microns. As highlighted in the image, this means that not all of the heat radiated from an object will be visible to the human eye… but the heat is detectable. Consider the gradual heating of a piece of steel. With the application of a heat source, heat radiating from the part is felt long before a change in color is noticed. If the heat intensity is great enough and applied for long enough, the part will gradually change to a red color. The heat that is felt prior to the part changing color is the radiation that lies in the infrared frequency spectrum of electromagnetic radiation. Infrared (IR) radiation has a wavelength that is longer than visible light or, in other words, greater than 700 nanometers. As the wavelength of the radiation shortens, it reaches the point where it is short enough to enter the visible spectrum and can be detected with the human eye.
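The shift of the peak emission wavelength with temperature is described by Wien's displacement law, λ_peak = b/T with b ≈ 2898 µm·K. The quick calculation below (standard physics constants, with illustrative temperatures) shows why the heat of warm steel is felt in the infrared long before any red glow appears:

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, micrometer * kelvin

def peak_wavelength_um(temp_k):
    """Wavelength of peak blackbody emission, in micrometers."""
    return WIEN_B_UM_K / temp_k

for label, t_k in [("room-temperature part", 293),
                   ("barely glowing steel (~600 C)", 873),
                   ("the sun's surface", 5778)]:
    print(f"{label}: peak ~ {peak_wavelength_um(t_k):.1f} um")
# ~9.9 um, ~3.3 um, ~0.5 um: only very hot objects peak near visible light,
# so most radiated heat falls in the infrared band.
```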
An infrared camera has the ability to detect and display infrared energy. Below is an infrared image of an ice cube melting. Note the temperature scale on the side, which shows warm areas in red and cool areas in purple. It can be seen that the ice cube is colder than the surrounding air and is absorbing heat at its surface. The basis for infrared imaging technology is that any object whose temperature is above 0 K (absolute zero) radiates infrared energy. Even very cold objects radiate some infrared energy. Even though an object might be absorbing thermal energy to warm itself, it will still emit some infrared energy that is detectable by sensors. The amount of radiated energy is a function of the object's temperature and its relative efficiency of thermal radiation, known as emissivity.

(Photo courtesy of NASA/JPL-Caltech/IPAC)
Emissivity
A very important consideration in radiation heat transfer is the emissivity of the object being evaluated. Emissivity is a measure of a surface's efficiency in transferring infrared energy. It is the ratio of thermal energy emitted by a surface to the energy emitted by a perfect blackbody at the same temperature. A perfect blackbody exists only in theory; it is an object that absorbs all incident energy and re-emits all of its energy. Human skin is nearly a perfect blackbody, with an emissivity of 0.98 regardless of actual skin color.
If an object has low emissivity, IR instruments will indicate a lower temperature than the true surface temperature. For this reason, most systems and instruments provide the ability for the operator to adjust the emissivity of the object being measured. Sometimes, spray paints, powders, tape or "emissivity dots" are used to improve the emissivity of an object.
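As a rough illustration of why this adjustment matters, the sketch below applies a simplified total-radiation (Stefan-Boltzmann) correction that ignores atmospheric losses and the camera's spectral band. The emissivity and temperatures are assumed values, chosen only to show the size of the effect.

```python
def true_surface_temp_k(apparent_k, emissivity, reflected_ambient_k=293.0):
    """Correct an apparent (blackbody-equivalent) reading for emissivity.

    Simplified model: detected signal ~ e * Ts**4 + (1 - e) * Tamb**4,
    solved here for the true surface temperature Ts.
    """
    t4 = (apparent_k**4
          - (1.0 - emissivity) * reflected_ambient_k**4) / emissivity
    return t4 ** 0.25

# A shiny, low-emissivity surface (e ~ 0.2) reading 320 K apparent is
# actually about 389 K (roughly 116 C) -- far hotter than it looks.
print(f"{true_surface_temp_k(320.0, 0.2):.0f} K")
```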
Equipment - Detectors
Thermal energy detection and measurement equipment comes in a large variety of forms and levels of sophistication. One way to categorize the equipment and materials is to separate thermal detectors from quantum (photon) detectors. The basic distinction between the two is that thermal detectors depend on a two-step process. The absorption of thermal energy in these detectors raises the temperature of the device, which in turn changes some temperature-dependent parameter, such as electrical conductivity. Quantum devices detect photons from infrared radiation. Quantum detectors are much more sensitive but require cooling to operate properly.
Thermal Detectors
Thermal detectors include heat-sensitive coatings, thermoelectric devices and pyroelectric devices. Heat-sensitive coatings range from simple wax-based substances that are blended to melt at certain temperatures to specially formulated paints and greases that change color as temperature changes. Heat-sensitive coatings are relatively inexpensive but do not provide good quantitative data.
Thermoelectric devices include thermocouples, thermopiles (shown right), thermistors and bolometers. These devices produce an electrical response based on a change in temperature of the sensor. They are often used for point or localized measurement in a contact or near-contact mode. However, thermal sensors can be miniaturized; for example, microbolometers are the active elements in some high-tech portable imaging systems, such as those used by fire departments. The benefits of thermal detectors are that the element does not need to be cooled and they are comparatively low in price. Thermal detectors are used to measure temperature in everything from home appliances to fire and intruder detection systems to industrial furnaces to rockets.
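As a simple example of the thermistor branch of this family, the snippet below converts a measured resistance to temperature using the common Beta model. The R0, T0, and Beta constants are assumed, generic NTC values, not taken from this text.

```python
import math

def thermistor_temp_k(resistance_ohm, r0_ohm=10_000.0, t0_k=298.15,
                      beta_k=3950.0):
    """Convert a measured NTC thermistor resistance to temperature.

    Beta model: R = R0 * exp(B * (1/T - 1/T0)), solved here for T.
    """
    return 1.0 / (1.0 / t0_k + math.log(resistance_ohm / r0_ohm) / beta_k)

# 8 kohm measured on an assumed 10-kohm-at-25C part -> a little above 30 C.
print(f"{thermistor_temp_k(8000.0) - 273.15:.1f} C")
```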
Quantum (Photon) Detectors
Unlike thermal detectors, quantum detectors do not rely on the conversion of incoming radiation to heat, but convert incoming photons directly into an electrical signal. When photons in a particular range of wavelengths are absorbed by the detector, they create free electron-hole pairs, which can be detected as electrical current. The signal output of a quantum detector is very small and is overshadowed by noise generated internally in the device at room temperature. Since this noise within a semiconductor is partly proportional to temperature, quantum detectors are operated at cryogenic temperatures [i.e., down to 77 K (liquid nitrogen) or 4 K (liquid helium)] to minimize noise. This cooling requirement is a significant disadvantage in the use of quantum detectors. However, their superior electronic performance still makes them the detector of choice for the bulk of thermal imaging applications. Some systems can detect temperature differences as small as 0.07°C.
Quantum detectors can be further subdivided into photoconductive and photovoltaic devices. The operation of photoconductive detectors is based on the photogeneration of charge carriers (electrons, holes or electron-hole pairs), which increase the conductivity of the device material. Possible materials used for photoconductive detectors include indium antimonide (InSb), quantum well infrared photodetectors (QWIP), mercury cadmium telluride (mercad, MCT), lead sulfide (PbS), and lead selenide (PbSe).
Photovoltaic devices require an internal potential barrier with a built-in electric field in order to separate photo-generated electron-hole pairs. Such potential barriers can be created by the use of p-n junctions or Schottky barriers. Examples of photovoltaic infrared detector types are indium antimonide (InSb), mercury cadmium telluride (MCT), platinum silicide (PtSi), and silicon Schottky barriers.
Detector Cooling
There are several different ways of cooling the detector to the required temperature. In the early days of thermal imaging, liquid nitrogen was poured into imagers to cool the detector. Although satisfactory, the logistical and safety implications led to the development of other cooling methods. High-pressure gas can be used to cool a detector to the required temperatures; the gas is allowed to rapidly expand in the cooling system, and this expansion results in a significant reduction in the temperature of the gas. Mechanical cooling systems are now the standard for portable imaging systems. These have the logistical advantage of freeing the detection system from the requirement of carrying high-pressure gases or liquid nitrogen.
Equipment - Imaging Technology
Imaging Systems
Thermal imaging instruments measure radiated infrared energy and convert the data to corresponding maps of temperatures. A true thermal image is a gray scale image with hot items shown in white and cold items in black. Temperatures between the two extremes are shown as gradients of gray. Some thermal imagers have the ability to add color, which is artificially generated by the camera's video enhancement electronics, based upon the thermal attributes seen by the camera. Some instruments provide temperature data at each image pixel. Cursors can be positioned on each point, and the corresponding temperature is read out on the screen or display. Images may be digitized, stored, manipulated, processed and printed out. Industry-standard image formats, such as the tagged image file format (TIFF), permit files to work with a wide array of commercially available software packages.
Images are produced either by scanning a detector (or group of detectors) or by using a focal plane array. A scanning system in its simplest form could involve a single element detector scanning along each line in the frame (serial scanning). In practice, this would require very high scan speeds, so a series of elements are commonly scanned as a block along each line. The use of multiple elements eases the scan speed requirement, but the scan speed and channel bandwidth requirements are still high. Multiple element scans do, however, result in a high degree of uniformity. The frame movement can be provided by frame scanning optics (using mirrors) or, in the case of line scan type imagers, by the movement of the imager itself. Another method is to use a number of elements scanning in parallel (parallel scanning). These scanners have one element per line and scan several lines simultaneously. Scan speeds are lower, but this method can give rise to poor image uniformity.
Another way thermal images are produced is with focal plane arrays (FPAs), which are also known as staring arrays. A focal plane array is a group of sensor elements organized into a rectangular grid. A high magnification image of a portion of a microbolometer focal plane array is shown to the right. The entire scene is focused on the array, and each element cell then provides an output dependent upon the infrared radiation falling upon it. The spatial resolution of the image is determined by the number of pixels in the detector array. Common formats for commercial infrared detectors are 320 by 240 pixels (320 columns, 240 rows) and 640 by 480; the latter format is close to the resolution of a standard TV. Spatial resolution, the ability to measure temperatures on small areas, can be as fine as 15 microns. Temperature resolution, the ability to measure small temperature differences, can be as fine as 0.1°C.
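How pixel count translates into spatial resolution on the object can be estimated from the instantaneous field of view (IFOV) of a single pixel, roughly the detector pitch divided by the lens focal length. The pitch and focal length below are assumed, illustrative values.

```python
def spot_size_mm(pixel_pitch_um, focal_length_mm, distance_m):
    """Approximate size of the spot one pixel sees at a given standoff."""
    ifov_rad = (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)
    return ifov_rad * distance_m * 1000.0

# Assumed 25 um pitch behind a 25 mm lens: IFOV = 1 mrad, so a pixel covers
# about 1 mm per meter of standoff; a 320 x 240 array then images roughly
# a 0.32 m x 0.24 m scene at 1 m.
print(f"{spot_size_mm(25, 25, 1.0):.1f} mm at 1 m")
```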
The advantage of FPAs is that no moving mechanical parts are needed and that the sensitivity and speed requirements on each detector element are relaxed. The drawback is that the detector array is more complicated to fabricate and manufacturing costs are higher. However, improvements in semiconductor fabrication practices are driving the cost down, and the general trend is for infrared camera systems to be based on FPAs, except for special applications. A microbolometer is the latest type of thermal imaging FPA and consists of materials that measure heat through a change in resistance at each pixel. The most common microbolometer material is vanadium oxide (VOx). Amorphous silicon is another relatively new microbolometer material.
Applications extend from microelectronic levels to scanning wide areas of the earth from space. Airborne systems can be used to see through smoke in forest fires. Portable, hand-held units can be used for equipment monitoring in preventative maintenance and flaw detection in nondestructive testing programs.
Equipment for Establishing Heat Flow
In some inspection applications, such as corrosion or flaw detection, the components being inspected may be at ambient temperature, and heat flow must be created. This can be done in a variety of ways. The part can be heated by placing it in a warm environment, such as a furnace, or by directing heat onto the surface with a heat gun or flash lamps. Alternately, the component can be cooled by placing it in a cold environment or by chilling the surface with a spray of cold liquid or gas.
Image Capturing and Analysis
IR cameras alone or used with an external heat source can often detect large, near-surface flaws. However, repeatable, quantifiable detection of deeper, subtler features requires the additional sensitivity of a sophisticated computerized system. In these systems, a computer is used to capture a number of time sequence images which can be stepped through or viewed as a movie to evaluate the thermal changes in an object as a function of time. This technique is often referred to as thermal wave imaging.
The image to the right shows a pulsed thermography system. This system uses a closely controlled burst of thermal energy from a xenon flash lamp to heat the surface. The dissipation of heat is then tracked using a high-speed thermal imaging camera, which sits on top of the gray box in the foreground. The gray box houses the xenon flash lamp and is held against the surface being inspected. The equipment was designed to inspect the fuselage skins of aircraft for corrosion damage and can make quantitative measurements of material loss. It has also been shown to detect areas of water incursion in composites and areas where bonded structures have separated.
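The timing demands of this technique follow from one-dimensional heat diffusion: a feature at depth L beneath the surface makes itself felt after roughly one characteristic diffusion time, t ≈ L²/α. The back-of-envelope sketch below uses assumed textbook diffusivity values, so the numbers are order-of-magnitude only.

```python
# Assumed thermal diffusivities (m^2/s), typical textbook-order figures.
DIFFUSIVITY_M2_S = {
    "aluminum": 8.4e-5,
    "steel": 1.2e-5,
    "CFRP (through-thickness)": 4e-7,
}

def diffusion_time_s(depth_m, alpha_m2_s):
    """Characteristic time for heat to diffuse to/from a given depth."""
    return depth_m**2 / alpha_m2_s

for material, alpha in DIFFUSIVITY_M2_S.items():
    t = diffusion_time_s(1e-3, alpha)  # a feature 1 mm below the surface
    print(f"{material}: ~{t * 1000:.0f} ms")
# ~12 ms for aluminum: hence the need for a short, well-controlled flash
# and a high-speed IR camera when inspecting thin metallic skins.
```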
Image Interpretation
Most thermal imagers produce a video output in which white indicates areas of maximum radiated energy and black indicates areas of lower radiation. The gray scale image contains the maximum amount of information. However, in order to ease general interpretation and facilitate subsequent presentation, the thermal image can be artificially colorized. This is achieved by allocating desired colors to blocks of gray levels to produce the familiar colorized images. This makes image interpretation easier for the untrained observer. Additionally, by choosing the correct colorization palette, the image may be enhanced to show particular energy levels in detail.

Many thermal imaging applications are qualitative in nature. The inspection simply involves comparing the temperatures at various locations within the field of view. The effects of the sun, shadows, moisture and subsurface detail must all be taken into account when interpreting the image, but this type of inspection is straightforward. However, great care must be exercised when using an infrared imager to make quantitative temperature measurements. As mentioned previously, the amount of infrared radiation emitted from a surface depends partly upon the emissivity of that surface. Accurate assessment of surface emissivity is required to acquire meaningful quantitative results.
Techniques and Select Industrial Applications of Thermal Imaging
Some thermal imaging techniques simply involve pointing a camera at a component and looking at areas of uneven heating or localized hot spots. The first two example applications discussed below fall into this category. For other applications, it may be necessary to generate heat flow within the component and/or evaluate heat flow as a function of time. A variety of thermal imaging techniques have been developed to provide the desired information. A few of these techniques are highlighted below.
Electrical and Mechanical System Inspection
Electrical and mechanical systems are the backbone of many manufacturing operations. An unexpected shutdown of even a minor piece of equipment could have a major impact on production. Since nearly everything gets hot before it fails, thermal inspection is a valuable and cost-effective diagnostic tool with many industrial applications.
With an infrared camera, an inspector can see a change in temperature relative to the surrounding area, identify whether or not it is abnormal, and predict the possible failure. Applications for infrared testing include locating loose electrical connections, failing transformers, improper bushing and bearing lubrication, overloaded motors or pumps, coupling misalignment, and other situations where a change in temperature indicates an undesirable condition. Since typical electrical failures occur when there is a temperature rise of over 50°C, problems can be detected well in advance of a failure.
The image on the right above shows three electrical connections. The middle connection is hotter than the others. Connections can become hot if they are loose or if corrosion causes an increase in the electrical resistance.
Electronic Component Inspection
In electronics design and manufacturing, a key reliability factor is semiconductor junction temperature. During operation, a semiconductor generates heat, which flows away from the component in all directions but particularly well along thermally conductive connectors. This leads to an increase in temperature at the junctions where the semiconductor attaches to the board. Components with high junction temperatures typically have shorter life spans. Thermal imaging can be used to evaluate the dissipation of heat and measure the temperature at the junctions.
Corrosion Damage (Metal Thinning)
IR techniques can be used to detect material thinning of relatively thin structures since areas with different thermal masses will absorb and radiate heat at different rates. In relatively thin, thermally conductive materials, heat is conducted away from the surface faster by thicker regions. By heating the surface and monitoring its cooling characteristics, a thickness map can be produced. Thin areas may be the result of corrosion damage on the backside of a structure, which is normally not visible. The image to the right shows corrosion damage and disbonding of a tear strap/stringer on the inside surface of an aircraft skin. This type of damage is costly to detect visually because a great deal of the interior of the aircraft must be disassembled. With IR techniques, the damage can be detected from the outside of the aircraft.
Flaw Detection
Infrared techniques can be used to detect flaws in materials or structures. The inspection technique monitors the flow of heat from the surface of a solid, and this flow is affected by internal flaws such as disbonds, voids or inclusions. In sound material, a good weld, or a solid bond, heat dissipates rapidly through the material, whereas a defect retains the heat for longer.
A technique called vibrothermography or thermosonic testing was recently introduced by researchers at Wayne State University for the detection of cracks. A solid sample is excited with bursts of high-energy, low-frequency acoustic energy. This causes frictional heating at the faces of any cracks present, and the resulting hot spots are detected by an infrared camera.
Despite the apparent simplicity of the scheme, there are a number of experimental considerations that can complicate the implementation of the technique. Factors including acoustic horn location, horn-crack proximity, horn-sample coupling, and effective detection range all significantly affect the degree of excitation that occurs at a crack site for a given energy input.
Below are two images from an IR camera showing a 0.050" thick 7075 aluminum plate sample with a prefabricated crack being inspected using a commercial vibrothermography system. The image on the left is the IR image with a pre-excitation image subtracted. A crack can be seen in the middle of the sample and just to the right of the ultrasonic horn. Also seen is heating due to the horn tip, friction at various clamping sites, and reflection from the hole at the right edge of the sample. The image on the right is the same data with image processing performed to make the crack indication easier to distinguish.