If the forces acting on a rigid body do not produce any acceleration, they must neutralize each other, i.e., form a system of forces in equilibrium.

Equilibrium is said to be stable when the body, with the forces acting upon it, returns to its original position after being displaced a very small amount from that position; unstable when the body tends to move still farther from its original position after such a small displacement; and neutral when the forces retain their equilibrium in the body's new position.

External and Internal Forces
The forces by which the individual particles of a body act on each other are known as internal forces. All other forces are called external forces.

If a body is supported by other bodies while subject to the action of forces, deformations and forces will be produced at the points of support or contact. These internal forces are distributed throughout the body until equilibrium exists, and the body is then said to be in a state of tension, compression, or shear.

The forces exerted by the body on the supports are known as reactions. They are equal in magnitude and opposite in direction to the forces with which the supports act on the body, known as supporting forces.

The supporting forces are external forces applied to the body. In considering a body at a definite section, it will be found that all the internal forces act in pairs, the two forces being equal and opposite. The external forces act singly.

General Law
When a body is at rest, the forces acting externally on it must form an equilibrium system. This law holds for any part of the body, in which case the forces acting at any section of the body become external forces when the part on either side of the section is considered alone.

In the case of a rigid body, any two forces of the same magnitude, but acting in opposite directions in any straight line, may be added or removed without change in the action of the forces acting on the body, provided the strength of the body is not affected.


An engineer must routinely assure that designs will endure anticipated loading histories with no significant change in geometry or loss in load-carrying capability. Anticipating service-load histories can require experience and/or testing. Techniques for load estimation are as diverse as any other aspect of the design process.

The design or allowable stress is generally defined as the tensile or compressive strength (yield point or ultimate, depending on the type of loading) divided by the safety factor. In fatigue, the appropriate safety factor is based on the number of cycles.

Also, when wear, creep, or deflection is to be limited to a prescribed value during the life of the machine element, the design stress can be based on values different from those above.

The magnitude of the design factor of safety, a number greater than unity, depends upon the application and the uncertainties associated with a particular design. In the determination of the factor of safety, the following should be considered:

1. The possibility that failure of the machine element may cause injury or loss of human life

2. The possibility that failure may result in costly repairs

3. The uncertainty of the loads encountered in service

4. The uncertainty of material properties

5. The assumptions made in the analysis and the uncertainties in the determination of the stress-concentration factors and stresses induced by sudden impact and repeated loads

6. The knowledge of the environmental conditions to which the part will be subjected

7. The knowledge of stresses which will be introduced during fabrication (e.g., residual stresses), assembly, and shipping of the part

8. The extent to which the part can be weakened by corrosion

Many other factors obviously exist. Typical values of design safety factors range from 1.0 (against yield) in the case of aircraft, to 3 in typical machine-design applications, to approximately 10 in the case of some pressure vessels.

It should be noted that these safety factors are used to compute the allowable stresses and are not a substitute for the stress-concentration factors used to compute stresses in service.
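The allowable-stress calculation described above can be sketched as follows (Python; the strength value and safety factor are illustrative assumptions, not values from the text):

```python
# Hedged sketch: allowable (design) stress = material strength / safety factor.
# The 36,000 psi yield strength and factor of 3 below are assumed example values.

def design_stress(strength_psi: float, safety_factor: float) -> float:
    """Allowable stress from a strength (yield or ultimate, per loading type)."""
    if safety_factor < 1.0:
        raise ValueError("factor of safety should not be less than unity")
    return strength_psi / safety_factor

# Example: 36,000 psi yield strength with a typical machine-design factor of 3
sigma_allow = design_stress(36_000.0, 3.0)
print(sigma_allow)  # 12000.0 psi
```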


The design of a component implies a design framework and a design process. A typical design framework requires consideration of the following factors: component function and performance, producibility and cost, safety, reliability, packaging, and operability and maintainability.

The designer should assess the consequences of failure and the normal and abnormal conditions, loads, and environments to which the component may be subjected during its operating life.

On the basis of the requirements specified in the design framework, a design process is established which may include the following elements: conceptual design and synthesis, analysis and gathering of relevant data, optimization, design and test of prototypes, further optimization and revision, final design, and monitoring of component performance in the field.

When loads are imposed on an engineering component, stresses and strains develop throughout. Many analytical techniques are available for estimating the state of stress and strain in a component.

A comprehensive treatment of this subject is beyond the scope of this chapter. However, the topic is overviewed for engineering design situations.

An engineering definition of “stress” is the force acting over an infinitesimal area. “Strain” refers to the localized deformation associated with stress. There are several important practical aspects of stress in an engineering component:

1. A state of stress-strain must be associated with a particular location on a component.

2. A state of stress-strain is described by stress-strain components, acting over planes.

3. A well-defined coordinate system must be established to properly analyze stress-strain.

4. Stress components are either normal (pulling planes of atoms apart) or shear (sliding planes of atoms across each other).

5. A stress state can be uniaxial, but strains are usually multiaxial (due to the effect described by Poisson’s ratio).
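Points 4 and 5 can be illustrated numerically (Python sketch; the material properties are assumed values typical of steel, not figures from the text):

```python
# Illustrative sketch: a uniaxial normal stress produces multiaxial strain
# through Poisson's ratio. E = 30e6 psi and nu = 0.3 are assumed steel values.

def uniaxial_state(force_lbf, area_in2, modulus_psi, poisson_ratio):
    """Return (normal stress, axial strain, lateral strain) for a uniaxial load."""
    sigma = force_lbf / area_in2              # normal stress: pulls planes of atoms apart
    eps_axial = sigma / modulus_psi           # Hooke's law in the load direction
    eps_lateral = -poisson_ratio * eps_axial  # contraction in the transverse directions
    return sigma, eps_axial, eps_lateral

sigma, ea, el = uniaxial_state(10_000.0, 1.0, 30.0e6, 0.3)
```

Even though only one stress component is nonzero, all three normal strain components are nonzero, which is the effect point 5 describes.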

The Malcolm Baldrige National Quality Award (MBNQA) Basic Information And Tutorials

The criteria for the MBNQA ask companies to describe how new products are designed, and to describe how production processes are designed, implemented, and improved. Regarding design processes, the criteria further ask “how design and production processes are coordinated to ensure trouble-free introduction and delivery of products.”

The winners of the MBNQA and other world-class companies have very specific processes for product design and product production. Most have an integrated product and process design process that requires early estimates of manufacturability. Following the Six Sigma methodology will enable design teams to estimate the quantitative measure of manufacturability.

What is the Malcolm Baldrige National Quality Award?
Congress established the award program in 1987 to recognize U.S. companies for their achievements in quality and business performance and to raise awareness about the importance of quality and performance excellence as a competitive edge. The award is not given for specific products or services.

Two awards may be given annually in each of three categories: manufacturing, service, and small business. While the Baldrige Award and the Baldrige winners are the very visible centerpiece of the U.S. quality movement, a broader national quality program has evolved around the award and its criteria.

A report, Building on Baldrige: American Quality for the 21st Century, by the private Council on Competitiveness, states, “More than any other program, the Baldrige Quality Award is responsible for making quality a national priority and disseminating best practices across the United States.”

The U.S. Commerce Department’s National Institute of Standards and Technology (NIST) manages the award in close cooperation with the private sector.

How does the Baldrige Award differ from ISO 9000?
The purpose, content, and focus of the Baldrige Award and ISO 9000 are very different. Congress created the Baldrige Award in 1987 to enhance U.S. competitiveness. The award program promotes quality awareness, recognizes quality achievements of U.S. companies, and provides a vehicle for sharing successful strategies.

The Baldrige Award criteria focus on results and continuous improvement. They provide a framework for designing, implementing, and assessing a process for managing all business operations. ISO 9000 is a series of five international standards published in 1987 by the International Organization for Standardization (ISO), Geneva, Switzerland. Companies can use the standards to help determine what is needed to maintain an efficient quality conformance system.

For example, the standards describe the need for an effective quality system, for ensuring that measuring and testing equipment is calibrated regularly, and for maintaining an adequate record-keeping system. ISO 9000 registration determines whether a company complies with its own quality system. Overall, ISO 9000 registration covers less than 10 percent of the Baldrige Award criteria.


“In 1981, Bob Galvin, then chairman of Motorola, challenged his company to achieve a tenfold improvement in performance over a five-year period. While Motorola executives were looking for ways to cut waste, an engineer by the name of Bill Smith was studying the correlation between a product’s field life and how often that product had been repaired during the manufacturing process.

In 1985, Smith presented a paper concluding that if a product was found defective and corrected during the production process, other defects were bound to be missed and found later by the customer during early use.

Additionally, Motorola was finding that best-in-class manufacturers were making products that required no repair or rework during the manufacturing process. (These were Six Sigma products.)

In 1988, Motorola won the Malcolm Baldrige National Quality Award, which set the standard for other companies to emulate. (This author had the opportunity to examine some of Motorola’s processes and products that were very near Six Sigma.

These were nearly 2,000 times better than any products or processes that we at Texas Instruments (TI) Defense Systems and Electronics Group (DSEG) had ever seen. This benchmark caused DSEG to re-examine its product design and product production processes. Six Sigma was a very important element in Motorola’s award-winning application.

TI’s DSEG continued to make formal applications to the MBNQA office and won the award in 1992. Six Sigma was a very important part of the winning application.)

As other companies studied its success, Motorola realized its strategy to attain Six Sigma could be further extended.” (Reference 3)

Galvin requested that Mikel J. Harry, then employed at Motorola’s Government Electronics Group in Phoenix, Arizona, start the Six Sigma Research Institute (SSRI), circa 1990, at Motorola’s Schaumburg, Illinois campus. With the financial support and participation of IBM, TI’s DSEG, Digital Equipment Corporation (DEC), Asea Brown Boveri Ltd. (ABB), and Kodak, the SSRI began developing deployment strategies, and advanced applications of statistical methods for use by engineers and scientists.

Six Sigma Academy President, Richard Schroeder, and Harry joined forces at ABB to deploy Six Sigma and refined the breakthrough strategy by focusing on the relationship between net profits and product quality, productivity, and costs.

The strategy resulted in a 68% reduction in defect levels and a 30% reduction in product costs, leading to $898 million in savings/cost reductions each year for two years. (Reference 13)

Schroeder and Harry established the Six Sigma Academy in 1994. Its client list includes companies such as Allied Signal, General Electric, Sony, Texas Instruments DSEG (now part of Raytheon), Bombardier, Crane Co., Lockheed Martin, and Polaroid. These companies correlate quality to the bottom line.


Reversible Process
A reversible process for a system is defined as a process that, once having taken place, can be reversed, and in so doing leaves no change in either the system or the surroundings. In other words, the system and surroundings are returned to their original condition before the process took place.

In reality, there are no truly reversible processes; however, for analysis purposes, one uses reversible processes to make the analysis simpler and to determine maximum theoretical efficiencies. Therefore, the reversible process is an appropriate starting point on which to base engineering study and calculation.

Although the reversible process can be approximated, it can never be matched by real processes. One way to make real processes approximate a reversible process is to carry out the process in a series of small or infinitesimal steps.

For example, heat transfer may be considered reversible if it occurs due to a small temperature difference between the system and its surroundings: transferring heat across a temperature difference of 0.00001 °F "appears" more reversible than transferring it across a difference of 100 °F.

Therefore, by cooling or heating the system in a number of infinitesimally small steps, we can approximate a reversible process. Although not practical for real processes, this method is beneficial for thermodynamic studies since the rate at which processes occur is not important.
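The idea that a smaller temperature difference is "more reversible" can be checked by comparing the entropy generated in each case (Python sketch; the heat quantity and Rankine temperatures are assumed illustrative values):

```python
# Sketch: entropy generated when heat Q flows across a finite temperature
# difference, S_gen = Q/T_cold - Q/T_hot. A smaller difference generates
# less entropy, i.e. the transfer is closer to reversible. Values are assumed.

def entropy_generated(q_btu, t_hot_r, t_cold_r):
    """Entropy generated (Btu/°R) for heat flowing hot -> cold (Rankine temps)."""
    return q_btu / t_cold_r - q_btu / t_hot_r

large_dT = entropy_generated(100.0, 700.0, 600.0)     # ~100 °F difference
tiny_dT = entropy_generated(100.0, 600.00001, 600.0)  # ~0.00001 °F difference
```

Both transfers generate entropy, but the tiny-difference case generates far less, approaching the reversible limit of zero.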

Irreversible Process
An irreversible process is a process that cannot return both the system and the surroundings to their original conditions. That is, the system and the surroundings would not return to their original conditions if the process was reversed.

For example, an automobile engine does not give back the fuel it took to drive up a hill as it coasts back down the hill. There are many factors that make a process irreversible.

Four of the most common causes of irreversibility are friction, unrestrained expansion of a fluid, heat transfer through a finite temperature difference, and mixing of two different substances. These factors are present in real, irreversible processes and prevent these processes from being reversible.


• The First Law of Thermodynamics states that energy can neither be created nor destroyed, only altered in form.

• In analyzing an open system using the First Law of Thermodynamics, the energy into the system is equal to the energy leaving the system.

• If the fluid passes through various processes and then eventually returns to the same state it began with, the system is said to have undergone a cyclic process. The first law is used to analyze a cyclic process.

• The energy entering any component is equal to the energy leaving that component at steady state.

• The amount of energy transferred across a heat exchanger is dependent upon the temperature of the fluid entering the heat exchanger from both sides and the flow rates of these fluids.

• A T-s diagram can be used to represent thermodynamic processes.

The First Law of Thermodynamics is referred to as the Conservation of Energy principle, meaning that energy can neither be created nor destroyed, but rather transformed into various forms as the fluid within the control volume is being studied.

The energy balance spoken of here is maintained within the system being studied. The system is a region in space (control volume) through which the fluid passes.

The various energies associated with the fluid are then observed as they cross the boundaries of the system and the balance is made.

A system may be one of three types: isolated, closed, or open. The open system, the most general of the three, indicates that mass, heat, and external work are allowed to cross the control boundary.

The balance is expressed in words as: all energies into the system are equal to all energies leaving the system plus the change in storage of energies within the system.
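That word statement can be written as a simple balance (Python sketch; the energy figures are made-up illustrative numbers, not from the text):

```python
# Sketch of the First Law balance: energies in = energies out + change in storage.
# At steady state the storage term is zero. The Btu/hr figures are assumed.

def storage_change(energies_in, energies_out):
    """Change in stored energy = sum of energies in - sum of energies out."""
    return sum(energies_in) - sum(energies_out)

# Steady-state example: 5000 Btu/hr enthalpy in; 1200 Btu/hr work out
# plus 3800 Btu/hr enthalpy out
delta = storage_change([5000.0], [1200.0, 3800.0])
print(delta)  # 0.0 -> steady state, nothing accumulating in the control volume
```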


Critical Point
At a pressure of 3206.2 psia, represented by line MNO, there is no constant-temperature vaporization process. Rather, point N is a point of inflection, with the slope being zero.

This point is called the critical point, and at the critical point the saturated-liquid and saturated-vapor states are identical. The temperature, pressure, and specific volume at the critical point are called the critical temperature, critical pressure, and critical volume.

A constant pressure process greater than the critical pressure is represented by line PQ. There is no definite change in phase from liquid to vapor and no definite point at which there is a change from the liquid phase to the vapor phase.

For pressures greater than the critical pressure, the substance is usually called a liquid when the temperature is less than the critical temperature (705.47°F) and a vapor or gas when the temperature is greater than the critical temperature. In the figure, line NJFB represents the saturated liquid line, and the line NKGC represents the saturated vapor line.

Suppose the cylinder contained 1 lbm of ice at 0°F, 14.7 psia. When heat is transferred to the ice, the pressure remains constant, the specific volume increases slightly, and the temperature increases until it reaches 32°F, at which point the ice melts while the temperature remains constant.

In this state the ice is called a saturated solid. For most substances, the specific volume increases during this melting process, but for water the specific volume of the liquid is less than the specific volume of the solid.

This causes ice to float on water. When all the ice is melted, any further heat transfer causes an increase in temperature of the liquid. The process of melting is also referred to as fusion. The heat added to melt ice into a liquid is called the latent heat of fusion.

Thermodynamic Systems and Processes Basic Information And Tutorials

• A thermodynamic system is a collection of matter and space with its boundaries defined in such a way that the energy transfer across the boundaries can be best understood.

• Surroundings are everything not in the system being studied.

• Systems are classified into one of three groups:

Isolated system - neither mass nor energy can cross the boundaries
Closed system - only energy can cross the boundaries
Open system - both mass and energy can cross the boundaries

• A control volume is a fixed region of space that is studied as a thermodynamic system.

• Steady state refers to a condition where the properties at any given point within the system are constant over time. Neither mass nor energy are accumulating within the system.

• A thermodynamic process is the succession of states that a system passes through.

Processes can be described by any of the following terms:

Cyclic process - a series of processes that results in the system returning to its original state

Reversible process - a process that can be reversed resulting in no change in the system or surroundings

Irreversible process - a process that, if reversed, would result in a change to the system or surroundings

Adiabatic process - a process in which there is no heat transfer across the system boundaries

Isentropic process - a process in which the entropy of the system remains unchanged

Polytropic process - the plot of log P vs. log V is a straight line, PV^n = constant

Throttling process - a process in which enthalpy is constant (h1 = h2), no work is done (W = 0), and which is adiabatic (Q = 0)
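The polytropic relation above can be applied directly (Python sketch; the gas conditions and the exponent n = 1.3 are assumed illustrative values):

```python
# Sketch: for a polytropic process P * V**n = constant, so
# p2 = p1 * (v1 / v2) ** n. The exponent n = 1.3 is an assumed value
# roughly typical of air compression; initial conditions are illustrative.

def polytropic_pressure(p1, v1, v2, n):
    """Final pressure after a polytropic change of state from (p1, v1) to v2."""
    return p1 * (v1 / v2) ** n

p2 = polytropic_pressure(14.7, 2.0, 1.0, 1.3)  # halving the volume raises P
```

Note the special cases: n = 0 gives a constant-pressure process, n = 1 gives PV = constant (isothermal for an ideal gas), and n equal to the specific-heat ratio gives an isentropic process.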


Heat and Work
Distinction should also be made between the energy terms heat and work. Both represent energy in transition. Work is the transfer of energy resulting from a force acting through a distance.

Heat is energy transferred as the result of a temperature difference. Neither heat nor work are thermodynamic properties of a system. Heat can be transferred into or out of a system and work can be done on or by a system, but a system cannot contain or store either heat or work. Heat into a system and work out of a system are considered positive quantities.

When a temperature difference exists across a boundary, the Second Law of Thermodynamics indicates the natural flow of energy is from the hotter body to the colder body. The Second Law of Thermodynamics denies the possibility of ever completely converting into work all the heat supplied to a system operating in a cycle. 

The Second Law of Thermodynamics, described by Max Planck in 1903, states that:

It is impossible to construct an engine that will work in a complete cycle and produce no other effect except the raising of a weight and the cooling of a reservoir.

The second law says that if you draw heat from a reservoir to raise a weight, lowering the weight will not generate enough heat to return the reservoir to its original temperature, and eventually the cycle will stop. 

If two blocks of metal at different temperatures are thermally insulated from their surroundings and are brought into contact with each other, heat will flow from the hotter to the colder.

Eventually the two blocks will reach the same temperature, and heat transfer will cease. Energy has not been lost, but instead some energy has been transferred from one block to another.

Modes of Transferring Heat
Heat is always transferred when a temperature difference exists between two bodies. There are three basic modes of heat transfer:

1. Conduction involves the transfer of heat by the interactions of atoms or molecules of a material through which the heat is being transferred.

2. Convection involves the transfer of heat by the mixing and motion of macroscopic portions of a fluid.

3. Radiation, or radiant heat transfer, involves the transfer of heat by electromagnetic radiation that arises due to the temperature of a body.
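The first mode, conduction, can be sketched with Fourier's law for a plane wall (Python; the conductivity and geometry are assumed illustrative values):

```python
# Sketch: steady conduction through a plane wall, Q = k * A * (T_hot - T_cold) / L.
# The conductivity (roughly steel, Btu/hr-ft-°F) and dimensions are assumptions.

def conduction_rate(k, area, t_hot, t_cold, thickness):
    """Heat transfer rate (Btu/hr) by conduction through a plane wall."""
    return k * area * (t_hot - t_cold) / thickness

q = conduction_rate(26.0, 10.0, 500.0, 100.0, 1.0)  # k, ft^2, °F, °F, ft
```

As the equation shows, heat flow is driven by the temperature difference: when the difference is zero, conduction stops, consistent with the statement above.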

Structural Dynamics and Vibration in Practice: An Engineering Handbook

This straightforward text, primer and reference introduces the theoretical, testing and control aspects of structural dynamics and vibration, as practised in industry today.

Written by an expert engineer with over 40 years' experience, the book comprehensively opens up the dynamic behavior of structures and provides engineers and students with a practice-based understanding of the key aspects of this important engineering topic.

Key features
● Worked-example based, making it a thoroughly practical resource
● Aimed at those studying to enter, and those already working in, industry
● Presents an applied, practice- and testing-based approach while remaining grounded in the theory of the topic
● Makes the topic as easy to read as possible, omitting no steps in the development of the subject
● Includes the use of computer-based modelling techniques and finite elements
● Covers theory, modelling, testing and control in practice

Written with the needs of engineers of a wide range of backgrounds in mind, this book will be a key resource for those studying structural dynamics and vibration at undergraduate level for the first time in aeronautical, mechanical, civil and automotive engineering. It will be ideal for laboratory classes and as a primer for readers returning to the subject, or coming to it fresh at graduate level.

It is a guide for students to keep and for practicing engineers to refer to: its worked example approach ensures that engineers will turn to Thorby for advice in many engineering situations.

1. Presents students and practitioners in all branches of engineering with a unique structural dynamics resource and primer, covering practical approaches to vibration engineering while remaining grounded in the theory of the topic
2. Written by a leading industry expert, with a worked example lead approach for clarity and ease of understanding
3. Makes the topic as easy to read as possible, omitting no steps in the development of the subject; covers computer based techniques and finite elements

About the Author
A retired aeronautical engineer and former senior dynamics engineer at British Aerospace, the author has 40 years of experience of structural dynamics and vibration in the British and American aerospace industries (structural dynamicist for the US Harrier; T45 US Navy Hawk trainer; Lockheed-Martin JSF). This included five years as the UK representative on the Structures and Materials Panel of NATO's Advisory Group for Aerospace Research & Development (AGARD).



To aid in the selection of the proper belt for each application, manufacturers provide technical and performance data about their belts.

In addition, the Rubber Manufacturers’ Association (RMA) and the Mechanical Power Transmission Association (MPTA) have worked together to publish engineering standards and bulletins for most types of belts and drive hardware (see Bibliography). These publications contain information that supplements design catalogs.

There are four basic questions that need to be answered in the drive design:
1. What horsepower is required of the drive?
2. What is the speed (rpm) of the driver shaft?
3. What is the speed (rpm) of the driven shaft?
4. What is the approximate desired center distance?
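Questions 2 and 3 fix the drive's speed ratio, which (neglecting belt slip) equals the inverse ratio of the pulley diameters. A small Python sketch with assumed values:

```python
# Sketch: for a belt drive, driven rpm = driver rpm * (driver dia / driven dia),
# neglecting slip. The 1750 rpm motor speed and pulley diameters are assumed.

def driven_speed(driver_rpm, driver_dia, driven_dia):
    """Driven-shaft speed from the pulley diameter ratio (no slip assumed)."""
    return driver_rpm * driver_dia / driven_dia

rpm = driven_speed(1750.0, 4.0, 8.0)  # 4-in driver, 8-in driven: 2:1 reduction
print(rpm)  # 875.0
```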

In addition to the basic elements, there are a number of special drive characteristics that may require consideration. These might include:

● Special environmental conditions such as abrasives, chemicals, and so on.
● Overhung load (OHL) considerations for gear motors and reducers
● Driven pulley inertia (WR²) requirements for equipment such as piston compressors, crushers, and so on.
● Special drive characteristics such as shock loads, inherent misalignment, clutching requirements, and so on.

Also, while selecting and evaluating the drive, consider the following points:

● Selecting larger diameter pulleys will keep drive face width to a minimum.
● Selecting larger diameter pulleys will keep drive tensions and shaft load at a minimum.
● Larger diameter pulleys will often give a more economical drive, but should not be so large that multiple V-belt capability is sacrificed.
● If space is limited, consider using the smallest diameter drive. However, pulleys on electric motors must be at least as large as the National Electric Manufacturers Association (NEMA) minimum recommended standards.
● When the drive is between two belt cross-section sizes, the larger section will usually be more economical.

However, it is always recommended to check drives in both cross sections. Selecting an optimum belt drive involves many factors, but drive selection can be readily accomplished using manufacturers’ design literature. Many manufacturers also offer computer programs for drive design selection.


Belt drives are the most widely used method of flexible power transmission. Improvements in materials and methods of manufacture have allowed the introduction of new belts with much broader application capabilities.

Belt drives basically fit into four types: flat, V, V-ribbed, and synchronous. Although each of these basic belt types is more suitable in specific application areas, most applications can be successfully designed with more than one type of belt.

Flat belts, one of the earliest forms of flexible power transmission, are generally more suited to high speed, low-horsepower applications. At low speeds and high loads, flat-belt drives usually become too large to be cost-effective.

V belts are the most commonly used today, and they are the only belts that can be used on variable pitch and variable speed drives. V-ribbed belts are described by some as guided flat belts. Their thinner cross section makes them suitable for operation on smaller diameters at higher speeds.

Synchronous (timing) belts are specifically designed as alternatives to roller chains and gears on drives, which require exact speed ratios and synchronization between the driver and driven machines. They are also widely used today in low-maintenance, energy-efficient applications.

Flat belts are still widely used for power transmission. Their thin, flexible cross section allows them to operate over small diameters and, in some cases, at very high speeds.

Many different sizes and constructions are available for a wide variety of uses. Flat belts are made either of fully molded or woven construction and may or may not have a tensile member.

One significant disadvantage of flat belts is that they depend entirely on friction in order to transmit power. Thus they require higher belt tension to do the same work, which results in higher shaft and bearing loads. The need for higher tension may also cause more belt stretch, so flat belts slip more easily than V belts.

The problems of high tension with flat belts led to the development of V belts. Unlike flat belts, which depend only on friction, V belts have deep V-shaped cross sections that wedge into the sheave groove to provide added horsepower capacity.

Because of the wedging action, V belts are highly stable and can operate at tensions considerably lower than those needed by flat belts. Thus, V-belt drives can be more compact and allow for smaller shafts and bearings.


To select the proper shaft, determine the following items:
● Torque requirement
● Shaft radius of curvature
● Length of path between driving and driven systems
● Operating speed
● Acceptable torsional deflection or backlash

Determining the torque requirement on a shaft may not be easy. Remote-control shaft requirements can be determined by attaching a lever of known length to the device to be turned and pulling the end of the lever with a spring scale. Use the highest torque as the required torque.

Measuring torque on a power shaft is more difficult because the requirement is dynamic. The best way is to instrument the load with a torque cell and measure the torque under operating conditions. Component manufacturers usually have these data available. A shaft efficiency of about 90 percent should be attainable for most applications; therefore, increasing the output torque by about 10 percent gives the required input torque.
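The lever-and-spring-scale measurement and the 90 percent efficiency allowance described above can be combined as follows (Python sketch; the lever length and scale reading are made-up example values):

```python
# Sketch: measured output torque = scale force * lever length; dividing by an
# assumed 90 percent shaft efficiency gives the input-side torque requirement.

def required_input_torque(lever_in, scale_lbf, efficiency=0.90):
    """Input torque (lbf-in) for a measured output torque and shaft efficiency."""
    output_torque = scale_lbf * lever_in  # force x lever arm
    return output_torque / efficiency

t_in = required_input_torque(10.0, 9.0)  # 90 lbf-in output -> 100 lbf-in input
print(t_in)  # 100.0
```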

The shaft path should then be examined. A drawing should be made that shows the path in a true view, and the minimum operating radius (MOR) should be determined from this drawing. A full-scale prototype can be used. The length of the shaft can also be determined at this time.

For power shafts, the operating speed is generally given. For remote-control shafts, in which the speed is essentially zero, the amount of torsional deflection acceptable between the turning device and the turned device is the important consideration.

Used intelligently, flexible shafts are the product designer’s allies. They are more flexible than universal joints; they are more versatile than gear systems because they are totally unaffected by the exact angle or offset necessary; finally, flexible shafts offer an inherent shock absorption capability, ease of installation, and maintenance unmatched by other forms of rotary-motion transmission.

However, flexible shafts do have to be treated carefully if they are to provide the service life built into them. They cannot be bent completely out of shape and must be used at radii equal to or greater than the MOR specified by the manufacturer.

They should be secured approximately every 18 in (46 cm) to prevent the possibility of helixing. They must be lightly lubricated as preventive maintenance at regular intervals. Replacement must match the original design because a remote-control shaft is not interchangeable with a power shaft.

The type of service expected must be clearly specified. Two differently built shafts, even of the same diameter and length, are not interchangeable. Above all, flexible shafts must be designed for the task they are to perform. For this reason, early consultation between the user’s engineering department and the shaft manufacturer is highly recommended.

Follow these simple, basic suggestions, and the design flexibility of a rotary-motion flexible shaft will be applied to utmost advantage.


Since the lubricant affects bearing life and operation, selecting the proper lubricant is an important design function. The purpose of lubrication in bearing applications is to:

1. Minimize friction at points of contact within the bearings
2. Protect the highly finished bearing surfaces from corrosion
3. Dissipate heat generated within the bearings
4. Remove foreign matter or prevent its entry into the bearings

Two basic types of lubricants used with antifriction bearings are oils and greases. Each has its advantages and limitations.

Since oil is a liquid, it lubricates all surfaces and is able to dissipate heat from these surfaces more readily. Because oil retains its physical characteristics over a wider range of temperatures, it may be used for high-speed and high-temperature applications.

The quantity of oil supplied to the bearing may be accurately controlled. Oil lubricants can be circulated, cleaned, and cooled for more effective lubrication. Grease, which is easier to retain in the bearing housing, acts as a sealant against foreign matter and corrosive fumes.

Bearing Oil Lubrication

1. Oil is a better lubricant for high speeds or high temperatures. It can be cooled to help reduce bearing temperature.

2. Oil is easier to handle, and with oil it is easier to control the amount of lubricant reaching the bearing. However, it is harder to retain in the bearing, and lubricant losses may be higher than with grease.

3. As a liquid, oil can be introduced into the bearing in many ways, such as drip feed, wick feed, pressurized circulating systems, oil bath, or air-oil mist. Each is suited to certain types of applications.

4. Oil is easier to keep clean for recirculating systems.

Bearing Grease Lubrication

1. This type of lubrication is restricted to lower-speed applications within the operating-temperature limits of the grease.

2. Grease is easily confined in the housing. This is important in the food, textile, and chemical industries.

3. Bearing enclosure and seal design is simplified.

4. Grease improves the efficiency of mechanical seals to give better protection to the bearing.

For all new applications, a competent bearing or lubrication engineer should be consulted to recommend the specific lubricant and method of lubrication for the specific bearing’s operating and ambient conditions.


The standard materials for ball and roller bearings are usually AISI 52100 or equivalent “ball bearing steel” or case-carburized steels. Industrial demands for special bearings to meet abnormal service requirements spur the continual search for new and improved bearing materials.

High temperatures, corrosive atmospheres, massive size, marginal lubrication, complex design, and space and weight limitations are typical abnormal requirements.

Conventional bearing steels are often inadequate when these problems are present. Sustained high operating temperatures reduce hardness, wear resistance, yield strength, and, therefore, bearing life. Conventional bearing steels also lack resistance to the oxidation that takes place at elevated temperatures.

Materials such as 440-C stainless or corrosion-resistant coated steels may be required for severely adverse environments. For extremely high speeds and high temperatures, special alloy steels and materials such as metallic carbides and ceramics are used.

The combination of bearing size, complexity of design, and space and weight limitations can be a governing factor in the selection of bearing material. For example, a large-diameter, thin-section bearing with integral gear teeth and bolt holes would require a material that could be selectively hardened.

Although Monel and beryllium copper are not as hardenable as bearing steels, they are nonmagnetic and resist saltwater corrosion. These qualities make them excellent materials for marine applications.

Although low-carbon steels are the most popular, a variety of other materials is also used for bearing cages. Molded nylon, synthetic resin-impregnated fabrics, and bronze/brass are popular in the normal temperature ranges.

High-performance polymers, carbon steel, certain stainless steels, and iron-silicon bronze are used for higher temperatures.


The key element in water jet machining (WJM) is a water jet, which travels at velocities as high as 900 m/s (approximately Mach 3). When the stream strikes a workpiece surface, the erosive force of water removes the material rapidly. The water, in this case, acts like a saw and cuts a narrow groove in the workpiece material. The advantages of WJM include the following:

# It has multidirectional cutting capacity.
# No heat is produced.
# Cuts can be started at any location without the need for predrilled holes.
# Wetting of the workpiece material is minimal.
# There is no deflection to the rest of the workpiece.
# The burr produced is minimal.
# The tool does not wear and, therefore, does not need sharpening.
# The process is environmentally safe.
# Hazardous airborne dust contamination and waste disposal problems that are common when using other cleaning methods are eliminated.
# There is multiple head processing.
# Simple fixturing eliminates costly and complicated tooling, which reduces turnaround time and lowers the cost.
# Grinding and polishing are eliminated, reducing secondary operation costs.
# The narrow kerf allows tight nesting when multiple parts are cut from a single blank.
# It is ideal for roughing out material for near net shape.
# It is ideal for laser reflective materials such as copper and aluminum.
# It allows for more accurate cutting of soft material.
# It cuts through very thick material such as 383 mm in titanium and 307 mm in Inconel.

The disadvantages of WJM include the following:

# Hourly rates are relatively high.
# It is not suitable for mass production because of high maintenance requirements.


Purpose of Piston Rings

Piston rings serve three important functions:

(a) They provide a seal between the piston and the cylinder wall to prevent the high-pressure combustion gases from leaking past the piston into the crankcase. This leakage is referred to as blow-by.

Blow-by is detrimental to engine performance because the expanding gases will merely bypass the piston rather than push it down. It also contaminates the lubricating oil.

(b) They prevent the lubricating oil from bypassing the piston and getting into the combustion chamber from the crankcase.

(c) They provide a solid bridge to conduct the heat from the piston to the cylinder wall. About one third of the heat absorbed by the piston passes to the cylinder wall through the piston rings.

Piston rings are split to allow for installation and expansion, and they exert an outward pressure on the cylinder wall when installed. They fit into grooves that are cut into the piston, and are allowed to float freely in these grooves.

A properly formed piston ring, working in a cylinder that is within limits for roundness and size, will exert an even pressure and maintain a solid contact with the cylinder wall around its entire circumference.

Although piston rings have been made from many materials, cast iron has proved most satisfactory as it withstands heat, forms a good wearing surface, and retains a greater amount of its original elasticity after considerable use.

There are two basic classifications of piston rings.

a. The Compression Ring. The compression ring seals the force of the exploding mixture into the combustion chamber.

b. The Oil Control Ring. The oil control ring prevents the engine's lubricating oil from getting into the combustion chamber.

Piston rings are arranged on the pistons in three basic configurations. They are:

(a) The three-ring piston has two compression rings near the head, followed by one oil control ring. This is the most common piston ring configuration.

(b) The four-ring piston has three compression rings near the head, followed by one oil control ring. This configuration is common in diesel engines because they are more prone to blow-by, due to the much higher pressures generated during the power stroke.

(c) The four-ring piston has two compression rings near the head, followed by two oil control rings. The bottom oil control ring may be located above or below the piston pin.

This is not a very common configuration in current engine design. In addition to the configurations mentioned, there are some diesel engines that use five or more piston rings on each piston to control the higher operating pressures.


Energy can exist in numerous forms such as thermal, mechanical, kinetic, potential, electrical, magnetic, chemical, and nuclear, and their sum constitutes the total energy E (or e on a unit mass basis) of a system. The forms of energy related to the molecular structure of a system and the degree of the molecular activity are referred to as the microscopic energy.

The sum of all microscopic forms of energy is called the internal energy of a system, and is denoted by U (or u on a unit mass basis). The SI unit of energy is the joule (J); the kilojoule (1 kJ = 1000 J) is also common. In the English system, the unit of energy is the British thermal unit (Btu), which is defined as the energy needed to raise the temperature of 1 lbm of water at 60°F by 1°F.

The magnitudes of 1 kJ and 1 Btu are almost identical (1 Btu = 1.055056 kJ). Another well-known unit of energy is the calorie (1 cal = 4.1868 J), which is defined as the energy needed to raise the temperature of 1 gram of water at 14.5°C by 1°C.
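These conversion factors lend themselves to a small helper; this is an illustrative sketch (function names are ours) using the constants quoted above:

```python
KJ_PER_BTU = 1.055056   # 1 Btu = 1.055056 kJ
J_PER_CAL = 4.1868      # 1 cal = 4.1868 J

def btu_to_kj(btu):
    """Convert British thermal units to kilojoules."""
    return btu * KJ_PER_BTU

def cal_to_j(cal):
    """Convert calories to joules."""
    return cal * J_PER_CAL

# The near-identical magnitudes of the Btu and the kJ show up directly:
print(btu_to_kj(1))   # ~1.055 kJ per Btu
print(cal_to_j(1))    # ~4.19 J per calorie
```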

Internal energy may be viewed as the sum of the kinetic and potential energies of the molecules. The portion of the internal energy of a system associated with the kinetic energy of the molecules is called sensible energy or sensible heat.

The average velocity and the degree of activity of the molecules are proportional to the temperature. Thus, at higher temperatures the molecules will possess higher kinetic energy, and as a result, the system will have a higher internal energy.

The internal energy is also associated with the intermolecular forces between the molecules of a system. These are the forces that bind the molecules to each other, and, as one would expect, they are strongest in solids and weakest in gases.

If sufficient energy is added to the molecules of a solid or liquid, they will overcome these molecular forces and simply break away, turning the system to a gas. This is a phase change process and because of this added energy, a system in the gas phase is at a higher internal energy level than it is in the solid or the liquid phase. The internal energy associated with the phase of a system is called latent energy or latent heat.

The changes mentioned above can occur without a change in the chemical composition of a system. Most heat transfer problems fall into this category, and one does not need to pay any attention to the forces binding the atoms in a molecule together.

The internal energy associated with the atomic bonds in a molecule is called chemical (or bond) energy, whereas the internal energy associated with the bonds within the nucleus of the atom itself is called nuclear energy. The chemical and nuclear energies are absorbed or released during chemical or nuclear reactions, respectively.

In the analysis of systems that involve fluid flow, we frequently encounter the combination of properties u and Pv. For the sake of simplicity and convenience, this combination is defined as enthalpy h.
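Written out, on a unit-mass basis and for the system as a whole, the definition is:

```latex
h = u + Pv, \qquad H = U + PV
```

where P is the pressure and v the specific volume, so h carries the same units as u (kJ/kg or Btu/lbm).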


Mild steel is an excellent structural material - cheap, easily formed and strong mechanically. But at low temperatures it rusts, and at high, it oxidises rapidly. There is a demand, for applications ranging from kitchen sinks via chemical reactors to superheater tubes, for a corrosion-resistant steel.

In response to this demand, a range of stainless irons and steels has been developed. When mild steel is exposed to hot air, it oxidises quickly to form FeO (or higher oxides). The remedy is to alloy the steel with an element that oxidises in preference to iron and forms a protective oxide film.

A considerable quantity of this foreign element is needed to give adequate protection. The best is chromium, 18% of which gives a very protective oxide film: it cuts down the rate of attack at 900°C, for instance, by more than 100 times.

Other elements, when dissolved in steel, cut down the rate of oxidation, too. Al2O3 and SiO2 both form in preference to FeO and give protective films. Thus 5% Al dissolved in steel decreases the oxidation rate by 30 times, and 5% Si by 20 times.

The same principle can be used to impart corrosion resistance to other metals. We shall discuss nickel and cobalt in the next case study - they can be alloyed in this way. So, too, can copper; although it will not dissolve enough chromium to give a good Cr2O3 film, it will dissolve enough aluminium, giving a range of stainless alloys called 'aluminium bronzes'.

Even silver can be prevented from tarnishing (reaction with sulphur) by alloying it with aluminium or silicon, giving protective Al2O3 or SiO2 surface films. And archaeologists believe that the Delhi Pillar – an ornamental pillar of cast iron which has stood, uncorroded, for some hundreds of years in a particularly humid spot - survives because the iron has some 6% silicon in it.

Ceramics themselves are sometimes protected in this way. Silicon carbide, SiC, and silicon nitride, Si3N4, both have large negative energies of oxidation (meaning that they oxidise easily). But when they do, the silicon in them turns to SiO2, which quickly forms a protective skin and prevents further attack.

This protection-by-alloying has one great advantage over protection by a surface coating (like chromium plating or gold plating): it repairs itself when damaged. If the protective film is scored or abraded, fresh metal is exposed, and the chromium (or aluminium or silicon) it contains immediately oxidises, healing the break in the film.


The members of a class have features in common: similar properties, similar processing routes, and, often, similar applications.

Metals have relatively high moduli. They can be made strong by alloying and by mechanical and heat treatment, but they remain ductile, allowing them to be formed by deformation processes.

Certain high-strength alloys (spring steel, for instance) have ductilities as low as 2%, but even this is enough to ensure that the material yields before it fractures and that fracture, when it occurs, is of a tough, ductile type. Partly because of their ductility, metals are prey to fatigue and of all the classes of material, they are the least resistant to corrosion.

Ceramics and glasses, too, have high moduli, but, unlike metals, they are brittle. Their ‘strength’ in tension means the brittle fracture strength; in compression it is the brittle crushing strength, which is about 15 times larger.

And because ceramics have no ductility, they have a low tolerance for stress concentrations (like holes or cracks) or for high contact stresses (at clamping points, for instance). Ductile materials accommodate stress concentrations by deforming in a way which redistributes the load more evenly; and because of this, they can be used under static loads within a small margin of their yield strength. Ceramics and glasses cannot.

Brittle materials always have a wide scatter in strength and the strength itself depends on the volume of material under load and the time for which it is applied. So ceramics are not as easy to design with as metals. Despite this, they have attractive features.

They are stiff, hard and abrasion-resistant (hence their use for bearings and cutting tools); they retain their strength to high temperatures; and they resist corrosion well. They must be considered as an important class of engineering material.

Polymers and elastomers are at the other end of the spectrum. They have moduli which are low, roughly 50 times less than those of metals, but they can be strong - nearly as strong as metals. A consequence of this is that elastic deflections can be large.

They creep, even at room temperature, meaning that a polymer component under load may, with time, acquire a permanent set. And their properties depend on temperature so that a polymer which is tough and flexible at 20°C may be brittle at the 4°C of a household refrigerator, yet creep rapidly at the 100°C of boiling water.

None have useful strength above 200°C. If these aspects are allowed for in the design, the advantages of polymers can be exploited. And there are many. When combinations of properties, such as strength per unit weight, are important, polymers are as good as metals.

They are easy to shape: complicated parts performing several functions can be moulded from a polymer in a single operation. The large elastic deflections allow the design of polymer components which snap together, making assembly fast and cheap. And by accurately sizing the mould and pre-colouring the polymer, no finishing operations are needed. Polymers are corrosion resistant, and they have low coefficients of friction. Good design exploits these properties.

Composites combine the attractive properties of the other classes of materials while avoiding some of their drawbacks. They are light, stiff and strong, and they can be tough. Most of the composites at present available to the engineer have a polymer matrix - epoxy or polyester, usually – reinforced by fibres of glass, carbon or Kevlar.

They cannot be used above 250°C because the polymer matrix softens, but at room temperature their performance can be outstanding. Composite components are expensive and they are relatively difficult to form and join. So despite their attractive properties the designer will use them only when the added performance justifies the added cost.



A major goal is to help you learn how to solve engineering problems that involve thermodynamic principles. To maximize the results of your efforts, it is necessary to develop a systematic approach. You must think carefully about your solutions and avoid the temptation of starting problems in the middle by selecting some seemingly appropriate equation, substituting in numbers, and quickly “punching up” a result on your calculator.

Such a haphazard problem-solving approach can lead to difficulties as problems become more complicated. Accordingly, we strongly recommend that problem solutions be organized using the five steps in the box below.

1. Known:
State briefly in your own words what is known. This requires that you read the problem carefully and think about it.

2. Find:
State concisely in your own words what is to be determined.

3. Schematic and Given Data:
Draw a sketch of the system to be considered. Decide whether a closed system or control volume is appropriate for the analysis, and then carefully identify the boundary. Label the diagram with relevant information from the problem statement.

Record all property values you are given or anticipate may be required for subsequent calculations. Sketch appropriate property diagrams, locating key state points and indicating, if possible, the processes executed by the system.

The importance of good sketches of the system and property diagrams cannot be overemphasized. They are often instrumental in enabling you to think clearly about the problem.

4. Assumptions:
To form a record of how you model the problem, list all simplifying assumptions and idealizations made to reduce it to one that is manageable. Sometimes this information also can be noted on the sketches of the previous step.

5. Analysis:
Using your assumptions and idealizations, reduce the appropriate governing equations and relationships to forms that will produce the desired results. It is advisable to work with equations as long as possible before substituting numerical data.

When the equations are reduced to final forms, consider them to determine what additional data may be required. Identify the tables, charts, or property equations that provide the required values. Additional property diagram sketches may be helpful at this point to clarify states and processes.

When all equations and data are in hand, substitute numerical values into the equations. Carefully check that a consistent and appropriate set of units is being employed. Then perform the needed calculations.

Finally, consider whether the magnitudes of the numerical values are reasonable and the algebraic signs associated with the numerical values are correct.

Indeed, as a particular solution evolves you may have to return to an earlier step and revise it in light of a better understanding of the problem. For example, it might be necessary to add or delete an assumption, revise a sketch, determine additional property data, and so on.


1-15C The radiator should be analyzed as an open system since mass is crossing the boundaries of the system.

1-16C A can of soft drink should be analyzed as a closed system since no mass is crossing the boundaries
of the system.

1-17C Intensive properties do not depend on the size (extent) of the system but extensive properties do.

1-18C For a system to be in thermodynamic equilibrium, the temperature has to be the same throughout
but the pressure does not. However, there should be no unbalanced pressure forces present. The increasing
pressure with depth in a fluid, for example, should be balanced by increasing weight.

1-19C A process during which a system remains almost in equilibrium at all times is called a quasi-equilibrium process. Many engineering processes can be approximated as quasi-equilibrium. The work output of a device is maximum and the work input to a device is minimum when quasi-equilibrium processes are used instead of non-quasi-equilibrium processes.

1-20C A process during which the temperature remains constant is called isothermal; a process during
which the pressure remains constant is called isobaric; and a process during which the volume remains
constant is called isochoric.

1-21C The state of a simple compressible system is completely specified by two independent, intensive properties.

1-22C Yes, because temperature and pressure are two independent properties and the air in an isolated
room is a simple compressible system.

1-23C A process is said to be steady-flow if it involves no changes with time anywhere within the system
or at the system boundaries.

1-24C The specific gravity, or relative density, is defined as the ratio of the density of a substance to the density of some standard substance at a specified temperature (usually water at 4°C, for which ρH2O = 1000 kg/m³). That is, SG = ρ/ρH2O. When the specific gravity is known, the density is determined from ρ = SG × ρH2O.
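The 1-24C relations translate directly into code; a minimal sketch (function names are ours):

```python
RHO_H2O = 1000.0  # kg/m^3, density of water at 4 deg C (standard substance)

def density_from_sg(sg):
    """rho = SG x rho_H2O."""
    return sg * RHO_H2O

def sg_from_density(rho):
    """SG = rho / rho_H2O."""
    return rho / RHO_H2O

print(density_from_sg(13.6))   # mercury (SG ~ 13.6) -> 13600 kg/m^3
print(sg_from_density(790.0))  # a 790 kg/m^3 liquid -> SG ~ 0.79
```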


Factors in Battery-Powered-Vehicle Selection and Use

Battery-Electric Equipment. 
This is mechanically simpler in design than engine-driven equipment. Typically, the high-torque dc electric drive motor is coupled directly to the drive axle through a constant-mesh drive train.

An electronic silicon-controlled rectifier (SCR) speed-control device regulates the motor’s revolutions per minute through operator foot control. Direction is reversed electrically, with a delay interlock to avoid reversing motor direction while in motion.

Storage Batteries.
These must be replenished frequently either by recharging or by exchanging them for fully charged batteries. Batteries used in a given piece of equipment should provide ample power to operate effectively for an 8-h day, as determined by their ampere-hour (Ah) ratings.

The Ah rating, to some degree, limits the effective operating range of battery-operated equipment and requires that routine schedules for replenishment are followed. Also, because of the weight of a large storage battery, equipment application is sometimes adversely limited.

Advantages of Battery Vehicles. 
The advantages are low fume emission and heat contamination, quietness and cleanliness, and generally lower maintenance requirements.

Types of Batteries.
The two primary types of batteries used are lead-acid and nickel-iron-alkaline. A lead-acid battery will provide 2.0 to 2.3 V per cell, while the nickel-iron-alkaline battery will provide 1.2 V per cell. Voltages used for modern battery-powered mobile equipment are 12, 24, 36, 48, and 72, with some higher voltages used in larger equipment.

The advantages of the lead-acid battery are a lower initial cost, high ampere-hour capacity, and low resistance to self-discharge. The nickel-iron-alkaline battery is desirable because of its longer life expectancy, resistance to physical damage, noncorrosive electrolyte (KOH), and more rapid and less critical recharge rates.
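A quick illustrative calculation (ours, not from the source) shows how the per-cell voltages determine the series cell count of a pack:

```python
def cells_required(pack_voltage, volts_per_cell):
    """Number of series cells for a nominal pack voltage.

    Pack voltages are chosen as whole multiples of the cell voltage,
    so rounding absorbs floating-point error (e.g., 36 / 1.2).
    """
    return round(pack_voltage / volts_per_cell)

print(cells_required(36, 2.0))  # 18 lead-acid cells at 2.0 V/cell
print(cells_required(36, 1.2))  # 30 nickel-iron-alkaline cells at 1.2 V/cell
```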

Recharging Times. 
These are adjusted for different batteries by dividing the Ah rating of the battery by the 8-h Ah rating of the charger and multiplying by 8. For example, a battery having a 600-Ah rating and a 450-Ah charger will require

(600 ÷ 450) × 8 ≈ 10.67 h
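The recharging rule above, sketched as a helper function (the name is ours):

```python
def recharge_hours(battery_ah, charger_8h_ah):
    """Recharge time: (battery Ah / charger 8-h Ah rating) x 8 h."""
    return battery_ah / charger_8h_ah * 8

# The worked example: a 600-Ah battery on a 450-Ah charger
print(round(recharge_hours(600, 450), 2))  # 10.67 h
```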

Battery Charging Area. 
Warehouses utilizing multishift operations require remote charging of the equipment batteries. These areas require hoists, conveyors, chargers, and required safety features (e.g., exhaust hoods and fans, shower and eyewash station, etc.).


Factors in Internal-Combustion-Engine Selection and Use

Internal Combustion Engines. 
These are used in outdoor applications, in well-vented interiors, and in nonhazardous environments. They are generally powered by gasoline or liquefied propane gas, although compressed natural gas (CNG) is a promising alternative. In anticipation of new government regulations, manufacturers are redesigning engines with reduced emissions and improved fuel efficiency.

Industrial Engine.
Typically, this heavier engine is designed to operate in a lower rpm range than an automobile engine. It can be expected to give about 10,000 h of useful life before overhaul. At an equivalent operating speed of 20 mi/h (32 km/h) in an automobile, this would equate to 200,000 mi (321,800 km).

Automotive Engine. 
This is of lighter construction than the industrial engine and, because of the quantities in which it is produced, is of relatively lower cost. It generally operates most efficiently in a higher rpm range than the industrial engine and can be expected to give about 7000 h of useful life prior to overhaul. This life is equivalent to about 140,000 mi (225,260 km) of automobile travel. An advantage of this type of engine is the availability of replacement parts through automotive supply firms.
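The hour-to-mileage equivalences quoted for the industrial and automotive engines follow from the assumed 20 mi/h operating speed; as a sketch (function name is ours):

```python
def equivalent_miles(useful_life_hours, speed_mph=20):
    """Road-mileage equivalent of an engine's useful life in hours."""
    return useful_life_hours * speed_mph

print(equivalent_miles(10_000))  # industrial engine: 200000 mi
print(equivalent_miles(7_000))   # automotive engine: 140000 mi
```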

Air-Cooled Engine.
This is restricted to lighter-duty applications where weight, size, and initial cost are the prime concerns. The absence of a separate cooling system is a distinct advantage, although this engine’s life expectancy is a relatively short 1500 to 2000 h of operation.

Diesel Engine.
Typically, this type is installed in large pieces of equipment where the additional size and cost is not significant. However, because of recent improvements in engine design, diesel engines are becoming more prominent in smaller trucks. This is largely due to the reduced need for periodic maintenance, greater fuel economy per hour of operation, and longer expected life—up to 20,000 h.

Compressed Natural Gas Engine.
This engine design is ideal for indoor use due to its low noise and low emissions. CNG is a low-cost fuel and the truck can run a full shift before requiring fuel. Other benefits include fewer oil changes and lower maintenance costs. This truck is suited for most types of use and can accommodate loads up to 6000 lb (2700 kg).


Factors in Wheel Selection and Use

Solid Wheels.
These are made of semisteel, forged steel, molded plastic, hard rubber, or composite materials. They should be limited to small diameters and low-speed movement and should not be used to transmit power. They have low rolling resistance but a short life span when overloaded or subjected to rough floor conditions. They will cause load vibration because of a lack of cushioning.

Rubber-Cushioned Tired Wheels.
These consist of a metal wheel with a machined diameter onto which a rubber tire is pressed or molded. They have the lightest load-carrying capacity of the wheels used on mobile equipment. Minimal power is required to move material, since rolling friction is minimized.

Oil-Resistant Tired Wheels. 
The tires are made of special oil-resistant rubber compounds which will resist the degrading effects of oil on rubber.

High-Traction Tired Wheels.
The tires are made of rubber impregnated with abrasive or other materials to give additional traction on ice or in wet conditions.

Low-Power Tired Wheels.
The tires are fabricated from rubber compounds that offer minimum rolling resistance and have lower power requirements, causing less drain on battery-operated equipment.

Nonmarking Tired Wheels. 
The tires use a rubber compound filler other than carbon to avoid floor marking and contamination.

Conductive Tired Wheels.
The tires avoid the chance of static sparking in hazardous or explosive environments by maintaining vehicle-to-floor conductivity.

Laminated Tired Wheels. 
The tires for these wheels are made up of sections of pneumatic tire carcasses threaded onto a steel band. Such tires are extremely tough, with a harsh ride. They are well suited to littered environments, such as scrap yards, and trash handling.

Polyurethane Tired Wheels. 
Though more expensive than rubber, these wheels have a significantly higher load-carrying capacity and are less susceptible to cuts than most rubber and rubber-compound wheels. The hardness of polyurethane tires results in a harsher ride and increased plant-floor damage.


Microfiltration (MF), like ultrafiltration, is a pressure-driven membrane process. In the case of microfiltration, the pore size is typically in the 0.03- to 0.1-μm range. In this range, individual viruses will pass through the membrane.

However, bacteria, colloidal suspended solids, floc particles, and parasite cysts are prevented from passing. No removal of dissolved solids is accomplished by microfiltration membranes. Microfiltration membranes come in both in-line (cartridge filter) arrangements and cross-flow arrangements.

For cross-flow systems, the rejection rate is usually 5 to 10 percent, that is, 90 to 95 percent recovery. Higher recovery rates are feasible; however, the overall flux through the membrane is reduced. Cross-flow microfiltration membranes can be further subdivided into tubular and immersed-membrane types.

Tubular membranes are arranged such that the raw-water source is introduced under pressure into a tube that surrounds the membrane. Typically, the permeate water proceeds from the outside of the membrane into the lumen in the center of the membrane, where the permeate is then conducted back to a manifold for collection.

The solids remain on the outside of the membrane in the pressure tube and are periodically blown down from the system. Immersed membranes can be placed directly in a process tank, where the permeate is drawn through the membrane by a suction pump on the permeate collection manifold. This arrangement is possible because of the low transmembrane pressure exhibited by this type of membrane.

The flux through the membrane during operation depends on the suspended solids in the feedwater as well as temperature (viscosity). A typical flux range for the membrane surface is 10 to 50 gal/(day·ft²). Increasing transmembrane pressure also increases the flux.
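As a hypothetical sizing sketch (numbers and function name are ours), the flux range above fixes the membrane area required for a given design flow:

```python
def membrane_area_ft2(peak_flow_gpd, flux_gpd_per_ft2):
    """Membrane area for a peak design flow at an assumed operating flux.

    Flux should fall in the 10-50 gal/(day*ft^2) range cited in the text.
    """
    return peak_flow_gpd / flux_gpd_per_ft2

# 1 mgd peak flow at a conservative 20 gal/(day*ft^2):
print(membrane_area_ft2(1_000_000, 20))  # 50000.0 ft^2
```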

Microfiltration membranes are typically cleaned periodically by backpulsing. This can be accomplished in a variety of ways, some incorporating air and some using just product water or water containing a small amount of hypochlorite. Some immersed-membrane systems also use diffused air to agitate the membranes and prevent solids from caking on the membrane surface.

Capital costs for microfiltration membrane system capacity range from $0.50 to approximately $1.00 per gallon per day. This range is primarily a function of the solids concentration in the feed stream. Also note that membrane systems must be sized on peak flow rather than average daily flow, which can significantly affect the cost of a microfiltration membrane system.
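A rough planning estimate can be built from the quoted unit-cost range; this is an illustrative sketch only, with a midrange default we chose:

```python
def mf_capital_cost(peak_flow_gpd, dollars_per_gpd=0.75):
    """Rough MF capital cost: $0.50-$1.00 per gal/day of peak capacity."""
    return peak_flow_gpd * dollars_per_gpd

print(mf_capital_cost(1_000_000))        # midrange: $750,000 for 1 mgd
print(mf_capital_cost(1_000_000, 1.00))  # upper bound: $1,000,000
```

Note that the flow used must be the peak flow, not the average daily flow, per the sizing caveat above.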

Operational costs are primarily associated with power and vary from 50 to 120 hp/mgd. Pretreatment chemicals include coagulants, and when biofouling is a problem, hypochlorite solution can be used on some membranes for cleaning. Citric acid can also be used for membranes that do not tolerate chlorine. Microfiltration membranes are typically not compatible with polymer addition, which is not required, since even small “pin flocs” cannot permeate the membranes.



Ultrafiltration (UF) is a pressure-driven membrane process similar in some ways to reverse osmosis. However, in this case, as opposed to the situation encountered in an RO system, the flow of water through the membrane is generally through pores and not through the space between the lattices in the polymer, so osmotic pressure is not a factor.

Furthermore, there is little or no chemical interaction between the transported species and the membrane itself. UF membranes may be tailor-made to meet virtually any type of removal specification. Although they cannot reject any dissolved salts or other low-molecular-weight soluble matter, UF systems can remove very fine particulate material and high-molecular-weight organic matter from water streams.

To remove any low-molecular-weight soluble species with a UF membrane, a process must occur to convert the soluble matter to particulate form. As examples, soluble phosphorus may be precipitated with a metal salt, soluble organics may be adsorbed onto powdered activated carbon, and soluble iron may be oxidized to particulate form.

All of these processes and others will allow a UF membrane to remove even soluble matter. There are several different membrane module geometries on the market. Spiral-wound membrane modules are similar to RO membranes. Although they have a high membrane density, it is not possible to feed them with a high suspended solids concentration because of the narrow passages available through the module.

This geometry is suitable for the separation of high-molecular-weight organics in combination with low suspended solids. A second geometry is tubular, in which the feed flow passes through tubes at high velocity. The tube diameters range from about 0.5 to 1 in (12 to 25 mm). Because of the large tube diameter, tubular membranes are able to effectively separate liquid from biomass slurries of relatively high concentration, about 2 to 3 percent solids by weight.

The permeate flow in the tubes is from the inside to the outside. Pressure, typically on the order of 70 to 90 psig (483 to 621 kPa), is required to force the permeate through the membrane pores. A high feed-flow velocity [12 to 15 fps (3.7 to 4.6 m/s)] through the tubes is required to ensure sufficient shear at the membrane surface to keep the membrane clean and reduce concentration polarization.
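The paired U.S. and SI values quoted for tubular membranes follow from the standard unit conversions; the small check below simply reproduces them:

```python
PSI_TO_KPA = 6.89476   # 1 psi = 6.89476 kPa
FPS_TO_MPS = 0.3048    # 1 ft/s = 0.3048 m/s

def psi_to_kpa(psi: float) -> float:
    return psi * PSI_TO_KPA

def fps_to_mps(fps: float) -> float:
    return fps * FPS_TO_MPS

# Quoted operating ranges for tubular UF membranes:
print(round(psi_to_kpa(70)), round(psi_to_kpa(90)))        # 483 621 (kPa)
print(round(fps_to_mps(12), 1), round(fps_to_mps(15), 1))  # 3.7 4.6 (m/s)
```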

A third variety of geometry is immersed hollow fiber, in which the membranes are submerged directly in the feed fluid without the need for a pressure vessel. Hundreds of small-diameter [0.07-inch (1.9-mm)] vertically oriented hollow fibers are supported at the top and bottom. The permeate flow is from the outside to the inside so that fibers can handle very high solids concentrations, as only clean, pure water flows inside the membranes.

The vacuum required to operate the hollow fiber design is very small, about −2 to −4 psig (−13.8 to −27.6 kPa). This vacuum is normally provided by a standard pump connected to the membranes via a piping manifold. To create the required shear on the membrane surface, low-pressure air is diffused intermittently under the fibers. The air rises through the fiber bundle, providing the necessary shear.

The nature of the solids–liquid separation requirement will dictate the kind of membrane module geometry that is best suited for a project. For small flows and very low suspended solids, the spiral-wound membrane can be used. Care must be taken to ensure that high solids concentrations do not develop within the membrane module by keeping the recirculation flow high.

For small flows and high suspended or emulsified solids concentrations, including tramp oils, tubular membranes should be selected. Free oils are very fouling to the membrane and must be prevented from coming in contact with the membrane. For high flows and high suspended solids concentrations, immersed hollow-fiber membranes are usually the most economical because of their lower pressure operation and hence lower energy requirements.


The CA and TC should have a relatively minor involvement in the construction process. The primary responsibility of performing construction inspection should remain with the project manager and design professionals.

The CA’s and TC’s involvement is that of a second set of eyes in relation to the design professionals and project manager. They should become aware of how and where systems are going in, which allows the TC to interface the testing, balancing, and commissioning functions into the project schedule so all work can be completed before owner occupancy.

Basic CA service includes periodic site visits and scheduling the owner’s staff participation in the commissioning activity. During this time the CA will relay any supplemental observations of construction activities to either the construction manager, the design professionals, or the owner for attention and remedy.

Detailed punch lists, enforcement of contractor work to meet the schedule, or other activities required to bring the job to a state of completeness prior to testing, balancing, and commissioning are not intended, but may be necessary to meet the owner’s occupancy date.

The CA’s site inspections are primarily focused on ensuring that the systems being installed are accessible and lend themselves to being tested, commissioned, and maintained once the job is complete. Often general contractors will indicate the work is done when, in fact, perhaps the power is not connected, or the controls that make it function are not ready.

False starts create delays and frustration.

It is the CA’s responsibility to inform the owner of whether the contractor is meeting the schedule. Often there will be a disagreement between the contractor and the CA with regard to how well the contractor is meeting the schedule.

Although this can place the CA in an uncomfortable position, it is the CA’s responsibility to always give the owner an honest and unbiased analysis of the situation.


The owner negotiates an agreement for commissioning services directly with the selected commissioning services provider. This agreement shall incorporate provisions relating to conflicts of interest, the scope of commissioning services, lines of communication, and authority. The owner must have the full allegiance of the commissioning authority during the project.

Accordingly, the agreement prohibits the commissioning authority from having any business affiliation with, financial interest in, or contract with the design professionals, the contractor, subcontractors, or suppliers for the duration of the agreement. Violation of such prohibitions constitutes a conflict of interest and is cause for the owner to terminate the agreement.

The scope of services includes responsibilities during the design, construction, and postoccupancy periods. In the design process, the commissioning authority should review each design submittal for commissioning-related qualities.

These qualities include design consistency with design intent, design criteria, maintainability, serviceability, and physical provisions for testing. The services provided should also include commissioning specifications, with emphasis on identifying systems to be tested and the associated test criteria.

The commissioning authority participates in onboard review sessions and various other design meetings with the design professionals. The intent is to ensure that the commissioning authority has as much familiarity with the design as is feasible. This allows an understanding of the design for effective reviewing, providing input to the design professionals regarding commissioning requirements which would not be readily evident to many design professionals.

Commissioning authority participation in the design process results in increased effectiveness during the construction and postoccupancy phases of the project. During construction, the commissioning authority performs a quality-assurance role relative to the contractor’s commissioning activities.

The scope includes review of the qualifications of the contractor’s selected testing contractor, all equipment submittals and shop drawings related to systems to be commissioned, commissioning submittals, O&M and systems manuals, and training plans. Commissioning submittals include the commissioning plan and schedule, static and component testing procedures (verification testing), and systems functional performance testing procedures.

The commissioning authority’s scope also includes witnessing and verifying the results of air and hydronic balancing, static tests, component tests, and systems functional performance tests. To the extent the owner’s staff is involved in witnessing the balancing, equipment testing, and systems functional performance testing, the commissioning authority’s scope can be reduced to witnessing critical functional performance tests and a sample of other verification and functional tests.

The owner’s staff benefits from witnessing as much of the balancing and functional performance testing as possible. The commissioning authority’s function, then, is to ensure that the testing contractor and test technicians properly understand and execute verification and systems functional performance testing procedures.

The commissioning authority’s agreement should also include analysis of the functional performance test results, review of the contractor’s proposed corrective measures when test results are not acceptable, and recommendation of alternate or additional corrective measures, as appropriate in the commissioning authority’s scope.

Clear lines of communication and authority must be indicated. Communications and authority of the commissioning authority should be tailored to the level of involvement of the owner in the project. If the owner is intimately involved in all aspects of design and construction, the owner should manage the commissioning authority’s involvement.

In this case, the commissioning authority would communicate formally with the design professionals through the owner. During construction, the commissioning authority should communicate formally with the contractor only through the established lines of communication—that is, directly through the design professional, or indirectly through the design professional via the owner. In either case, it is essential that the owner be kept informed of problems and decisions evolving during the commissioning process.

In cases where the owner is only marginally involved in the day-to-day business of the project, it may be desirable to allow the commissioning authority to communicate with the design professionals directly on commissioning issues. This is recommended only when the owner is very confident of the expertise and judgment of the selected commissioning authority, and only when the commissioning authority and design professionals have a good working relationship.

The authority of the commissioning authority should be limited to recommending improvements to the design or operation of the systems, solutions to problems encountered, and acceptance or rejection of test results. The commissioning authority should not directly order the contractor, or design professionals, to make changes. Only the design professionals may make changes in the design or order construction changes. The owner must speak with only one voice.


The reciprocating steam engine economically utilizes a greater amount of the available energy of the steam at high pressures than at low pressures. The turbine, on the other hand, has the advantage at low pressures and in cases where large expansions of the steam are used.

To obtain the same steam expansion in a reciprocating engine, the cylinders would be very large with greater cylinder losses. The greater expansion of steam in turbines permits the recovery of a large amount of power from what otherwise (in the reciprocating steam engine) would be waste steam.

The most important energy losses that occur in a turbine include: steam leakage, wheel and blade rotation, left-over velocity of the steam leaving the blades, heat content of the exhaust steam, radiation, and friction.

The steam consumption of the reciprocating engine and the turbine are approximately the same, but the steam turbine operates on the complete-expansion cycle, whereas the reciprocating engine operates on a cycle with incomplete expansion.

The turbine, therefore, converts into work a greater amount of heat per pound of steam used.

Although rising steam pressures and temperatures and improved turbine designs continue to raise the overall efficiencies of power plants, the thermal efficiency of steam turbines hovers around 35 percent.
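The 35 percent figure can be restated as a plant heat rate, the heat input required per kilowatt-hour of output. A minimal sketch, using the standard heat equivalent of a kilowatt-hour:

```python
BTU_PER_KWH = 3412.14  # heat equivalent of 1 kWh

def heat_rate_btu_per_kwh(thermal_efficiency: float) -> float:
    """Heat rate = heat input per kWh of electrical output."""
    return BTU_PER_KWH / thermal_efficiency

print(round(heat_rate_btu_per_kwh(0.35)))  # 9749 Btu/kWh at 35% efficiency
```

In other words, at 35 percent efficiency roughly 9750 Btu of fuel heat must be supplied for every kilowatt-hour generated.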

The steam turbine, however, is well adapted to driving electric generators of the alternating-current type, as the desirable speeds of the two machines are the same, and it is almost universally used in steam plants larger than about 1000 kW capacity.


Efficiency may be defined as the ratio of output to input. In a boiler plant, the input is the total amount of heat in the fuel consumed while the output is represented by the total amount of heat in the steam generated.

The difference between the input and the output in any case represents the various losses, some of which are controllable and some uncontrollable. The heat equivalent of these losses, especially the avoidable ones, is of great importance to the power plant engineer, as a careful study of the variables affecting these losses often leads to clues for reducing them.

To this end, a heat balance is of inestimable value, for it provides the engineer with a complete accounting of the heat supplied to the plant and its distribution to the various units.

A complete heat balance is composed of the following items, each of which will be discussed individually:

1. Loss resulting from the evaporation of moisture in the fuel (moisture loss, fuel).

2. Loss from the evaporation of moisture in the air supplied for combustion (moisture loss, air).

3. Loss of the heat carried away by the steam formed by the burning of the hydrogen in the fuel (hydrogen loss).

4. Loss of the heat carried away by the dry flue gases (dry gas loss).

5. Loss from burning to carbon monoxide instead of carbon dioxide (incomplete combustion loss).

6. Loss from unburned combustible in the ash and refuse (combustible loss).

7. Loss due to unconsumed hydrogen, hydrocarbons, radiation, and unaccounted (unaccounted loss).

8. Heat absorbed by the boiler.

9. Total heat in 1 lb. of fuel consumed.

These items are computed on the basis of the number of Btu per lb. of fuel, with “As Received,” “As Fired,” “Dry,” or “Combustible” selected as the pound base. Whichever base is selected, all items in the heat balance have the same base.

For example, suppose that the “As Fired” base is chosen as a basis for computations, then whenever the words or symbols indicating “per lb. of fuel” appear, the inference “per lb. of fuel AS FIRED” is intended.
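The accounting described above can be sketched in code. The individual loss values below are hypothetical, chosen only to show that items 1 through 8 must sum to item 9, the total heat per pound of fuel, all on the same (“As Fired”) base:

```python
# Hypothetical heat balance, Btu per lb of fuel "As Fired".
# All values are illustrative, not measured data.
heat_balance = {
    "moisture loss, fuel": 300,            # item 1
    "moisture loss, air": 20,              # item 2
    "hydrogen loss": 550,                  # item 3
    "dry gas loss": 1800,                  # item 4
    "incomplete combustion loss": 150,     # item 5
    "combustible loss": 400,               # item 6
    "unaccounted loss": 280,               # item 7
    "heat absorbed by the boiler": 10500,  # item 8 (the useful output)
}

# Item 9: total heat in 1 lb of fuel consumed.
total_heat = sum(heat_balance.values())
efficiency = heat_balance["heat absorbed by the boiler"] / total_heat

print(total_heat)            # 14000 Btu/lb
print(round(efficiency, 3))  # 0.75, i.e., 75% boiler efficiency
```

The complement of the efficiency, 25 percent here, is exactly the sum of the seven losses, which is why the heat balance points the engineer directly at the avoidable ones.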


In a boiler, the saturated or slightly wet steam is made to pass from the steam drum to a separate set of tubes inside the boiler where the steam receives more heat than is generated in the furnace.

The superheated steam is collected in another drum from which it passes on to the turbine room; in some instances, the drum is omitted and the superheated steam passes directly from the superheater to the turbine room.

In the superheater, the temperature of the steam is increased while the pressure remains the same, or drops slightly on account of the friction in the superheater tubes and piping to the turbine room.

The particular section of a boiler in which the saturated steam from the steam drum is heated to a higher temperature (or to a superheated condition) is known as the superheater. Superheaters are broadly classed according to the source of heat:

1. Convection type—This is usually placed in the gas passages of the boiler where the heat is transmitted by convection.

2. Radiant type—This is situated in the boiler walls which receive radiant heat.

The degree of superheat is largely determined by the position of the superheater, the amount of superheating surface, and the velocity of the steam through the superheater tubes. In the case of radiant types, the temperature is affected by furnace temperatures.

The radiant type has the advantage that it can be added to existing installations.

Often both types of superheaters are used in conjunction with each other as the combination of the two will generally result in a greater utilization of the heat from the burning fuel and in more efficient overall operation of the boiler.

Since there is no water inside the superheater tubes, they will be subjected to more severe service than the other tubes in a boiler. Special attention is given to the materials used in their design, manufacture and installation.


The heat liberated by the combustion of the fuel in the furnace is immediately absorbed, partly in heating the fresh fuel, but mainly by the gaseous products of combustion, causing a rise in their temperatures.

The heat evolved and contained in the gaseous products of combustion is transferred through the gas-filled space and then transmitted through the heating plates or tubes into the boiler water. The process of transmission takes place in three distinct ways.

Heat is first imparted to the dry surface of the heating plates or tubes in two ways: first, by radiation from the hot fuel bed, furnace walls, and the flames; and second, by convection from the hot moving gaseous products of combustion.

When the heat reaches the dry surface, it passes through the soot, metal, and scale to the wet surface purely by conduction. From the wet surface of the plate or tube, the heat is carried into the boiler water mainly by convection (but also by some conduction).

The metal of the plate or pipe is covered with a layer of soot on the gas side and a layer of scale on the water side. In addition, a layer of motionless gas is entrapped in the soot while a layer of water and steam adheres to the scale, or, if the boiler is clean, to the metal.

In practically all boilers, only a small portion of the heating surface is so exposed to radiation from the fuel bed, flames, and furnace walls, as to receive heat both by radiation and convection. By far the greater part of the surface receives heat only by convection from the moving gaseous products of combustion.

The greatest resistance to the flow of heat, that is, the greatest drop in the temperature of the gases, takes place before the hot gases reach the dry surface of the heating metal plate or pipe. If a boiler is even moderately clean, the resistance of the metal itself to the flow of heat (the drop in temperature) through it is very small.

The resistance to the passage of heat from the metal into the water (loss of temperature) is also very small. Hence, practically all of the heat imparted to the dry surfaces is transmitted to the boiler water. Increasing the rate at which heat is imparted to the dry surface of the heating metal plate or pipe increases the rate of steam production in the same proportion.

If the initial temperature of the moving gases remains constant, an increase in the velocity with which they pass over the heating metal plate or pipe increases, in an almost direct ratio, the rate at which heat is imparted to the dry surface and, therefore, increases almost directly the rate at which steam is produced.

To increase the capacity of any boiler, more gases are passed over its heating surfaces. A boiler that has its heating surfaces so arranged that the gas passages are long and of small cross-section is more efficient than a boiler in which the gas passages are short and of large cross-section.

To increase the efficiency of water tube boilers, the pipes are bent to increase their length and baffles are inserted in such a way that the heating surfaces are arranged in series with reference to the gas flow, thus making the gas passage longer.

Boilers are rated in boiler-horsepower (BHP), one BHP comprising 10 square feet of the boiler heating surfaces; for example, a boiler with 5000 square feet of heating surface would be rated at 500 BHP.
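The rating rule stated above reduces to a one-line calculation, reproduced here with the text's own example:

```python
def boiler_horsepower(heating_surface_sqft: float) -> float:
    """One BHP per 10 ft^2 of heating surface, per the rating rule above."""
    return heating_surface_sqft / 10

print(boiler_horsepower(5000))  # 500.0 BHP, matching the example in the text
```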
