Aluminum alloys are readily machined and offer such advantages as almost unlimited cutting speed, good dimensional control, low cutting force, and excellent tool life. The relative machinability of commonly used alloys is classified as A, B, C, D, or E.

Cutting Tools
Cutting tool geometry is described by seven elements: top or back rake angle, side rake angle, end relief angle, side relief angle, end cutting edge angle, side cutting edge angle, and nose radius.

The depth of cut may be in the range of 1/16-1/4 in. for small work up to 1/2-1 1/2 in. for large work. The feed depends on the finish desired. Rough cuts vary from 0.006 to 0.080 in. and finishing cuts from 0.002 to 0.006 in.

Speed should be as high as possible, up to 15,000 fpm.

Unit power requirements for an alloy such as 6061-T651 are 0.30-0.50 hp/(in.3/min) for a 0° rake angle and 0.25-0.35 hp/(in.3/min) for a 20° rake angle.
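
To illustrate how such unit-power figures are used, here is a minimal sketch with hypothetical cutting conditions: power at the spindle is unit power multiplied by the material removal rate, which for turning is MRR = 12 × V × f × d (V in fpm, f in in./rev, d in in.).

```python
def turning_mrr(speed_fpm, feed_ipr, depth_in):
    """Material removal rate for turning, in^3/min (MRR = 12 * V * f * d)."""
    return 12.0 * speed_fpm * feed_ipr * depth_in

def required_power(mrr_in3_min, unit_power_hp):
    """Spindle power estimate: unit power (hp per in^3/min) times MRR."""
    return mrr_in3_min * unit_power_hp

# Hypothetical example: 1000 fpm, 0.010 in./rev feed, 0.100 in. depth of cut,
# 0.4 hp/(in.3/min) as a midpoint of the 0-degree-rake range quoted above.
mrr = turning_mrr(1000, 0.010, 0.100)   # 12 in^3/min
hp = required_power(mrr, 0.4)           # about 4.8 hp at the spindle
```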

Lubrication such as light mineral or soluble oil is desirable for high production. Alloys with a machinability rating of A or B may not need lubrication.

The main types of cutting tool materials include water-hardening steels, high-speed steels, hard-cast alloys, sintered carbides, and diamonds:

1. Water-hardening steels (plain carbon or with additions of chromium, vanadium, or tungsten) are lowest in first cost. They soften if cutting edge temperatures exceed 300-400°F; have low resistance to edge wear; and are suitable for low cutting speeds and limited production runs.

2. High-speed steels are available in a number of forms, are heat treatable, permit machining at rapid rates, allow cutting edge temperatures of over 1000°F, and resist shock better than hard-cast or sintered carbides.

3. Hard-cast alloys are cast close to finish size, are not heat treated, and lie between high-speed steels and carbides in terms of heat resistance, wear, and initial cost. They will not take severe shock loads.

4. Sintered carbide tools are available in solid form or as inserts. They permit speeds 10-30 times faster than high-speed steels and can be used for most machining operations. They should be used only when they can be supported rigidly and when sufficient power and speed are available. Many types are available.

5. Mounted diamonds are used for finishing cuts where an extremely high-quality surface is required.


The motion between gear teeth as they go through mesh is a combination of sliding and rolling. The type of gear, the operating load, speed, temperature, method of application of the lubricant, and metallurgy of the gears are all important considerations in the selection of a lubricant.

Industrial gearing may be enclosed, where the gears and the bearings that support them are operated off the same lubricant system; or open, where the bearings are lubricated separately from the gears themselves.

Due to the high sliding contact encountered in enclosed worm and hypoid gears, lubricant selection for these should be considered separately from lubrication of other types of enclosed gears.

As with all equipment, the first rule in selecting a gear lubricant is to follow the manufacturer’s recommendation, if at all possible. In general, one of the following types of oils is used:

Rust- and Oxidation-Inhibited (R & O) Oils.
R & O oils are high-quality petroleum-based oils containing rust and oxidation inhibitors. These oils provide satisfactory protection for most lightly to moderately loaded enclosed gears.

Extreme-Pressure (EP) Oils.
EP oils are usually high-quality petroleum-based oils containing sulfur- and phosphorus-based extreme-pressure additives. These products are especially helpful when high-load conditions exist and are a must in the lubrication of enclosed hypoid gears.

Compounded Oils.
These are usually petroleum-based oils containing 3 to 5 percent fatty or synthetic fatty oils (usually animal fat or acidless tallow). They are usually used for worm gear lubrication, where the fatty content helps reduce the friction generated under high sliding conditions.

Heavy Open-Gear Compounds.
These are very heavy bodied tarlike substances designed to stick tenaciously to the metal surfaces. Some are so thick they must be heated or diluted with a solvent to soften them for application. These products are used in cases where the lubricant application is intermittent.

A number of gear lubrication models and viscosity selection guides exist. In the United States, the most widely used selection method employs the American Gear Manufacturers Association (AGMA) standards. Under its specifications for enclosed industrial gear drives, the AGMA has defined lubricant numbers, which designate viscosity grades for gear oils.

Open gears operate under conditions of boundary lubrication. The lubricant can be applied by hand or via drip-feed cups, mechanical force-feed lubricators, or sprays.

Heavy bodied residual oils with good adhesive and film-strength properties are required to survive the relatively long, slow, heavy tooth pressure while maintaining some film between applications of lubricant. Several PC software programs exist to aid in lubricant selection to reduce wear, scuffing, and pitting of gear-tooth surfaces.


Antifriction or rolling-element bearings use balls or rollers to substitute rolling friction for sliding friction. This type of bearing has closer tolerances than do plain bearings and is used where precision, high speeds, and heavy loads are encountered.

In antifriction bearings, a lubricant facilitates easy rolling; reduces the friction generated between the rolling elements and the cages or retainers; prevents rust and corrosion; and, in the case of grease, serves as a seal to prevent the entry of foreign material.

High-quality rust- and oxidation-inhibited (R & O) oils are generally recommended for bearings. Extreme-pressure and antiwear additives may also be desirable under heavy or high-shock loads. Oils with no additives oxidize easily and turn gummy; only once-through drip applications can justify the use of straight, additive-free oils, and even there R & O oils are probably just as inexpensive today.

When temperature control and cooling are a consideration, oil-circulating systems are the choice; a bearing lubricated with either oil or grease does not carry away heat on its own. The table below gives general guidelines for selecting oils of the proper viscosity for antifriction bearings.

Most antifriction bearings are grease lubricated because of the economics of simple seal and housing designs. Greased bearings offer adequate protection from dirt and water, and require infrequent attention.

The selection of the proper type and grade of grease depends on the operating conditions and the method of application. Generally, soft greases (e.g., NLGI 1) are preferred for use at low temperatures and in central systems.

Harder greases (e.g., NLGI 2 or 3) perform better at high speeds. Ball bearings do best with NLGI 2 or 3 grease, while spherical, cylindrical, needle, or tapered roller bearings with broad face line contact design require NLGI 2 or less.

Care should be taken not to overgrease antifriction bearings. Generally, the bearing housing should be one-third to one-half full. Overfilling can lead to several problems: ruptured seals, excessive temperature buildup, and eventual failure due to starvation of the bearing for lubricant.

With repeated greasing and continued high temperature, the oil in the grease may be driven off, leaving the soap in the bearing. Soap makes up about 8 to 10 percent of a grease, so if the oil portion is repeatedly driven off, the soap can rapidly fill the voids in the bearing.

Eventually the bearing will accept no more grease because it is filled with soap, a nonlubricant. The bearing then soon fails, and the unlucky grease sales representative is told that the grease is of poor quality.

Many PC software programs are available that can aid in the selection of lubricants in the initial design and in failure analysis. These can be obtained from many lubricant manufacturers.


What Are the Health Effects of Combustion Products?

The major categories of products resulting from combustion are carbon monoxide (CO), nitrogen oxides (NOx), particulate material, and polynuclear aromatic hydrocarbons (PAH). Carbon monoxide is an odorless, tasteless, and colorless gas.

Nitrogen oxides (NOx) include the nitrogen compounds NO, NO2, N2O, OONO, ON(O)O, N2O4, and N2O5. All are irritant gases that can affect human health.

Particulates represent a broad class of chemical and physical particles, including liquid droplets. Combustion conditions can affect the number, particle size, and chemical speciation of the particles.

Polynuclear aromatic hydrocarbons (PAH) concentrations are usually low indoors. PAH concerns stem from their potential to act synergistically, antagonistically or in an additive fashion with each other and other contaminants. The chemical composition and concentrations of these compounds vary with combustion conditions.

Combustion products are released wherever incomplete combustion can occur, including: wood, gas, and coal stoves, heaters, and cooking surfaces; unvented kerosene heaters or appliances; unvented grills; portable generators; fireplaces under downdraft conditions; and environmental tobacco smoke. Vehicle exhaust is a primary source, particularly from underground or attached garages, as well as from the outdoor air.

Health Effects
The impact on human health varies with the category of combustion product, so each is treated separately below. Carbon monoxide (CO) has about 250 times the affinity of oxygen for hemoglobin. When carboxyhemoglobin (COHb) forms, it reduces the hemoglobin available to carry oxygen to body tissues.

CO, therefore, acts as an asphyxiating agent. Common symptoms are dizziness, dull headache, nausea, ringing in the ears and pounding of the heart. Should CO inhalation induce unconsciousness, damage to the central nervous system, the brain and the circulatory system may occur.

Acute exposure can be fatal. Young children and persons with asthma, anemia, heart disease, or hypermetabolic diseases are more susceptible. The extent to which nitrogen oxides (NOx) affect human health is unclear; the most information is available about nitrogen dioxide (NO2).

NO2 symptoms are irritation to eyes, nose and throat, respiratory infections and some lung impairment. Altered lung function and acute respiratory symptoms and illness have been observed in controlled human exposure studies and in epidemiological studies of homes using gas stoves.

Studies in the United States and Britain have found that children exposed to elevated levels of NO2 have twice the incidence of respiratory illness as children not exposed. Combustion particulates can affect lung function. The smaller respirable particles (less than 2.5 μm in size) present a greater risk because they are carried deeper into the lungs.

Particles may serve as carriers of contaminants, such as PAH, or as mechanical irritants that interact with chemical contaminants. The health effects of polynuclear aromatic hydrocarbons (PAH) are very difficult to determine or predict. The propensity of PAHs to act in concert with other contaminants complicates any effort to attribute singular cause and effect. It is known that some PAHs are carcinogens while others exhibit co-carcinogenic potential.


The basic safety relay arrangement uses an internal latching sequence to set up two or more output relays into an energized condition when all circuits are healthy and after a reset contact has been closed. The relays remain latched in until the input circuit is broken either by the guard door switch or by the E-stop.

A typical hardware-based implementation of a guard door safety function will link the guard door switches in series with an E-stop switch to provide an input to a latching relay. The latching relay will trip when the guard door is opened or when the E-stop is pressed.

To improve the safety of the circuits an additional relay is used to prevent the latching relay from being reset unless the safety control circuits are healthy (i.e. free of dangerous faults). For example, in figure below a simplified safety relay is shown where K3 is a relay that must be energized before the latching relay K1 can be set. K3 will not energize unless the power control contactor(s) C has been released, proving that it is not held in by another stray circuit or by a mechanical defect.

In practice relay K1 is usually duplicated by a second channel or redundant relay K2 and both relays must be energized and latched to close the output circuits. K3 is often arranged with multiple contacts and expansion units to enable many drives to be interlocked from the same logic.

The example shown in figure above uses a safety monitoring relay unit to perform the essential logic functions required to provide safety integrity. These are:

• Checking that input signals are in a safe state before a latching reset is allowed.
• Detecting stuck contactors by monitoring their auxiliary contacts.
• Detecting wiring faults in the input and output circuits.
• Timing and logic for controlling the reset actions, etc.

The safety monitoring relay modules ensure that the safety interlocks and E-stop functions are able to operate independently of the basic control system actions at all times. This is one of the most essential features of any safety control system.
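
The latching and monitoring behavior described above can be sketched as a simple state machine. This is a minimal illustration with hypothetical names mirroring the figure (K1/K2 as the redundant latching relays, K3 as the health check); a real safety relay unit also supervises wiring faults and timing, which are omitted here.

```python
class SafetyRelay:
    """Simplified model of the dual-channel latching safety relay
    described above (hypothetical sketch, not a certified design)."""

    def __init__(self):
        self.k1 = self.k2 = False   # redundant latching relays K1, K2
        self.outputs_closed = False

    def k3_healthy(self, contactor_released):
        # K3 energizes only if the power contactor C has dropped out,
        # proving it is not welded shut or held in by a stray circuit.
        return contactor_released

    def reset(self, guard_closed, estop_released, contactor_released):
        # Latching is allowed only when every input is safe and K3
        # proves the output contactor is released.
        if guard_closed and estop_released and self.k3_healthy(contactor_released):
            self.k1 = self.k2 = True
        self.outputs_closed = self.k1 and self.k2

    def monitor(self, guard_closed, estop_released):
        # Breaking either input channel drops both relays immediately.
        if not (guard_closed and estop_released):
            self.k1 = self.k2 = False
        self.outputs_closed = self.k1 and self.k2

r = SafetyRelay()
r.reset(guard_closed=True, estop_released=True, contactor_released=True)
assert r.outputs_closed            # machine may run
r.monitor(guard_closed=False, estop_released=True)
assert not r.outputs_closed        # opening the guard trips the latch
```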


Arc-flash boundaries need to be established around electrical equipment such as switchboards, panel boards, industrial control panels, motor control centers, and similar equipment if you plan to work on or in the proximity of exposed energized components.

Parts are considered exposed if they are energized and not enclosed, shielded, covered, or otherwise protected from contact. Work on these parts includes activities such as examinations, adjustment, servicing, maintenance, or troubleshooting.

Equipment energized below 240 V does not require an arc-flash boundary calculation unless it is powered by a transformer rated 112.5 kVA or larger. The arc-flash boundary is the distance from energized parts within which a person working at the time of an arc flash risks permanent injury unless wearing flame-resistant clothing.

Permanent injury results when an arc flash delivers an incident energy of 1.2 calories per square centimeter (cal/cm2) or greater, enough to cause at least second-degree burns. This distance can only be determined effectively by calculating the destructive potential of an arc.

First you must determine the magnitude of the arc based on the available short circuit current, then estimate how long the arc will last based on the interrupting time of the fuse or circuit breaker. Finally, you will need to calculate how far away an individual must be to avoid being exposed to an incident energy of 1.2 cal/cm2.
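
As a rough sketch of that last step, the flash-protection boundary can be estimated with one widely cited formula, the Lee method as given in NFPA 70E, Dc = √(2.65 × MVAbf × t). The numbers below are hypothetical, and IEEE 1584's empirical equations should be used for real assessments.

```python
import math

def flash_boundary_ft(bolted_fault_mva, clearing_time_s):
    """Flash-protection boundary in feet per the Lee method
    (Dc = sqrt(2.65 * MVA_bf * t)). Illustrative sketch only."""
    return math.sqrt(2.65 * bolted_fault_mva * clearing_time_s)

# Hypothetical example: a 20 MVA bolted fault cleared in 6 cycles (0.1 s)
# gives a boundary of roughly 2.3 ft.
d = flash_boundary_ft(20, 0.1)
```

Note how strongly the result depends on clearing time: the same fault cleared in 30 cycles instead of 6 more than doubles the boundary, which is why the protective device study is an essential input.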

It may sound like a lot of math and factoring in of potentials, but believe me the extra time you take to determine the arc flash boundary is well worth your safety and well-being.

Calculating flash protection boundaries for systems over 600 V requires performing a flash hazard analysis coupled with either the NFPA 70E Hazard Risk Category/PPE tables or the Incident Energy Formula.

Additionally, Section 4 of IEEE 1584, Guide for Arc Flash Hazard Calculations, states that the results of the arc flash hazard analysis are used to identify the flash-protection boundary and the incident energy at assigned working distances throughout any position or level in the overall electrical system. The purpose is to establish safe working distances and the PPE required to protect workers from injury. A flash-hazard analysis comprises the following three electrical system studies:

- A short circuit study
- A protective device time-current coordination study
- The flash-hazard analysis and application of the data

Arc flash hazard analysis
To perform an arc flash hazard analysis, you need to start by gathering information on the building’s power distribution system. This data should include the arrangement of components on a one-line drawing with nameplate specifications of every device on the system and the types and sizes of cables.

The local utility company should be contacted so that you can get the minimum and maximum fault currents entering the facility. Next you will want to perform a short circuit analysis and a coordination study. You will need this information to put into the equations provided in NFPA 70E or the IEEE Standard 1584.

These equations will give you the flash protection boundary distances and incident energy potentials you need to determine your minimum PPE requirements. In many ways an arc flash analysis is actually a study in risk management.

You can be very conservative in your analysis and the results will almost always indicate the need for category 4 PPE. On the other hand, you can perform the analysis and make adjustments to reduce the arc fault conditions resulting in reduced PPE requirements.

However, use caution when adjusting your calculations. Reducing the bolted fault current reduces the arc fault current, but it can actually produce a worse situation: a lower arc current may fall below the protective device's instantaneous trip setting, so the arc persists far longer. For example, if you reduce the available current at a motor from 4000 to 1800 A, the arc fault energy can increase from 0.6 to 78.8 cal/cm2. This is the exact opposite of the outcome you might expect before doing the math.

Keep in mind that you are risking OSHA violations and fines if you choose nominal compliance. On the other hand, you can actually be increasing the risk of injury if you force workers to unnecessarily wear cumbersome PPE.

This can also result in little or no high voltage maintenance being performed, which will eventually compromise safety and proper equipment operation. It might prove beneficial to get a registered professional engineering firm to perform arc flash hazard calculations on your behalf and have them recommend appropriate actions and the lowest appropriate category of PPE.


In general, the cause of most plant fires is the exposure of a fuel to a source of heat. Where the fuel, such as accumulations of trash or debris, is not necessary for plant operation, fires can be prevented by removal of the fuel.

Where the exposed fuel, such as raw materials or finished products, is essential, the source of heat must be protected or controlled. Some of the most common sources of heat and fuel that cause plant fires are heating and cooking equipment, smoking, electric equipment, burning, flammable liquids, open flames and sparks, incendiary (arson), spontaneous ignition, gas fires, and explosions.

These sources of heat are summarized here.

Heating and Cooking Equipment
Defective or Overheated Equipment. This includes improperly maintained or operated furnaces, smoke pipes, vents, portable and stationary heaters, industrial and commercial furnaces, and incinerators.

Chimneys and Flues. Fire can arise from ignition of accumulated soot or inadequate separation from combustible material.

Hot Ashes and Coals. These can cause problems when improper disposal or disposal in combustible containers or with combustible debris occurs.

Improper Location. This can mean installation too close to combustibles or the accumulation of combustibles near an appliance.

Electric Equipment
Wiring and Distribution Equipment. These include short-circuit faults, arcs, and sparks from damaged, defective, or improperly installed components.

Motors and Appliances. These include careless use, improper installation, and poor maintenance.

Flammable Liquids
Storage and Handling. These hazards include careless spills, leaking fuel, and overturned tanks.

Inadequate Safeguards. Fires can be started by improper storage containers or facilities, improper electrical equipment near open processes, or improper bonding and grounding of transfer processes.

Open Flames and Sparks
Trash and Rubbish. Burning trash and rubbish can furnish the fuel for accidental ignition; careless burning ignites other material.

Sparks and Embers. Problems include ignition of roof coverings by sparks from chimneys, incinerators, rubbish fires, locomotives, etc.

Welding and Cutting. Hazards include ignition of combustibles by the arc or flame itself, heat conduction through the metals being welded or cut, molten slag and metal from the cut, and sparks.

Friction, Sparks from Machinery. Friction heat or sparks resulting from impact between two hard surfaces are a hazard.

Thawing Pipes. Open-flame devices are a hazard when used in the dangerous practice of thawing pipes.

Other Open Flames. These include ignition sources such as candles, locomotive sparks, incinerator sparks, and chimney sparks.

Lightning. This includes building fires caused by the effects of lightning.

Exposure. Exposure fires are those originating in places other than buildings, but which ignite buildings.

Incendiary, Suspicious. These are fires that are known to be or thought to have been set, fires set to defraud insurance companies, fires set by mentally disturbed persons, and fires set by malicious persons.

Spontaneous Ignition. This means fires resulting from the uncontrolled spontaneous heating of materials.

Gas Fires and Explosions. These are fires and explosions that involve gas that has escaped from piping, storage tanks, equipment, or appliances and fires caused by misuse or faulty operation of gas appliances.

Smoking. The use of smoking materials in flammable or explosive atmospheres, or discarding smoking materials in combustible debris.


Through the years, body odor and tobacco smoke were the prime factors in assessing the perceived quality of the indoor air. Cleanliness has not always been next to Godliness. The decline in the practice of bathing during the Middle Ages, which persisted through the Renaissance, can be attributed to the attitude of the Christians toward cleanliness.

The Church attacked the preoccupation with body comfort and attractiveness offered by bathing as tending toward sin. The “odor of sanctity” prevailed and lice were called “pearls of God.” St. Paul is said to have observed, “The purity of the body and its garments means the impurity of the soul.”

More frequently the lack of personal hygiene during the Renaissance was an economic concern. Poor people worked and slept in the only clothing they owned. While the rich owned more changes of clothes, there is little evidence that they were laundered between wearings.

This historical perspective helps explain the current emphasis on the occupant as the primary pollution source, which has resulted in ventilation rates driven by occupant population.

Smoking tobacco has alternately been accepted and rejected by society and the law. King James I was the first to denounce the habit as a “corruption of health and manners.” During the 17th century, most of Europe severely penalized or forbade the consumption of tobacco. In 1911, 14 states prohibited cigarettes for moral and/or health reasons.

Today these concerns are reflected in the prohibitions governing the sale of tobacco to minors, advertising restrictions, and the increasing limitations on areas where smoking is permitted. Many locales have demonstrated their concerns about the potential health effects of tobacco smoke by severely restricting smoking practices in public access buildings, especially following the aborted OSHA ruling in 1994.

Communicable disease followed a historical path similar to that of cleanliness and bathing. Knowing nothing about bacteria, viruses, and other pathogens, early settlers nevertheless found that swamps brought disease and, conversely, that clean mountain air purged the body of fatal “consumption.”

This led to the development of sanitariums for TB rehabilitation and the use of high ventilation rates in health care facilities that were highly congested and contaminated with communicable pathogens. These three indoor air quality factors (body odor, communicable disease, and tobacco smoke, meaning the sidestream fumes and the exhaled smoke from the active smoker that become part of the common breathing air) strongly influence our standards today.

The need for fresh air has historically been measured by the need to counteract these human-generated pollutants; thus, the common ventilation requirements call for so many cubic feet of air per minute per occupant. Yet the sources of many contaminants today (e.g., building materials, combustion, cleaning supplies) have very little relationship to the number of occupants in a building.

As we move into more modern times, it becomes clear that indoor air quality has had its own “Back to the Future” scenario. With the development of the first nuclear-powered submarine in 1954, submarines suddenly had the capability to remain submerged for weeks or months at a time. This required a means of controlling, cleaning, and revitalizing the quality of the indoor air.

Through the use of special ventilation and filter systems, air conditioning, chilled water systems, main oxygen supply, CO2 removal, CO-H2 burners and atmosphere analyzing systems, the internal air in a modern submarine can be maintained at a designated quality level. The designers of the Nautilus’ environmental system were ahead of today’s building designers by nearly half a century.

More recently these advancements have again been illustrated by the space shuttles in which comfortable and safe environments have been designed, installed, and maintained in outer space. Thus, the technology currently exists to create acceptable and safe indoor environments without access to fresh outdoor air.


With these devices the guard door is linked mechanically to the contacts of the switch using positive mode operation in compliance with standards and as applicable to E-stop switches. There are three main types of mechanical actuation. These are:

1. Tongue-operated
2. Hinge-operated
3. Cam/plunger-operated.

Tongue-operated limit switches
Features: These devices comprise two separate elements: a switch body and an actuator tongue. Tongues are metal probes specially shaped to fit into the switch, rather like a key.

They are fitted to the edge of a sliding door or onto a removable guard. When the tongue enters the switch body it engages a mechanism that closes or opens the internal electrical contacts.

The mechanism is designed to prevent easy bypassing or ‘cheating’ of the switch since the tongue is coded like a key. Usually these types have self-ejecting spring-loaded mechanisms so that the tongue will only remain in place if it is attached to the door of the guard. If the tongue were to be removed from the door and just pushed into the switch it would not stay in place.

• Requires reasonably accurate alignment
• Should not be subjected to constant high-amplitude vibration
• Can be used on sliding, hinged and lift-off guards.

Advantages: Low cost, versatile, certified for safety. Almost tamper-proof if the tongue design is good.

Disadvantages: Not suited to pharmaceutical applications and some food applications where good cleaning is essential.

Hinge-actuated limit switches
The device is mounted over the hinge-pin of a hinged guard. The opening of the guard is transmitted via a positive mode operating mechanism to the control circuit contacts.

Advantages: When properly installed these types of switches are ideal for most hinged guard doors where there is access to the hinge centerline. They can isolate the control circuit within 3° of guard movement and they are extremely difficult to defeat without dismantling the guard.

Disadvantages: Care must be taken on large wide guard doors as an opening movement of only 3° can still result in a significant gap at the opening edge on very wide guard doors. It is also important to ensure that a heavy guard does not put undue strain on the switch actuator shaft.

Cam-operated limit switches
This type of arrangement usually takes the form of a positive mode acting limit (or position) switch and a linear or rotary cam. It is usually used on sliding guards and when the guard is opened the cam forces the plunger down to open the control circuit contacts.

Features: The simplicity of the system enables the switch to be both small and reliable. It is extremely important that the switch plunger can only extend when the guard is fully closed. This means that it may be necessary to fit stops to limit the guard movement in both directions.

Advantages: Wide range of low-cost switches are available. Available in very wide range of sizes. Can be made extremely durable and rugged. Easy for maintenance crews to inspect and repair.

Disadvantages: Relatively easy to defeat. Cannot be used on lift-off guards. Requires careful installation and design of strikers: for example, it is necessary to fabricate a suitably profiled cam that must operate within defined tolerances. This system can be prone to failures due to wear, especially in the presence of abrasive materials or with badly profiled cams.



Steam has some design and operating advantages over hot water heating systems. For instance, one pound of steam at 212°F gives up approximately 1000 Btu (its latent heat of condensation) when it condenses into one pound of hot water.

On the other hand, a hot water heating system with supply water at 200°F and return water at 180°F gives up only 20 Btu per pound of water (1 Btu/lb/°F). Another advantage is that steam, driven by its operating pressure, flows throughout the system on its own, while a pump and motor are needed to circulate hot water.

In an open vessel, at standard atmospheric pressure (sea level), water vaporizes or boils into steam at a temperature of 212°F. But the boiling temperature of water, or any liquid, is not constant. The boiling temperature can be changed by changing the pressure on the liquid.

If the pressure is to be changed, the liquid must be in a closed vessel. In the case of water in a heating system, the vessel is the boiler. Once the water is in the boiler it can be boiled at a temperature of 100°F or 250°F or 300°F as easily as at 212°F.

The only requirement is that the pressure in the boiler be changed to the one corresponding to the desired boiling point. For instance, if the pressure in the boiler is 0.95 pounds per square inch absolute (psia), the boiling temperature of the water will be 100°F. If the pressure is raised to 14.7 psia, the boiling temperature is raised to 212°F.

If the pressure is raised again to 67 psia, the temperature is correspondingly raised to 300°F. A common low pressure HVAC steam heating system will operate at 15 pounds per square inch gage pressure (psig), which is a pressure of 30 psia and a temperature of 250°F.
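
The pressure/temperature pairs quoted above can be collected into a small lookup table. This is only an illustrative sketch built from the text's rounded figures, not a real steam table:

```python
# Saturation pairs from the text (psia -> boiling temperature in °F).
SATURATION = {0.95: 100, 14.7: 212, 30.0: 250, 67.0: 300}

def boiling_point_f(pressure_psia):
    """Return the boiling temperature for the nearest tabulated pressure."""
    nearest = min(SATURATION, key=lambda p: abs(p - pressure_psia))
    return SATURATION[nearest]

print(boiling_point_f(14.7))   # 212 (atmospheric pressure at sea level)
print(boiling_point_f(29.7))   # 250 (a 15 psig low-pressure heating system)
```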

The amount of heat required to bring the water to its boiling temperature is its sensible heat. Additional heat is then required for the change of state from water to steam.

This addition of heat is steam’s latent heat content or “latent heat of vaporization.” To vaporize one pound of water at 212°F to one pound of steam at 212°F requires 970 Btu. The amount of heat required to bring water from any temperature to steam is called “total heat.”

It is the sum of the sensible heat and latent heat. The total heat required to convert one pound of water at 32°F to one pound of steam at 212°F is 1150 Btu. The calculation is as follows: the heat required to raise one pound of water at 32°F to water at 212°F is 180 Btu of sensible heat.

970 Btu of latent heat is added to one pound of water at 212°F to convert it to one pound of 212°F steam. Notice that the latent heat is over 5 times greater than sensible heat (180 Btu × 5.39 = 970 Btu). The total heat is 1150 Btu (180 + 970).

Refer to the figure below:

Point 1 — One pound of ice (a solid) at 0°F.

Point 1 to Point 2 — 16 Btu of sensible heat added to raise the temperature of the ice from 0°F to 32°F. Specific heat of ice is 0.5 Btu/lb/°F.

Point 2 to Point 3 — Ice changing to water (a liquid) at 32°F. It takes 144 Btu of latent heat to change one pound of ice to one pound of water.

Point 3 to Point 4 — 180 Btu of sensible heat added to raise the temperature of the water from 32°F to 212°F. Specific heat of water is 1.0 Btu/lb/°F.

Point 4 to Point 5 — Water changing to steam (a vapor) at 212°F. It takes 970 Btu of latent heat to change one pound of water to one pound of steam.

Point 5 to Point X — X Btu of sensible heat added to raise the temperature of the steam from 212°F to X°F. This is called superheating the steam, and the result is “superheated steam.” For example, if the final temperature of the superheated steam is 250°F, then 19 Btu of sensible heat would have to be added (250°F – 212°F = 38°F; 38°F × 0.5 Btu/lb/°F specific heat of steam × 1 lb of steam = 19 Btu).
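The whole path from Point 1 to Point X can be totaled in a few lines using the specific heats and latent heats given above; a sketch (the function name is illustrative):

```python
# Specific heats (Btu/lb/°F) and latent heats (Btu/lb) from the text.
C_ICE, C_WATER, C_STEAM = 0.5, 1.0, 0.5
H_FUSION, H_VAPORIZATION = 144.0, 970.0

def heat_to_superheat(final_temp_f):
    """Btu required to take 1 lb of ice at 0°F to steam at final_temp_f (>= 212°F)."""
    sensible_ice = C_ICE * (32 - 0)              # Point 1 -> 2: 16 Btu
    melt_ice = H_FUSION                          # Point 2 -> 3: 144 Btu
    sensible_water = C_WATER * (212 - 32)        # Point 3 -> 4: 180 Btu
    vaporize = H_VAPORIZATION                    # Point 4 -> 5: 970 Btu
    superheat = C_STEAM * (final_temp_f - 212)   # Point 5 -> X
    return sensible_ice + melt_ice + sensible_water + vaporize + superheat
```

For 250°F superheated steam this returns 1329 Btu: the 1310 Btu needed to reach saturated steam at 212°F plus the 19 Btu of superheat worked out above.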


About half the coal presently mined in the United States is cleaned mechanically to remove impurities and supply a marketable product. Mechanical mining has increased the proportion of fine coal and noncoal minerals in the product.

At the preparation plant run-of-mine coal is usually given a preliminary size reduction with roll crushers or rotary breakers. Large or heavy impurities are then removed by hand picking or screening.

Tramp iron is usually removed by magnets. Before washing, the coal may be given a preliminary size fractionation by screening.

Nearly all preparation practices are based on density differences between coal and its associated impurities. Heavy-medium separators using magnetite or sand suspensions in water come closest to ideal gravity separation conditions. Mechanical devices include jigs, classifiers, washing tables, cyclones, and centrifuges.

Fine coal, less than 1⁄4 in (6.3 mm), is usually treated separately, and may be cleaned by froth flotation. Dewatering of the washed and sized coal may be accomplished by screening, centrifuging, or filtering, and finally, the fine coal may be heated to complete the drying.

Before shipment the coal may be dustproofed and freezeproofed with oil or salt. Removal of sulfur from coal is an important aspect of preparation because of the role of sulfur dioxide in air pollution.

Pyrite, the main inorganic sulfur mineral, is partly removed along with other minerals in conventional cleaning. Processes to improve pyrite removal are being developed. These include magnetic and electrostatic separation, chemical leaching, and specialized froth flotation.

Coal may heat spontaneously, with the likelihood of self-heating greatest among coals of lowest rank. The heating begins when freshly broken coal is exposed to air. The process accelerates with increase in temperature, and active burning will result if the heat from oxidation is not dissipated.

The finer sizes of coal, having more surface area per unit weight than the larger sizes, are more susceptible to spontaneous heating.

The prevention of spontaneous heating in storage poses a problem of minimizing oxidation and of dissipating any heat produced. Air may carry away heat, but it also brings oxygen to create more heat.

Spontaneous heating can be prevented or lessened by (1) storing coal underwater; (2) compressing the pile in layers, as with a road roller, to retard access of air; (3) storing large-size coal; (4) preventing any segregation of sizes in the pile; (5) storing in small piles; (6) keeping the storage pile as low as possible (6 ft is the limit for many coals); (7) keeping storage away from any external sources of heat; (8) avoiding any draft of air through the coal; (9) using older portions of the storage first and avoiding accumulations of old coal in corners. It is desirable to watch the temperature of the pile.

A thermometer inserted in an iron tube driven into the coal pile will reveal the temperature. When the coal reaches a temperature of 50°C (122°F), it should be moved.
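The monitoring rule reduces to a simple threshold check; a sketch (the names are illustrative, not from the text):

```python
COAL_MOVE_THRESHOLD_C = 50  # the 50°C (122°F) action point from the text

def c_to_f(temp_c):
    """Convert a Celsius temperature to Fahrenheit."""
    return temp_c * 9 / 5 + 32

def should_move_coal(pile_temp_c):
    """True once the pile has reached the temperature at which it should be moved."""
    return pile_temp_c >= COAL_MOVE_THRESHOLD_C
```

A pile reading 40°C can continue to be watched; at 50°C or above the coal should be relocated rather than cooled in place.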

Using water to put out a fire, although effective for the moment, may only delay the necessity of moving the coal. Furthermore, this may be dangerous because steam and coal can react at high temperatures to form carbon monoxide and hydrogen.


Coal is mined by either underground or surface methods. In underground mining the coal beds are made accessible through shaft, drift, or slope entries (vertical, horizontal, or inclined, respectively), depending on location of the bed relative to the terrain.

The most widely used methods of coal mining in the United States are termed continuous and conventional mining. The former makes use of continuous miners which break the coal from the face and load it onto conveyors, shuttle cars, or railcars in one operation.

Continuous miners are of ripping, boring, or milling types or hybrid combinations of these. In conventional mining the coal is usually broken from the face by means of blasting agents or by pressurized air or carbon dioxide devices.

In preparation for breaking, the coal may be cut horizontally or vertically by cutting machines and holes drilled for charging explosives. The broken coal is then picked up by loaders and discharged to conveyors or cars.

A method that is increasing in use is termed long-wall mining. It employs shearing or plowing machines to break coal from more extensive faces. Eighty long-wall mines are now in operation.

Pillars to support the roof are not needed because the roof is caved under controlled conditions behind the working face. About half the coal presently mined underground is cut by machine and nearly all the mined coal is loaded mechanically.

An important requirement in all mining systems is roof support. When the roof rock consists of strong sandstone or limestone (relatively uncommon), little or no support may be required over large areas.

Most mine roofs consist of shales and must be reinforced. Permanent supports may consist of arches, crossbars and legs, or single posts made of steel or wood.

Screw or hydraulic jacks, with or without crossbars, often serve as temporary supports. Long roof bolts, driven into the roof and anchored in sound strata above, are used widely for support, permitting freedom of movement for machines.

Drilling and insertion of bolts is done by continuous miners or separate drilling machines. Ventilation is another necessary factor in underground mining to provide a proper atmosphere for personnel and to dilute or remove dangerous concentrations of methane and coal dust.

The ventilation system must be well-designed so that adequate air is supplied across the working faces without stirring up more dust. When coal occurs near the surface, strip or open-pit mining is often more economical than underground mining.

This is especially true in states west of the Mississippi River where coal seams are many feet thick and relatively low in sulfur. The proportion of coal production from surface mining has been increasing rapidly and now amounts to over 60 percent.

In preparation for surface mining, core drilling is conducted to survey the underlying coal seams, usually with dry-type rotary drills. The overburden must then be removed. It is first loosened by ripping or drilling and blasting.

Ripping can be accomplished by bulldozers or scrapers. Overburden and coal are then removed by shovels, draglines, bulldozers, or wheel excavators. The first two may have bucket capacities of 200 yd³ (153 m³).

Draglines are most useful for very thick cover or long dumping ranges. Hauling of stripped coal is usually done by trucks or tractor-trailers with capacities up to 240 short tons [218 metric tons (t)]. Reclamation of stripped coal land is becoming increasingly necessary.

This involves returning the land to near its original contour, replanting with ground cover or trees, and sometimes providing water basins and arable land.


What is total quality management?

The term quality may simply be defined as providing customers with products and services that meet their needs in an effective manner. TQM focuses on customer satisfaction. The three words that make up this concept—"total," "quality," and "management"—are discussed separately below.

This calls for the involvement of all aspects of the organization in satisfying the customer, a goal that can be accomplished only if the usefulness of a partnership environment is recognized at each stage of the business process, both within and outside the organization, as applicable. With respect to the external stage of the business process, the critical factors for a successful supplier-customer relationship are

1. Development of a customer-supplier relationship based on mutual trust, respect, and benefit
2. Development of in-house requirements by customers
3. Customers making suppliers clearly understand their requirements
4. Customers selecting their potential suppliers with mechanisms in place to achieve zero defects
5. Regular monitoring of suppliers' processes and products by the customers

Any company or organization in pursuit of TQM must define the term quality clearly and precisely. It may be said that quality is deceptively simple but endlessly complicated, and numerous definitions have been proposed, such as "quality = people + attitude"; "providing error-free products and services to customers on time"; and "satisfying the requirements and expectations of customers".

Another definition is offered here: "quality means providing both external and internal customers with innovative goods and services that meet their needs effectively." This definition has three important dimensions:

1. It focuses on satisfying the needs of customers
2. Organizations using this definition provide both products and services, which jointly determine the customer's perception of the company in question
3. The concerned companies have both external and internal customers

According to a survey reported in Ref. 1, 82% of the definitions indicated that quality is defined by the customer, not by the supplier. The top five quality measures identified by the respondents were customer feedback (22.92%), customer complaints (16.67%), net profits (10.42%), returning customers (10.42%), and product defects (8.33%).

The approach to management is instrumental in determining companies' ability to attain corporate goals and allocate resources effectively. TQM calls for a radical change in involving employees in company decision-making, as their contribution and participation are vital to orienting all areas of business in providing quality products to customers.

It must be remembered that over the years the Fortune 1000 companies in the United States have reported such benefits of employee involvement as increased employee trust in management, improved product quality, improved employee safety/health, increased productivity, improved management decision making, increased worker satisfaction, improved employee quality of work life, improved union-management relations, improved implementation of technology, improved organizational processes, elimination of layers of management, and better customer service.

Companies considering the introduction of TQM will have to see their employees in a new way, for the change in management philosophy needed to truly manage total quality is nothing short of dramatic. Furthermore, it is important that the management infrastructure lay the foundation for involving the entire workforce in the pursuit of customer satisfaction.


One of the pioneers of the TQM concept has expressed his views on improving quality. His fourteen-point approach is as follows [4]:

1. Establish consistency of purpose for improving services.

2. Adopt the new philosophy; previously accepted levels of defects, delays, and mistakes are no longer tolerable.

3. Stop reliance on mass inspection, as it neither improves nor guarantees quality. Remember that teamwork between the firm and its suppliers is the path to process improvement.

4. Stop awarding business on the basis of price alone.

5. Discover problems. Management must work continually to improve the system.

6. Take advantage of modern methods used for training. In developing a training program, take into consideration such items as

• Identification of company objectives
• Identification of the training goals
• Understanding of goals by everyone involved
• Orientation of new employees
• Training of supervisors in statistical thinking
• Team-building
• Analysis of the teaching need

7. Institute modern supervision approaches.

8. Eradicate fear so that everyone involved may work to his or her full capacity.

9. Tear down department barriers so that everyone can work as a team member.

10. Eliminate items such as goals, posters, and slogans that call for new productivity levels without the improvement of methods.

11. Make your organization free of work standards prescribing numeric quotas.

12. Eliminate factors that inhibit employee workmanship pride.

13. Establish an effective education and training program.

14. Develop a program that will push the above 13 points every day for never-ending improvement.


Temperature
The first step in evaluating a chilled water system is to determine the required chilled water supply temperature. For any HVAC system to provide simultaneous control of space temperature and humidity, the supply air temperature must be low enough to simultaneously satisfy both the sensible and latent cooling loads imposed.

Sensible cooling is the term used to describe the process of decreasing the temperature of air without changing the moisture content of the air. However, if moisture is added to the room by the occupants, infiltrated outdoor air, internal processes, etc., the supply air must be cooled below its dew point to remove this excess moisture by condensation.

The amount of heat removed with the change in moisture content is called latent cooling. The sum of the two represents the total cooling load imposed by a building space on the chilled water cooling coil.

The required temperature of the supply air is dictated by two factors:
1. The desired space temperature and humidity setpoint and
2. The sensible heat ratio (SHR) defined by dividing the sensible cooling load by the total cooling load.

On a psychrometric chart, the desired space conditions represent one end point of a line connecting the cooling coil supply air conditions and the space conditions. The slope of this line is defined by the SHR. An SHR of 1.0 indicates that the line has no slope, since there is no latent cooling.

The typical SHR in comfort HVAC applications will range from about 0.85 in spaces with a large number of people to approximately 0.95 for the typical office. The intersection between this “room” line and the saturation line on the psychrometric chart represents the required apparatus dewpoint (ADP) temperature for the cooling coil.

However, since no cooling coil is 100% efficient, the air leaving the coil will not be at a saturated condition, but will have a temperature about 1–2°F above the ADP temperature. While coil efficiencies as high as 98% can be obtained, the economical approach is to select a coil for about 95% efficiency, which typically results in the supply air wet bulb temperature being about 1°F lower than the supply air dry bulb temperature.

Based on these typical coil conditions, the required supply air temperature can be determined by plotting the room conditions point, drawing a line through it with a slope equal to the SHR, determining the ADP temperature at its intersection with the saturation line, and then selecting a supply air condition on this line based on a 95% coil efficiency.

For a chilled water cooling coil, approach is defined as the temperature difference between the entering chilled water and the leaving (supply) air. While this approach can range from as low as 3°F to as high as 10°F, the typical value for HVAC applications is approximately 7°F. Therefore, to define the required chilled water supply temperature, it is only necessary to subtract 7°F from the supply air dry bulb temperature.
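The chain of quantities described above (SHR, supply air temperature, coil approach) can be sketched in a few lines, assuming the typical 7°F approach; the function names and the example numbers are illustrative, not from the text:

```python
def sensible_heat_ratio(sensible_load, total_load):
    """SHR = sensible cooling load divided by total cooling load."""
    return sensible_load / total_load

def chilled_water_supply_temp(supply_air_db_f, approach_f=7.0):
    """Required chilled water supply temperature (°F):
    supply air dry bulb minus the coil approach."""
    return supply_air_db_f - approach_f

# A typical office space (SHR about 0.95) served with 55°F supply air
# would call for roughly 48°F chilled water at the typical 7°F approach.
```

The ADP and supply air condition themselves still come from the psychrometric chart construction described above; the code only captures the final arithmetic.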


In many instances, the decision will be made to replace the existing heating and cooling system rather than rehabilitate it. The old system may be well beyond its expected life.

Many newer systems are more efficient and can quickly pay for themselves in reduced energy bills. The availability of fuels may have changed (e.g., natural gas may now be available) since the system was originally designed and installed.

If the old heating and/or cooling system in the house being rehabilitated is beyond retrofitting and needs to be replaced, there are two primary reasons why it should not simply be replaced with another system of the same size.

The old philosophy of “bigger is better” no longer applies. Systems were traditionally oversized, causing them to cycle on and off frequently. Cycling that results from oversizing is inefficient and hard on the equipment.

Rehab work may also include the addition of more or better insulation and better-performing windows and doors. This will reduce the heating and cooling loads and allow a smaller-capacity system to be installed.

A design load analysis should be conducted to determine the current heating and cooling capacity needs. There are various methods and levels of sophistication for performing these analyses. Most equipment vendors are equipped with worksheets or computer software to estimate the appropriate size of the system for the home.

They will typically perform a sizing calculation as part of the sales process. While such a service from the dealer is available at no cost, it should be remembered that the dealer is selling equipment, not efficiency.

Methods are often over-simplified with factors of safety built in, resulting in over-sized equipment. An alternative is to size the system yourself. There is a multitude of books available that provide instructions, data tables, and examples for performing system sizing calculations.

It is recommended that calculations be performed more than once with different methods and sources to provide confidence in the results. While sizing the system may cost a modest amount of time, lack of experience by the novice estimator may result in mistakes.

Basic estimating techniques may also not properly account for unique aspects of the home. Another alternative is to hire a consultant to size the system. Professional energy specialists and auditors can evaluate the home and provide recommendations on the size and type of equipment.

The advantage here is the benefit of an experienced professional who is focused on energy efficiency, but consulting fees may be hefty.


HVAC systems usually involve a minimum of three design stages. In the preliminary design phase, the
most general combinations of comfort needs and climate characteristics are considered:

Activity comfort needs are listed.
An activity schedule is developed.
Site energy resources are analyzed.
Climate design strategies are listed.
Building form alternatives are considered.
Combinations of passive and active systems are considered.
One or several alternatives are sized by general design guidelines.

For smaller buildings, this analysis is often done by the architect alone. For innovative or unusual systems in smaller buildings, and especially for larger, multiple-zone buildings, consultants such as engineers and
landscape architects often are included.

The team approach is particularly valuable in assessing the strengths of various design alternatives. The architect and the consultants have very different perspectives, and when mutual goals can be clearly agreed on early in the design process, these perspectives are not only mutually supporting but can produce striking innovations whose benefits extend far beyond services to the clients of a particular building.

By setting an example, the team can make available better environments for less energy for hundreds of
subsequent buildings. With inspired teamwork, the distribution of HVAC services can enhance building
form, as many examples in this chapter show.

By the time the design development phase is reached, one of the design alternatives has probably been chosen as the most promising combination of aesthetic, social, and technical solutions for the program.

The consulting engineer (or architect, on a smaller job) is furnished with the latest set of drawings
and the program. Typically, the architectural or mechanical engineer then:

I. Establishes design conditions.
A. By activity, lists the range of acceptable air and surface temperatures, air motions, relative humidities, lighting levels, and background noise levels.
B. Establishes the schedule of operations.

II. Determines the HVAC zones, considering:
A. Activities.
B. Schedule.

C. Orientation.
D. Internal heat gains.

III. Estimates the thermal loads on each zone:
A. For worst winter conditions.
B. For worst summer conditions.
C. For the average condition or conditions that represent the great majority of the building’s
operating hours.
D. Frequently, an estimate of annual energy consumption is made.

IV. Selects the HVAC systems.
Often, several systems will be used within one large building because orientation, activity, or scheduling differences may dictate different mechanical solutions. Especially common is one system for the all-interior zones of large buildings and another system for the perimeter zones.

V. Identifies the HVAC components and their locations.
A. Mechanical rooms.
B. Distribution trees—vertical chases, horizontal runs.
C. Typical in-space components, such as under-window fan-coil units, air grilles,
and so on.

VI. Sizes the components.

VII. Lays out the system. At this stage, conflicts with other systems (structure, plumbing, fire safety, circulation, etc.) are most likely to become evident. Because insufficient vertical clearance is one of the most common building coordination problems with HVAC systems, the layouts must include sections as well as plans.

Opportunities for integration with other systems also become more apparent at this stage: air ducts can also help distribute daylighting, act as sunshading devices, or fulfill other functions.

After the architect and the other consultants hold conferences in which HVAC system layout drawings are compared to those for other primary systems (structure, plumbing, electrical, etc.), design finalizing occurs.

At this final stage, the HVAC system designer verifies the match between the loads on each component and the component’s capacity to meet the load. Final layout drawings then are completed.


The pressure gage is one of the service person’s most valuable tools. Thus, the quality of the work depends on the accuracy of the gages used. Most are precision-made instruments that will give many years of dependable service if properly treated.

The test gage set should be used primarily to check pressures at the low and high side of the compressor. The ammonia gage should be used with a steel Bourdon tube tip and socket to prevent damage.

Once you become familiar with the construction of your gages, you will be able to handle them more efficiently.  Drawn brass is usually used for case material. It does not corrode.

However, some gages now use high-impact plastics. A copper alloy Bourdon tube with a brass tip and socket is used for most refrigerants. Stainless steel is used for ammonia. Engineers have found that moving parts involved in rolling contact will last longer if made of unlike metals.

That is why many top-grade refrigeration gages have bronze-bushed movements with a stainless steel pinion and arbor. The socket is the only support for the entire gage. It extends beyond the case.

The extension is long enough to provide a wrench flat for use in attaching the gage to the pressure source. Never twist the case when threading the gage into the outlet. This could cause misalignment or permanent damage to the mechanism.

NOTE: Keep gages and thermometers separate from other tools in your service kit. They can be knocked out of alignment by a jolt from a heavy tool.

Most pressure gages for refrigeration testing have a small orifice restriction screw. The screw is placed in the pressure inlet hole of the socket. It reduces the effects of pulsations without throwing off pressure readings. If the orifice becomes clogged, the screw can be easily removed for cleaning.

Gage recalibration
Most gages retain a good degree of accuracy in spite of daily usage and constant handling. Since they are precision instruments, however, you should set up a regular program for checking them. If you have a regular program, you can be sure that you are working with accurate instruments.

Gages will develop reading errors if they are dropped or subjected to excessive pulsation, vibration, or a violent surge of overpressure. You can restore a gage to accuracy by adjusting the recalibration screw (see figure below).

If the gage does not have a recalibration screw, remove the ring and glass. Connect the gage you are testing and a gage of known accuracy to the same pressure source. Compare readings at mid-scale.

If the gage under test is not reading the same as the test gage, remove the pointer and reset.

This type of adjustment acts merely as a pointer-setting device. It does not reestablish the original even increments (linearity) of pointer travel. This becomes more apparent as the correction requirement becomes greater.

If your gage has a recalibration screw on the face of the dial, as shown in the figure, remove the ring and glass. Relieve all pressure to the gage. Turn the recalibration screw until the pointer rests at zero.

The gage will be as accurate as when it left the factory if it has a screw recalibration adjustment. Resetting the dial to zero restores accuracy throughout the entire range of dial readings. If you cannot calibrate the gage by either of these methods, take it to a qualified specialist for repair.


The function of the boiler furnace is to convert into heat all the latent chemical energy of the fuel. External heat is applied to the fuel to cause its ignition initially; subsequently the heat is generally supplied by the furnace walls and, in the case of coal, from the bed of glowing fuel.

While combustion is taking place, if the temperature of the elements is lowered, by whatever means, below that of ignition, combustion will become imperfect or cease. Gases developed in a furnace passing too quickly among the tubes of a boiler may be similarly chilled and thus combustion be stopped, causing a waste of fuel and production of large deposits of soot.

Part of the heat developed in the furnace goes from the fuel bed or flame directly into the metal of the tubes by radiation. The rest of the heat raises the temperature of the gases resulting from the combustion—carbon dioxide, nitrogen and water.

These gases pass among the tubes transmitting their heat through the tube walls to the water and steam. Thus the gases are cooled and, since they cannot leave the boiler at a lower temperature than that of the water and steam in the tubes, the amount of heat which can be released by the gases is directly dependent on the temperature of the gases when they enter among the tubes.

It is important, therefore, that the gases be raised to as high a temperature as possible in the furnace. Hence, every factor affecting this temperature should be considered carefully.

The maximum temperature attained depends on compromises:

1. Excess air is required to achieve complete combustion of the fuel, but as more excess air is supplied, the temperature tends to decrease; if the amount of excess air is decreased to too low a point, the amount of heat liberated will be decreased since incomplete combustion results.

2. So much heat can be generated, even with the lowest possible excess air, that the temperatures reached may break down the enclosing refractory brick of the furnace. The absorption of heat by the enclosing brick, together with a large area of water-cooled surface exposed to the heat, may lower the temperature of the furnace and result in poor efficiency.

3. Since the quantity of heat radiated from a burning fuel is dependent upon the duration as well as the temperature, the temperature of the fire will increase as the rate of combustion increases (the relation of fuel to air remaining constant). Rates of combustion of fuel must be matched by appropriate amounts of excess air so as not to produce excessively high temperatures that may cause rapid deterioration of the refractory brick of the furnace.

To protect the brickwork, temperatures are held down by water screens surrounding the furnace, or by water circulated in piping behind or within the brickwork. This permits higher temperatures of combustion to exist with greater heat absorption by the boiler heating surfaces (carrying the water to be turned into steam) both by radiation and by convection from the hotter gases emanating from the fire. The result is greater boiler efficiency.

Furnace Volume
The furnace portion of the boiler and its cubical volume include that portion between the heat sources (grates for coal; jets for pulverized coal, oil, and gas) and the first place of entry into or between the first bank of boiler tubes.

The most suitable furnace volume of a boiler is largely influenced by the following:

1. Kind of fuel;
2. Rate of combustion;
3. Excess air; and,
4. Method of air admission.

Kind of Fuel
Anthracite coal needs no special provision for a combustion volume until the boiler is forced to high ratings. The fixed carbon, which comprises a large percentage of the coal, is burned near or on the grate. The carbon monoxide gas rising from the fuel bed requires some combustion space to mix with the air thoroughly, but not much volume in comparison with the highly volatile ingredients found in oil and gas.

Bituminous coal is high in volatile content and requires considerable furnace volume because a large portion of the volatile combustibles must be burned above the fuel bed. In burning high volatile coal, the distillation of the volatile matter takes place at a comparatively low temperature.

This condition is favorable to the light hydrocarbons which are more readily oxidized than the heavier compounds which distill off at higher temperatures. Slow and gradual heating of the coal is necessary to bring about the desired results. Once the volatile matter starts distilling off, it must be completely oxidized by the proper amount of air in order to approach smokeless combustion.

In burning pulverized coal, oil or gaseous substances, the important factors are the volume of fuel to be burned, the length of the flame travel, and turbulence. In general, the greater the percentage of volatile matter present in fuel, the larger must be the combustion space but these two factors are not directly proportional.

Rate of Combustion
When a boiler is operating at a low rating, the fuel and air have quite a period in which to mix and burn completely. As the rating is increased, both the fuel and air are increased. A proportion of the uniting of oxygen and carbon monoxide takes place in the furnace chamber.

If the boiler rating is increased to such an extent that the mixing has too short a duration in the furnace volume, the gases will enter the tube area and ignite, producing what is known as secondary combustion, unless they have been cooled below ignition temperatures by the exchange to the tubes. The higher the rating expected from a boiler, therefore, the larger the combustion space should be.

Excess Air
To produce efficient combustion, each grade of fuel requires a definite amount of air to unite with a pound of the fuel. The amount of air varies according to the ingredients of the fuel. The furnace volume must be such that the air required has sufficient time to unite with the fuel as well as to take care of the expanded gas at the furnace temperature.

Method of Admission
The method of air admission depends upon the air required which, in turn, depends upon the kind of fuel. The furnace volume for pulverized coal, oil, or gas is large in comparison with that of a stoker installation, not only to admit the additional air properly but also to provide for the long flames.

The different ducts required for air admission necessarily have an effect on the shape of the furnace volume.


In many respects, the four stroke cycle gasoline engine and the four stroke cycle diesel engine are very similar. They both follow an operating cycle consisting of intake, compression, power, and exhaust strokes. They also share the same system for intake and exhaust valves.

The main differences between gasoline engines and diesel engines follow:

(1) In a diesel engine the fuel and air mixture is ignited by the heat generated by the compression stroke, versus the use of a spark ignition system in a gasoline engine. The diesel engine therefore needs no ignition system. For this reason, the gasoline engine is referred to as a spark ignition engine and a diesel engine is referred to as a compression ignition engine.

(2) In a diesel engine the fuel and air mixture is compressed to about one-twentieth of its original volume. In contrast, the fuel and air mixture in a gasoline engine is compressed to about one-eighth of its original volume. The diesel engine must compress the mixture this tightly to generate enough heat to ignite the fuel and air mixture.

(3) The gasoline engine mixes the fuel and air before it reaches the combustion chamber. A diesel engine takes in only air through the intake port. Fuel is put into the combustion chamber directly through an injection system. The air and fuel then mix in the combustion chamber.

(4) The engine speed and the power output of a diesel engine are controlled by the amount of fuel injected into the cylinders; the amount of air is constant. This contrasts with the gasoline engine, where the speed and power output are regulated by limiting the air entering the engine.
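The compression-ratio difference in item (2) can be sketched numerically with the ideal adiabatic relation T2 = T1 * r**(gamma - 1) for compressed air. The intake temperature and gamma below are illustrative assumptions, not figures from the text.

```python
# Sketch: why a ~20:1 compression ratio ignites diesel fuel while a ~8:1
# gasoline-engine ratio does not. Ideal adiabatic compression of air:
# T2 = T1 * r**(gamma - 1). Gamma = 1.4 and 300 K intake are assumptions.

GAMMA = 1.4  # ratio of specific heats for air

def compression_temp(t_intake_k, ratio):
    """End-of-compression air temperature (K) for an ideal adiabatic stroke."""
    return t_intake_k * ratio ** (GAMMA - 1)

t_diesel = compression_temp(300.0, 20.0)  # near 1000 K, hot enough to
t_gas = compression_temp(300.0, 8.0)      # autoignite diesel fuel; ~689 K is not
print(round(t_diesel), round(t_gas))
```

Real cylinders lose heat to their walls, so actual end-of-compression temperatures run lower than this ideal estimate, but the comparison holds.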

c. Advantages.
(1) The diesel engine is much more efficient than a gasoline engine due to the much tighter compression of the fuel and air mixture. The diesel engine produces tremendous low-speed power and gets much greater fuel mileage than its gasoline counterpart. This makes the engine very suitable for large trucks.

(2) The diesel engine requires no ignition tune-ups because there is no ignition system.

(3) Because diesel fuel is of an oily consistency and is less volatile than gasoline, it is not as likely to explode in a collision.

d. Disadvantages.
(1) The diesel engine must be made very heavy to have enough strength to withstand the tighter compression of the fuel and air mixture.

(2) The diesel engine is very noisy.

(3) Diesel fuel creates a large amount of fumes.

(4) Because diesel fuel is not very volatile, cold weather starting is difficult.

(5) A diesel engine operates well only in low-speed ranges compared with gasoline engines. This creates problems when using them in passenger cars, which require a wide speed range.

e. Usage. Diesel engines are widely used in all types of heavy trucks, trains, and boats. In recent years, more attention has been focused on using diesels in passenger cars.

f. Multifuel Engine. The multifuel engine is basically a four stroke cycle diesel engine with the capability of operating on a wide variety of fuel oils without adjustment or modification. The fuel injection system is equipped with a device called a fuel density compensator.

Its job is to vary the amount of fuel to keep the power output constant regardless of the fuel being used. The multifuel engine uses a spherical combustion chamber to aid in thorough mixing, complete combustion, and minimized knock.


Psychrometrics is the science of the properties of moist air, i.e., air mixed with water vapor. This subset of thermodynamics is important to the HVAC industry since air is the primary environment for all HVAC work.

Whereas oxygen, nitrogen, and the other components of dry air remain in the vapor phase throughout the HVAC temperature range, water will undergo a change of state in that same temperature range based on pressure, or in the same pressure range based on temperature.

In the human comfort temperature range, the comfort of people and the quality of the environment for health, for structures, and for preservation of materials are also related to the moisture in the air. Control of the moist-air condition is a primary objective of the HVAC system.

Remember the following:

Air is considered to be saturated with moisture when the evaporation of water into the air at a given temperature and atmospheric pressure is offset by a concurrent condensation of water vapor to liquid. Cooling of saturated air results in dew, fog, rain, or snow. Warm air can hold more moisture than cold air.

Percent relative humidity measures how much water vapor is in the air compared to how much there would be if the air were saturated at the same temperature. The adjective relative is appropriate because the absolute amount of water that air can hold is relative to both temperature and barometric pressure. Changes in barometric pressure related to altitude or to weather conditions affect the moisture-holding capacity of air.
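The definition of percent relative humidity above can be computed directly as the ratio of actual vapor pressure to saturation vapor pressure at the same temperature. The saturation-pressure approximation and the example vapor pressure below are assumptions for illustration, not values from the text.

```python
# Sketch: percent relative humidity as (actual vapor pressure) /
# (saturation vapor pressure at the same temperature) * 100.
# Saturation pressure uses the Magnus approximation (assumed constants).

import math

def saturation_vp_hpa(t_c):
    """Approximate saturation vapor pressure (hPa) over water at t_c degC."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def relative_humidity(vapor_pressure_hpa, t_c):
    return 100.0 * vapor_pressure_hpa / saturation_vp_hpa(t_c)

# The same moisture content (12 hPa vapor pressure) reads as a much higher
# RH in cool air than in warm air: warm air can hold more moisture.
print(round(relative_humidity(12.0, 15.0)))  # cool room, high RH
print(round(relative_humidity(12.0, 30.0)))  # warm room, low RH
```

This also illustrates why heating air without adding moisture drives its relative humidity down.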

A psychrometric chart, which presents the properties of mixtures of moist air on a single graph, is a most useful tool for quantitatively calculating and analyzing HVAC processes. Familiarity and facility with these charts are a must for the HVAC designer.
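One quantity every psychrometric chart plots, moist-air enthalpy, can also be estimated numerically with the standard approximation h = 0.240 t + W (1061 + 0.444 t) in Btu per pound of dry air. The example humidity ratio below is an assumed value.

```python
# Sketch: moist-air enthalpy per lb of dry air, using the standard
# approximation h = 0.240*t + W*(1061 + 0.444*t), with t in degF and
# W the humidity ratio in lb water per lb dry air.

def moist_air_enthalpy(t_f, w):
    """Enthalpy of moist air, Btu per lb of dry air."""
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

# Roughly 75 degF air at about 50% RH (humidity ratio ~0.0093, assumed)
h = moist_air_enthalpy(75.0, 0.0093)
print(round(h, 1))  # about 28 Btu/lb dry air
```

A value near 28 Btu/lb for typical comfort conditions agrees with what one reads off a standard chart.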

It is impossible to remove moisture from air in a heat-exchange cooling process without bringing the air near to the saturation line. Moisture may be removed by desiccants without approaching saturation.

Optimum conditions for human health and comfort range from 70 to 75°F and 40 to 50 percent relative humidity. In terms of perceived comfort, a slightly higher relative humidity can offset a slightly lower ambient temperature.

Moist air in cold climates is a problem and a liability for building designers. Since the inside environment usually is moister than the outside air, insulation and vapor barriers are required to prevent condensation in the structural cavities. Failure to respect this liability may lead to early deterioration of a building. Swimming pools and humidified buildings (hospitals, etc.) are particularly vulnerable.


Symptoms of humidifier fever are “flu-like”: lethargy, arthralgia (neuralgic pain in the joints), myalgia, and fever. Sometimes headaches, polyuria (excessive urination), and weight loss occur. More severe symptoms include shortness of breath and coughing. These systemic and respiratory symptoms occur with initial exposure, i.e., the first day of the work week.

They progressively improve during exposure and in the absence of exposure, only to recur on re-exposure. In a work setting, symptoms appear on Monday, improve during the week and weekend and recur the following Monday. This pattern clearly distinguishes humidifier fever from hypersensitivity pneumonitis.

During the height of the reaction, medical examinations reveal the presence of late inspiratory crackles during auscultation (“listening” to the chest) and impaired gas transfer in the lungs. The chest radiograph is always normal and the lung function is normal between attacks. The cause or causes of humidifier fever are not known.

Some organisms have been isolated and incriminated during outbreaks, but as yet not indisputably identified. However, immunological investigations almost always reveal the presence of precipitating antibodies to antigens extracted from the humidifiers. In bronchial provocation tests, water from the humidifier usually reproduces the symptoms and physiological changes.

Attack rates vary considerably in reported outbreaks. Age appears to be a factor and seems to be associated with the duration of exposure and the development of antibodies. The highest incidence rate is in the winter (probably due to seasonal humidifying activity).

In addition to the tolerance pattern, or the “Monday Morning Phenomenon,” of humidifier fever, it is distinguished from hypersensitivity pneumonitis in other ways. Humidifier fever shows no decrease in lung function or pulmonary fibrosis, there are no radiographic changes, and it seems to be brought on by comparatively low levels of antigen, whereas hypersensitivity pneumonitis is associated with massive antigen exposure.


Sick Building Syndrome (SBS) is a worldwide and complex problem. SBS initially was used to describe a building where a set of varied symptoms was experienced predominantly by people working in an air conditioned environment.

Subsequent early study by Finnegan and others has shown that SBS is not limited to air conditioned facilities and can, in fact, be observed in naturally ventilated buildings. More recent joint research in England by British architect Alexi Marmot and her husband Michael, an epidemiology professor, points to complex interactions between SBS and the occupant’s home environment, poor facility management, and worker satisfaction.

A syndrome, by definition, is a group of signs and symptoms that occur together and characterize a particular abnormality. Frequently they form an identifiable pattern. This makes diagnosis by exclusion possible. For example, organic lesions are not associated with SBS. If an occupant has persistent organic lesions, it can be assumed that the cause of the sickness is not SBS.

A building is said to be “sick,” when 20 percent or more of the occupants voluntarily complain of discomfort symptoms for periods exceeding two weeks, and affected occupants obtain rapid relief away from the building. The 20 percent figure is an arbitrary level derived from earlier ASHRAE efforts to define comfort.

This acceptability level, when transferred to the IAQ arena as a level of temperature acceptability, may mislead managers who look to “20 percent” as a guideline for action. A knowledgeable investigator would ask, “If you have 3,000 employees and only 10 percent are ill, do you wait?

That’s 300 people who could be suffering needlessly from a pollutant in the work place.” A problem closely associated with SBS is building related illness (BRI). Medical diagnosis can identify specific health effects with a known disease etiology, such as Legionnaire’s disease, that are a direct result of building conditions.

Once diagnosed, a BRI can help identify the source contaminants and may reveal ways to remedy the situation. A BRI facility has almost always passed through the SBS stage and usually still has other contributing contaminants or causes at work. The five symptom complexes associated with SBS are discussed below, followed by the major building related illnesses.


Sensible Heat
People, lights, motors, heating equipment and outdoor air are examples of substances that give off sensible heat. A seated person in an office, for instance, gives off approximately 225 Btuh of sensible heat into the conditioned space.

Enthalpy of sensible heat is expressed in Btu/lb. The change in the sensible heat level as measured with an ordinary thermometer is sensible temperature. Sensible temperature is measured in degrees Fahrenheit (°F) and is indicated as dry bulb (db) temperature. Sensible temperatures are written °Fdb; for example, 55°Fdb.

Latent Heat
The definition of latent or hidden heat is: “heat that is known to be added to or removed from a substance but no temperature change is recorded.” The heat absorbed by boiling water is an example of latent heat.

Once water is brought to the boiling point, adding more heat only makes it boil faster; it does not raise the temperature of the water. The level of latent heat is measured in degrees Fahrenheit (°F) and is indicated as dew point (dp) temperature (for example, 60°Fdp). Enthalpy is in Btu/lb. People, water equipment, and outdoor air are examples of substances that give off latent heat. A seated person in an office gives off approximately 225 Btuh of latent heat into the conditioned space.
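The boiling-water example above can be made concrete by comparing the sensible heat needed to bring a pound of water to the boil with the latent heat needed to vaporize it. The round-number water properties below are standard values, not figures from the text.

```python
# Sketch: sensible vs latent heat for one pound of water at atmospheric
# pressure, using standard round-number properties (specific heat of
# liquid water ~1 Btu/(lb*degF), latent heat of vaporization ~970 Btu/lb).

SPECIFIC_HEAT = 1.0        # Btu/(lb*degF), liquid water
LATENT_VAPORIZATION = 970  # Btu/lb at 212 degF, 1 atm

sensible = SPECIFIC_HEAT * (212 - 70)  # raise 1 lb from 70 to 212 degF
latent = LATENT_VAPORIZATION           # boil that pound away at 212 degF

print(sensible, latent)  # 142.0 vs 970: the thermometer records only the first
```

The thermometer registers the 142 Btu of sensible heating, while the much larger latent portion produces no temperature change at all.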

Total Heat
Total heat is the sum of sensible heat and latent heat. It is measured in degrees Fahrenheit (°F) and it is indicated as wet bulb (wb) temperature. For example, 54°Fwb. Total heat level is measured with an ordinary thermometer; however, the thermometer tip is covered with a sock made from a water-absorbing material.

The sock is wetted with distilled water and the thermometer is placed in the air stream in the air handling unit or duct. As air moves across the wet sock, some of the water is evaporated. Evaporation cools the remaining water in the sock and cools the thermometer. The decrease in the temperature of the wet bulb thermometer is called “wet bulb depression.”

For room wet bulb temperature, the wet bulb thermometer is typically mounted in an instrument such as a sling or power psychrometer along with a dry bulb thermometer. Enthalpy is in Btu/lb. A seated person gives off approximately 450 Btuh of total heat (225 Btuh sensible heat plus 225 Btuh latent heat).
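The per-person figures quoted in these sections scale directly to a room load. The sketch below uses the text's values (about 225 Btuh sensible and 225 Btuh latent for a seated person); the 20-person office is an assumed example.

```python
# Sketch: occupant heat gain for a space, using the per-person figures
# from the text (seated person: ~225 Btuh sensible + ~225 Btuh latent).

SENSIBLE_PER_PERSON = 225  # Btuh, from the text
LATENT_PER_PERSON = 225    # Btuh, from the text

def occupant_load(people):
    """Return (sensible, latent, total) heat gain in Btuh."""
    sensible = people * SENSIBLE_PER_PERSON
    latent = people * LATENT_PER_PERSON
    return sensible, latent, sensible + latent

print(occupant_load(20))  # (4500, 4500, 9000) Btuh for a 20-person office
```

The sensible portion drives the dry bulb temperature, while the latent portion must be handled by moisture removal.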
