Following are the general considerations in designing a machine component :

1. Type of load and stresses caused by the load. The load on a machine component may act in several ways, and different internal stresses are set up accordingly.

2. Motion of the parts or kinematics of the machine. The successful operation of any machine depends largely upon the simplest arrangement of the parts which will give the motion required.

The motion of the parts may be :
(a) Rectilinear motion which includes unidirectional and reciprocating motions.
(b) Curvilinear motion which includes rotary, oscillatory and simple harmonic.
(c) Constant velocity.
(d) Constant or variable acceleration.

3. Selection of materials. It is essential that a designer should have a thorough knowledge of the properties of the materials and their behaviour under working conditions. Some of the important characteristics of materials are : strength, durability, flexibility, weight, resistance to heat and corrosion, ability to be cast, welded, or hardened, machinability, electrical conductivity, etc.

4. Form and size of the parts. The form and size are based on judgement. The smallest practicable cross-section may be used, but it must be checked that the stresses induced in the designed cross-section are reasonably safe. In order to design any machine part for form and size, it is necessary to know the forces which the part must sustain. It is also important to anticipate any suddenly applied or impact load which may cause failure.

5. Frictional resistance and lubrication. There is always a loss of power due to frictional resistance, and it should be noted that starting friction is higher than running friction. It is, therefore, essential that careful attention be given to the matter of lubrication of all surfaces which move in contact with others, whether in rotating, sliding, or rolling bearings.

6. Convenient and economical features. In designing, the operating features of the machine should be carefully studied. The starting, controlling and stopping levers should be located on the basis of convenient handling.

The adjustment for wear must be provided by employing the various take-up devices and arranging them so that the alignment of parts is preserved. If parts are to be changed for different products or replaced on account of wear or breakage, easy access should be provided and the necessity of removing other parts to accomplish this should be avoided if possible.

The economical operation of a machine which is to be used for production or for the processing of material should be studied in order to learn whether it has the maximum capacity consistent with the production of good work.

7. Use of standard parts. The use of standard parts is closely related to cost, because the cost of standard or stock parts is only a fraction of the cost of similar parts made to order. The standard or stock parts should be used whenever possible ; parts for which patterns are already in existence such as gears, pulleys and bearings and parts which may be selected from regular shop stock such as screws, nuts and pins.

Bolts and studs should be as few as possible to avoid the delay caused by changing drills, reamers and taps and also to decrease the number of wrenches required.

8. Safety of operation. Some machines are dangerous to operate, especially those which are speeded up to ensure production at a maximum rate. Therefore, any moving part of a machine which is within the zone of a worker is considered an accident hazard and may be the cause of an injury.

It is, therefore, necessary that a designer should always provide safety devices for the safety of the operator. The safety appliances should in no way interfere with operation of the machine.

9. Workshop facilities. A design engineer should be familiar with the limitations of his employer’s workshop, in order to avoid the necessity of having work done in some other workshop.

It is sometimes necessary to plan and supervise the workshop operations and to draft methods for casting, handling and machining special parts.

10. Number of machines to be manufactured. The number of articles or machines to be manufactured affects the design in a number of ways. The engineering and shop costs which are called fixed charges or overhead expenses are distributed over the number of articles to be manufactured.

If only a few articles are to be made, extra expenses are not justified unless the machine is large or of some special design. An order calling for a small number of the product will not permit any undue expense in the workshop processes, so the designer should restrict his specification to standard parts as much as possible.
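The effect of spreading fixed charges over the production quantity can be sketched numerically. All figures below are hypothetical, chosen only to show how per-unit overhead shrinks as the quantity grows.

```python
# Illustrative sketch (hypothetical figures): fixed engineering and shop
# charges are spread over the number of articles manufactured, so the
# overhead carried by each article falls as the production quantity grows.

def unit_cost(fixed_charges, variable_cost_per_unit, quantity):
    """Total cost per article: its share of overhead plus its direct cost."""
    return fixed_charges / quantity + variable_cost_per_unit

# The same 50,000 of fixed charges, distributed over two production volumes:
small_run = unit_cost(50_000.0, 120.0, 10)      # overhead dominates
large_run = unit_cost(50_000.0, 120.0, 10_000)  # overhead nearly vanishes
```

With these numbers the small run costs 5120 per article against 125 for the large run, which is why a small order cannot justify special tooling.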

11. Cost of construction. The cost of construction of an article is the most important consideration involved in design. In some cases, it is quite possible that the high cost of an article may immediately bar it from further considerations.

If an article has been invented and tests of hand made samples have shown that it has commercial value, it is then possible to justify the expenditure of a considerable sum of money in the design and development of automatic machines to produce the article, especially if it can be sold in large numbers. The aim of the design engineer, under all conditions, should be to reduce the manufacturing cost to the minimum.

12. Assembling. Every machine or structure must be assembled as a unit before it can function. Large units must often be assembled in the shop, tested, and then transported to their place of service.

The final location of any machine is important and the design engineer must anticipate the exact location and the local facilities for erection.


One of the first tasks in solving any machine design problem is to determine the kinematic configuration(s) needed to provide the desired motions. Force and stress analyses typically cannot be done until the kinematic issues have been resolved.

This text addresses the design of kinematic devices such as linkages, cams, and gears. Each of these terms will be fully defined in succeeding chapters, but it may be useful to show some examples of kinematic applications.

You probably have used many of these systems without giving any thought to their kinematics.

Virtually any machine or device that moves contains one or more kinematic elements such as linkages, cams, gears, belts, or chains. Your bicycle is a simple example of a kinematic system that contains a chain drive to provide torque multiplication and simple cable-operated linkages for braking.

An automobile contains many more examples of kinematic devices. Its steering system, wheel suspensions, and piston engine all contain linkages; the engine's valves are opened by cams; and the transmission is full of gears.

Even the windshield wipers are linkage-driven. Figure 1-1a shows a spatial linkage used to control the rear wheel movement of a modern automobile over bumps. Construction equipment such as tractors, cranes, and backhoes all use linkages extensively in their design.

Figure 1-1b shows a small backhoe that is a linkage driven by hydraulic cylinders. Another application using linkages is that of exercise equipment, as shown in Figure 1-1c.

The examples in Figure 1-1 are all consumer goods which you may encounter in your daily travels. Many other kinematic examples occur in the realm of producer goods: machines used to make the many consumer products that we use.

You are less likely to encounter these outside of a factory environment. Once you become familiar with the terms and principles of kinematics, you will no longer be able to look at any machine or product without seeing its kinematic aspects.


Insulations are used to decrease heat flow and to decrease surface temperatures. These materials are found in a variety of forms, typically loose fill, batt, and rigid. Even a gas, like air, can be a good insulator if it can be kept from moving when it is heated or cooled.

A vacuum is an excellent insulator. Usually, though, the engineering approach to insulation is the addition of a low-conducting material to the surface.

Common insulations span many chemical forms, costs, and maximum operating temperatures, but as a general rule, insulations rated for higher operating temperatures also tend to have higher thermal conductivity and higher cost.

Loose-fill insulations include such materials as milled alumina-silica (maximum operating temperature of 1260°C and thermal conductivities in the range of 0.1 to 0.2 W/m·K) and perlite (maximum operating temperature of 980°C and thermal conductivities in the range of 0.05 to 1.5 W/m·K).

Batt-type insulations include one of the more common types — glass fiber. This type of insulation comes in a variety of densities, which, in turn, have a profound effect on the thermal conductivity. Thermal conductivities for glass fiber insulations can range from about 0.03 to 0.06 W/m·K.

Rigid insulations show a very wide range of forms and performance characteristics. For example, a rigid insulation in foam form, polyurethane, is very lightweight, shows a very low thermal conductivity (about 0.02 W/m·K), but has a maximum operating temperature only up to about 120°C.

Rigid insulations in refractory form show quite different characteristics. For example, high-alumina brick is quite dense, has a thermal conductivity of about 2 W/m·K, but can remain operational to temperatures around 1760°C.

Many insulations are characterized in the book edited by Guyer (1989). Often, commercial insulation systems designed for high-temperature operation use a layered approach.

Temperature tolerance may be critical. Perhaps a refractory is applied in the highest temperature region, an intermediate-temperature foam insulation is used in the middle section, and a high-performance, low temperature insulation is used on the outer side near ambient conditions.

Analyses can be performed including the effects of temperature variations of thermal conductivity. However, the most frequent approach is to assume that the thermal conductivity is constant at some temperature between the two extremes experienced by the insulation.
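Under the constant-conductivity assumption just described, a layered system like the one above reduces to thermal resistances in series. The sketch below assumes plane layers and steady one-dimensional conduction; the layer thicknesses and conductivity values are illustrative, loosely drawn from the figures quoted earlier in this section.

```python
# A minimal sketch of steady 1-D conduction through a layered insulation
# system. Each layer is treated with a constant thermal conductivity
# (the usual simplification noted above); plane layers in series add
# thermal resistances R = L/k per unit area.

def heat_flux_through_layers(t_hot, t_cold, layers):
    """layers: list of (thickness_m, k_W_per_mK); returns flux in W/m^2."""
    total_resistance = sum(thickness / k for thickness, k in layers)
    return (t_hot - t_cold) / total_resistance

# Refractory brick on the hot side, foam in the middle, glass fiber on
# the outside (illustrative values):
layers = [(0.10, 2.0),    # high-alumina brick, k ~ 2 W/m K
          (0.05, 0.02),   # polyurethane foam, k ~ 0.02 W/m K
          (0.05, 0.04)]   # glass fiber, k ~ 0.04 W/m K
q = heat_flux_through_layers(400.0, 25.0, layers)  # W/m^2
```

Note how the low-conductivity foam layer dominates the total resistance even though it is the thinnest contribution per unit conductivity; this is why the high-performance material is reserved for the cooler region where its temperature limit is respected.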


Conduction heat transfer phenomena are found throughout virtually all of the physical world and the industrial domain. The analytical description of this heat transfer mode is one of the best understood.

Some of the bases of understanding of conduction date back to early history. It was recognized that by invoking certain relatively minor simplifications, mathematical solutions resulted directly. Some of these were very easily formulated.

What transpired over the years was a very vigorous development of applications to a broad range of processes. Perhaps no single work better summarizes the wealth of these studies than does the book by Carslaw and Jaeger (1959).

They gave solutions to a broad range of problems, from topics related to the cooling of the Earth to the current-carrying capacities of wires. The general analyses given there have been applied to a range of modern-day problems, from laser heating to temperature-control systems.

Today conduction heat transfer is still an active area of research and application. A great deal of interest has developed in recent years in topics like contact resistance, where a temperature difference develops between two solids that do not have perfect contact with each other.

Additional issues of current interest include non-Fourier conduction, where the processes occur so fast that the equation described below does not apply. Also, the problems related to transport in miniaturized systems are garnering a great deal of interest.

Increased interest has also been directed to ways of handling composite materials, where the ability to conduct heat is very directional.

Much of the work in conduction analysis is now accomplished by use of sophisticated computer codes. These tools have given the heat transfer analyst the capability of solving problems in nonhomogeneous media, with very complicated geometries, and with very involved boundary conditions.

It is still important to understand analytical ways of determining the performance of conducting systems. At the minimum these can be used as calibrations for numerical codes.


The basis of conduction heat transfer is Fourier’s Law. This law involves the idea that the heat flux is proportional to the temperature gradient in any direction n. Thermal conductivity, k, a property of materials that is temperature dependent, is the constant of proportionality.
In equation form, for conduction through an area A normal to the direction n, this is

q = -kA ∂T/∂n    (4.1.1)


In many systems the area A is a function of the distance in the direction n. One important extension is that this can be combined with the first law of thermodynamics to yield the heat conduction equation.

For constant thermal conductivity, this is given as
∇²T + q̇_G/k = (1/α) ∂T/∂t    (4.1.2)


In this equation, α is the thermal diffusivity and q̇_G is the internal heat generation per unit volume.

Some problems, typically steady-state, one-dimensional formulations where only the heat flux is desired, can be solved simply from Equation (4.1.1).

Most conduction analyses are performed with Equation (4.1.2). In the latter, a more general approach, the temperature distribution is found from this equation and appropriate boundary conditions.

Then the heat flux, if desired, is found at any location using Equation (4.1.1). Normally, it is the temperature distribution that is of most importance. For example, it may be desirable to know through analysis if a material will reach some critical temperature, like its melting point. Less frequently the heat flux is desired.
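The two-step procedure just described can be sketched for the simplest case, a plane wall with fixed face temperatures, where the steady solution of the conduction equation is linear. The material properties and temperatures below are illustrative, and the standard form of Fourier's law (flux proportional to the temperature gradient) is assumed.

```python
# A minimal sketch: first find the steady 1-D temperature distribution
# in a plane wall, then recover the heat flux from its gradient using
# Fourier's law, q'' = -k dT/dx.

def wall_temperature(x, t_left, t_right, thickness):
    """Steady 1-D temperature profile in a plane wall (linear in x)."""
    return t_left + (t_right - t_left) * x / thickness

def heat_flux(k, t_left, t_right, thickness):
    """Fourier's law applied to the linear profile."""
    return -k * (t_right - t_left) / thickness

# A 2-cm copper plate (k ~ 400 W/m K) held at 100 C and 20 C:
t_mid = wall_temperature(0.01, 100.0, 20.0, 0.02)  # midplane temperature
q = heat_flux(400.0, 100.0, 20.0, 0.02)            # W/m^2, toward cold face
```

Checking the temperature distribution first (here, whether any point exceeds a critical value) mirrors the priority stated above; the flux falls out afterward with one extra line.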

While there are times when it is simply desired to understand what the temperature response of a structure is, the engineer is often faced with a need to increase or decrease heat transfer to some specific level. Examination of the thermal conductivity of materials gives some insight into the range of possibilities that exist through simple conduction.

Of the more common engineering materials, pure copper exhibits one of the higher abilities to conduct heat, with a thermal conductivity approaching 400 W/m·K. Aluminum, also considered to be a good conductor, has a thermal conductivity a little over half that of copper.

To increase the heat transfer above values possible through simple conduction, more-involved designs are necessary that incorporate a variety of other heat transfer modes like convection and phase change.

Decreasing the heat transfer is accomplished with the use of insulation.


The great advantage of steel as an engineering material is its versatility, which arises from the fact that its properties can be controlled and changed by heat treatment. Thus, if steel is to be formed into some intricate shape, it can be made very soft and ductile by heat treatment; on the other hand, heat treatment can also impart high strength.

The physical and mechanical properties of steel depend on its constitution, that is, the nature, distribution, and amounts of its metallographic constituents as distinct from its chemical composition.

The amount and distribution of iron and iron carbide determine the properties, although most plain carbon steels also contain manganese, silicon, phosphorus, sulfur, oxygen, and traces of nitrogen, hydrogen, and other chemical elements such as aluminum and copper.

These elements may modify, to a certain extent, the main effects of iron and iron carbide, but the influence of iron carbide always predominates. This is true even of medium-alloy steels, which may contain considerable amounts of nickel, chromium, and molybdenum.

The iron in steel is called ferrite. In pure iron-carbon alloys, the ferrite consists of iron with a trace of carbon in solution, but in steels it may also contain alloying elements such as manganese, silicon, or nickel. The atomic arrangement in crystals of the allotropic forms of iron is shown in Fig. 2.1.

Cementite, the term for iron carbide in steel, is the form in which carbon appears in steels. It has the formula Fe3C, and consists of 6.67% carbon and 93.33% iron. Little is known about its properties, except that it is very hard and brittle.
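The 6.67% carbon figure quoted above follows directly from the Fe3C formula and the atomic masses of iron and carbon, as the quick check below shows.

```python
# Verifying the composition of cementite (Fe3C) from atomic masses.

ATOMIC_MASS_FE = 55.845   # g/mol
ATOMIC_MASS_C = 12.011    # g/mol

molar_mass_fe3c = 3 * ATOMIC_MASS_FE + ATOMIC_MASS_C
carbon_fraction = ATOMIC_MASS_C / molar_mass_fe3c        # ~0.067, i.e. ~6.7% C
iron_fraction = 3 * ATOMIC_MASS_FE / molar_mass_fe3c     # ~0.933, i.e. ~93.3% Fe
```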

As the hardest constituent of plain carbon steel, it scratches glass and feldspar but not quartz. It exhibits about two-thirds the induction of pure iron in a strong magnetic field.

Austenite is the high-temperature phase of steel. Upon cooling, it gives ferrite and cementite. Austenite is a homogeneous phase, consisting of a solid solution of carbon in the γ form of iron. It forms when steel is heated above 790°C.

The limiting temperatures for its formation vary with composition and are discussed below. The atomic structure of austenite is that of γ iron, fcc; the atomic spacing varies with the carbon content.

When a plain carbon steel of ~ 0.80% carbon content is cooled slowly from the temperature range at which austenite is stable, ferrite and cementite precipitate together in a characteristically lamellar structure known as pearlite. It is similar in its characteristics to a eutectic structure but, since it is formed from a solid solution rather than from a liquid phase, it is known as a eutectoid structure.

At carbon contents above and below 0.80%, pearlite of ~ 0.80% carbon is likewise formed on slow cooling, but excess ferrite or cementite precipitates first, usually as a grain-boundary network, but occasionally also along the cleavage planes of austenite.

The excess ferrite or cementite rejected by the cooling austenite is known as a proeutectoid constituent. The carbon content of a slowly cooled steel can be estimated from the relative amounts of pearlite and proeutectoid constituents in the microstructure.
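The estimate described above can be sketched for a hypoeutectoid steel: if the small amount of carbon dissolved in the proeutectoid ferrite is neglected, the carbon content is approximately the pearlite fraction times the eutectoid composition of ~0.80% C. The microstructure fraction below is illustrative.

```python
# A rough sketch of estimating carbon content from the observed fraction
# of pearlite in a slowly cooled hypoeutectoid steel, neglecting the
# carbon dissolved in the proeutectoid ferrite.

EUTECTOID_CARBON = 0.80  # wt % C in pearlite

def estimate_carbon(pearlite_fraction):
    """Approximate wt % C from the fraction of pearlite in the
    microstructure (hypoeutectoid steels only)."""
    return pearlite_fraction * EUTECTOID_CARBON

# A microstructure showing ~50% pearlite, 50% proeutectoid ferrite:
carbon = estimate_carbon(0.50)  # ~0.40 wt % C
```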

Bainite is a decomposition product of austenite consisting of an aggregate of ferrite and cementite. It forms at temperatures lower than those where very fine pearlite forms and higher than those at which martensite begins to form on cooling.

Metallographically, its appearance is feathery if formed in the upper part of the temperature range, or acicular (needlelike) and resembling tempered martensite if formed in the lower part.

Martensite in steel is a metastable phase formed by the transformation of austenite below the temperature called the Ms temperature, where martensite begins to form as austenite is cooled continuously.

Martensite is an interstitial supersaturated solid solution of carbon in iron with a body-centred tetragonal lattice. Its microstructure is acicular.


Optimization theory finds ready application in all branches of engineering in four primary areas:

1. Design of components or entire systems.
2. Planning and analysis of existing operations.
3. Engineering analysis and data reduction.
4. Control of dynamic systems.

In this section we briefly consider representative applications from the first three areas. In considering the application of optimization methods in design and operations, the reader should keep in mind that the optimization step is but one step in the overall process of arriving at an optimal design or an efficient operation.

Generally, that overall process will, as shown in Fig. 17.1, consist of an iterative cycle involving synthesis or definition of the structure of the system, model formulation, model parameter optimization, and analysis of the resulting solution.

The final optimal design or new operating plan will be obtained only after solving a series of optimization problems, the solution to each of which will have served to generate new ideas for further system structures.

In the interest of brevity, the examples in this section show only one pass of this iterative cycle and focus mainly on preparations for the optimization step. This focus should not be interpreted as an indication of the dominant role of optimization methods in the engineering design and systems analysis process.

Optimization theory is but a very powerful tool that, to be effective, must be used skillfully and intelligently by an engineer who thoroughly understands the system under study. The primary objective of the following example is simply to illustrate the wide variety but common form of the optimization problems that arise in the design and analysis process.

Design Applications
Applications in engineering design range from the design of individual structural members to the design of separate pieces of equipment to the preliminary design of entire production facilities.

For purposes of optimization the shape or structure of the system is assumed to be known, and the optimization problem reduces to the selection of values of the unit dimensions and operating variables that will yield the best value of the selected performance criterion.
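A tiny illustrative instance of this reduced problem: with the structure fixed (say, a closed cylindrical tank) and the performance criterion chosen (surface area, a stand-in for material cost), the optimization selects the unit dimensions. The tank, its volume, and the crude search below are all hypothetical, chosen only to show the common form of such problems.

```python
# Selecting the dimensions of a closed cylindrical tank of fixed volume
# to minimize its surface area, via a crude one-dimensional search.
import math

def surface_area(radius, volume):
    """Closed cylinder of given volume: two ends plus shell."""
    height = volume / (math.pi * radius**2)
    return 2 * math.pi * radius**2 + 2 * math.pi * radius * height

volume = 1.0  # m^3
candidates = [r_mm / 1000 for r_mm in range(100, 1000)]  # radii 0.100..0.999 m
best_r = min(candidates, key=lambda r: surface_area(r, volume))
# The analytical optimum is r = (V / (2 pi))**(1/3) ~ 0.542 m for V = 1.
```

In practice each such solution feeds back into the iterative cycle described above, suggesting new system structures to formulate and optimize.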


The simplest method of lubricating a bearing is to apply grease, because of its relatively nonfluid characteristics. The danger of leakage is reduced, and the housing and enclosure can be simpler and less costly than those used with oil.

Grease can be packed into bearings and retained with inexpensive enclosures, but packing should not be excessive and the manufacturer's recommendations should be closely adhered to.

The major limitation of grease lubrication is that it is not particularly useful in high-speed applications. In general, it is not employed for speed factors over 200,000, although selected greases have been used successfully for higher speed factors with special designs.
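The speed-factor screening implied above can be sketched directly: the speed factor is the bearing bore in millimeters multiplied by the shaft speed in rev/min, and 200,000 is the general grease limit quoted in the text. The bearing in the example is hypothetical.

```python
# Screening a bearing for grease lubrication using the speed factor
# (bore in mm times speed in rev/min), with the general 200,000 limit
# quoted in the text.

GREASE_SPEED_FACTOR_LIMIT = 200_000  # mm * rpm, general-purpose greases

def speed_factor(bore_mm, rpm):
    return bore_mm * rpm

def grease_suitable(bore_mm, rpm):
    """True when an ordinary grease can be considered for this bearing."""
    return speed_factor(bore_mm, rpm) <= GREASE_SPEED_FACTOR_LIMIT

# A 40-mm-bore bearing at 3600 rev/min:
sf = speed_factor(40, 3600)   # 144,000, below the limit
ok = grease_suitable(40, 3600)
```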

Greases vary widely in properties depending on the type and grade or consistency. For this reason few specific recommendations can be made.

Greases used for most bearing operating conditions consist of petroleum, diester, polyester, or silicone oils thickened with sodium or lithium soaps or with more recently developed nonsoap thickeners. General characteristics of greases are as follows:

1. Petroleum oil greases are best for general-purpose operation from -34 to 149°C (-30 to 300°F).
2. Diester oil greases are designed for low-temperature service down to -54°C (-65°F).
3. Ester-based greases are similar to diester oil greases but have better high-temperature characteristics, covering the range from -73 to 177°C (-100 to 350°F).
4. Silicone oil greases are used for both high- and low-temperature operation, over the widest temperature range of all greases [-73 to 232°C (-100 to 450°F)], but have the disadvantage of low load-carrying capacity.
5. Fluorosilicone oil greases have all of the desirable features of silicone oil greases plus good load capacity and resistance to fuels, solvents, and corrosive substances. They have a very low volatility in vacuum down to 10^-7 torr, which makes them useful in aerospace applications.
6. Perfluorinated oil greases have a high degree of chemical inertness and are completely nonflammable.

They have good load-carrying capacity and can operate at temperatures as high as 280°C (550°F) for long periods, which makes them useful in the chemical processing and aerospace industries, where high reliability justifies the additional cost.

Grease consistency is important since grease will slump badly and churn excessively when too soft and fail to lubricate when too hard. Either condition causes improper lubrication, excessive temperature rise, and poor performance and can shorten machine element life.

A valuable guide to the estimation of the useful life of grease in rolling-element bearings has been published by the Engineering Sciences Data Unit.

It has recently been demonstrated by Aihara and Dowson and by Wilson that the film thickness in grease lubricated components can be calculated with adequate accuracy by using the viscosity of the base oil in the elastohydrodynamic equation.

This enables the elastohydrodynamic lubrication film thickness formulas to be applied with confidence to grease-lubricated machine elements.


Except for a few special requirements, petroleum oils satisfy most operating conditions in machine elements. High-quality products, free from adulterants that can have an abrasive or lapping action, are recommended.

Animal or vegetable oils or petroleum oils of poor quality tend to oxidize, to develop acids, and to form sludge or resinlike deposits on the bearing surfaces. They thus penalize bearing performance or endurance.

A composite of recommended lubricant kinematic viscosities at 38°C (100°F) is shown in Fig. 21.5. The ordinate of this figure is the speed factor, which is the bearing bore size measured in millimeters multiplied by the speed in revolutions per minute.

In many rolling-element-bearing applications an oil equivalent to an SAE-10 motor oil [4 x 10^-5 m2/sec, or 40 cS, at 38°C (100°F)] or a light turbine oil is the most frequent choice.
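The viscosity units quoted in this section relate by a fixed factor: one centistoke (cSt) is 1 mm²/s, i.e. 10⁻⁶ m²/s of kinematic viscosity. A one-line conversion sketch:

```python
# Converting kinematic viscosity from centistokes to SI units
# (1 cSt = 1 mm^2/s = 1e-6 m^2/s).

def centistokes_to_si(viscosity_cst):
    """Kinematic viscosity in m^2/s from a value in cSt."""
    return viscosity_cst * 1e-6

nu = centistokes_to_si(40.0)  # 40 cSt -> 4.0e-5 m^2/s
```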

For a number of military applications where the operational requirements span the temperature range -54 to 204°C (-65 to 400°F), synthetic oils are used. Ester lubricants are most frequently employed in this temperature range.

In applications where temperatures exceed 260°C (500°F), most synthetics will quickly break down, and either a solid lubricant (e.g., MoS2) or a polyphenyl ether is recommended. A more detailed discussion of synthetic lubricants can be found in Bisson and Anderson.


Both oils and greases are extensively used as lubricants for all types of machine elements over a wide range of speeds, pressures, and operating temperatures. Frequently, the choice is determined by considerations other than lubrication requirements.

The requirements of the lubricant for successful operation of nonconformal contacts such as in rolling-element bearings and gears are considerably more stringent than those for conformal bearings and therefore will be the primary concern in this section.

Because of its fluidity oil has several advantages over grease: It can enter the loaded conjunction most readily to flush away contaminants, such as water and dirt, and, particularly, to transfer heat from heavily loaded machine elements.

Grease, however, is extensively used because it permits simplified designs of housings and enclosures, which require less maintenance, and because it is more effective in sealing against dirt and contaminants.


Numerous industry studies have clearly illustrated positive cost-benefit advantages of implementing ergonomic programs.

After all, what manufacturer cannot attest to some number of "No Problem Found" (NPF) product returns? A buyer returns a product simply because it "does not work."

Put simply, the product did not fit the user in some manner: perhaps it caused the user discomfort, perhaps he or she just could not figure out how to get the product to work, or perhaps the buyer thought the product was just too complicated or difficult to even try to use.

In all of these scenarios, no fault can be found with the product or design except that it was developed without the user in mind.

Bear in mind that the cost of correcting a poorly designed product increases geometrically throughout the development process. Therefore, human factors specialists should begin working with engineers and designers in the early stages of product development.

When ergonomists are called in to fix a product that has already been sent to market and failed, costs will escalate.

A manufacturer's decision to adopt an ergonomic orientation will serve to reposition its products from a commodity-based supplier to a supplier of high-value products. Integrating ergonomics into a design program ensures more comfortable, safe, and productive design solutions and a better overall product for the end-user.


A widespread increase in the availability of technology in the second half of the twentieth century has meant that more and more people come in contact with a variety of product designs on a daily basis.

Despite this increase in the number and types of human users, many engineers still concentrate their design efforts on the machine or system alone, forcing the user to adjust to fit the product.

Such readjustments on the part of the user can lead to discomfort and dissatisfaction with the design, as well as more serious effects, such as safety hazards and personal injury.

Ergonomics (also called human factors) is an applied science that makes the user central to design by improving the fit between that user and his or her tools, equipment, and environment. Key here is that designs are developed to fit both the physiological and psychological needs of the user.

Ergonomists examine all ranges of the human interface, from static anthropometric measures and movement ranges to users' perceptions of a product. This interface involves both software (displays, electronic controls, etc.) and hardware (knobs, grips, physical configurations, etc.) issues.

Ergonomics grew into a distinct scientific discipline during the Second World War. What began as a form of engineering (human engineering or human factors engineering) has come to encompass a wide range of interdisciplinary professions, including psychology, industrial design, medicine, and computer science.

Its practitioners' focus ranges over concept modeling and product design, job performance analysis, functional analysis, workspace and equipment design, computer interfaces, environment design, and so forth.

• There are a variety of areas for ergonomic analysis:
- manufacturing - reducing worker stress (physiological) can reduce health problems (lost days), decrease product cost, and increase product quality.
- consumer - increasing ease of use can increase the utility of the product.

• Ergonomics is the basis for many design methods such as DFA (Design for Assembly)

• Ergonomics takes into account:
- body proportions
- strength
- desired function

• Non-ergonomic designs typically lead to personal injuries (and hence lawsuits, etc.)

• Typical ergonomic problems in manufacturing are listed below along with possible solutions:

discomfort - unneeded strain on worker (e.g., hunching over)
1. training for proper lifting methods
2. rearrange operation locations and sequence to reduce unnatural motions.

efficiency - unnatural motions slow production
1. training for proper lifting methods
2. rearrange operation locations and sequence to reduce unnatural motions.

cumulative trauma disorders - muscle strain injuries (e.g., lifting 30 lb packages all day)
1. training for proper lifting methods
2. use special lifting equipment

repetitive stress injuries - repeated motions, for example, carpal tunnel syndrome in the wrists.
1. rearrange operation locations and sequence to reduce unnatural motions.
2. use ergonomically redesigned equipment (e.g. computer keyboards)

information overload/confusion - excessive, inappropriate or a lack of detail. (e.g. fighter pilots, air traffic controllers)
1. redesign displays to be clear with a minimum amount of good information
2. use of color coding and pictures
3. simplify controls to minimum needed

eye strain - fine focus or bad lighting
1. adjust lighting
2. use magnifying lenses

noise - direct hearing or annoyance. (e.g., piercing tones, just too noisy)
1. special hearing protection equipment
2. redesign workspace to reduce noise reverberation
3. redesign equipment to reduce sound emissions


Flexible automation is an extension of programmable automation. A flexible automated system is capable of producing a variety of parts (or products) with virtually no time lost for changeovers from one part style to the next. There is no lost production time while reprogramming the system and altering the physical setup (tooling, fixtures, machine settings).

Consequently, the system can produce various combinations and schedules of parts or products instead of requiring that they be made in batches.

What makes flexible automation possible is that the differences between the parts processed by the system are not significant. It is a case of soft variety, so the amount of changeover required between styles is minimal. The features of flexible automation can be summarized as follows:

• high investment for a custom-engineered system
• continuous production of variable mixtures of products
• medium production rate
• flexibility to deal with product design variations

Examples of flexible automation are the flexible manufacturing systems for performing machining operations that date back to the late 1960s.

The three types of automation occupy different relative positions for different production volumes and product varieties. For low production quantities and new product introductions, manual production is competitive with programmable automation.


In programmable automation, the production equipment is designed with the capability to change the sequence of operations to accommodate different product configurations.

The operation sequence is controlled by a program, which is a set of instructions coded so that they can be read and interpreted by the system.

New programs can be prepared and entered into the equipment to produce new products. Some of the features that characterize programmable automation include:

• high investment in general purpose equipment
• lower production rates than fixed automation
• flexibility to deal with variations and changes in product configuration
• most suitable for batch production

Programmable automated production systems are used in low- and medium-volume production. The parts or products are typically made in batches.

To produce each new batch of a different product, the system must be reprogrammed with the set of machine instructions that correspond to the new product.

The physical setup of the machine must also be changed: tools must be loaded, fixtures must be attached to the machine table, and the required machine settings must be entered. This changeover procedure takes time.

Consequently, the typical cycle for a given product includes a period during which the setup and reprogramming take place, followed by a period in which the batch is produced. Examples of programmable automation include numerically controlled (NC) machine tools, industrial robots, and programmable logic controllers.
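The changeover cycle just described (reprogramming and setup, then batch production) can be sketched with a toy calculation; all the numbers below are hypothetical:

```python
# Hypothetical changeover arithmetic for programmable automation: each batch
# pays a fixed setup/reprogramming cost before production begins.
def batch_time(num_ops, batch_size, changeover=30.0, time_per_op=0.5):
    """Total minutes for one batch: changeover plus production."""
    production = batch_size * num_ops * time_per_op
    return changeover + production

# The 30-minute changeover dominates small batches and amortizes over large ones:
print(batch_time(4, 10))    # 50.0 minutes (60% of it changeover)
print(batch_time(4, 500))   # 1030.0 minutes (about 3% changeover)
```

This is why programmable automation favors batch production: the fixed changeover time is spread over every part in the batch.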


Fixed automation is a system in which the sequence of processing (or assembly) operations is fixed by the equipment configuration. Each of the operations in the sequence is usually simple, involving perhaps a plain linear or rotational motion or an uncomplicated combination of the two; for example, the feeding of a rotating spindle.

It is the integration and coordination of many such operations into one piece of equipment that makes the system complex. Typical features of fixed automation are:

• high initial investment for custom-engineered equipment
• high production rates
• relatively inflexible in accommodating product variety

The economic justification for fixed automation is found in products that are produced in very large quantities and at high production rates.

The high initial cost of the equipment can be spread over a very large number of units, thus making the unit cost attractive compared with alternative methods of production.
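The cost-spreading argument can be made concrete with a quick sketch; both dollar figures below are hypothetical, not from the text:

```python
# Hypothetical cost spreading for fixed automation: a large fixed equipment
# cost amortized over the number of units produced.
equipment_cost = 2_000_000.0   # custom-engineered equipment, $ (assumed)
variable_cost = 0.50           # material + labor per unit, $ (assumed)

def unit_cost(n_units):
    return equipment_cost / n_units + variable_cost

print(unit_cost(10_000))       # about 200.5 -- uncompetitive at low volume
print(unit_cost(10_000_000))   # about 0.70 -- attractive at very high volume
```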

Examples of fixed automation include machining transfer lines and automated assembly machines.


Automated manufacturing systems operate in the factory on the physical product. They perform operations such as processing, assembly, inspection, or material handling, in some cases accomplishing more than one of these operations in the same system.

They are called automated because they perform their operations with a reduced level of human participation compared with the corresponding manual process. In some highly automated systems, there is virtually no human participation.

Examples of automated manufacturing systems include:

• automated machine tools that process parts

• transfer lines that perform a series of machining operations

• automated assembly systems

• manufacturing systems that use industrial robots to perform processing or assembly operations

• automatic material handling and storage systems to integrate manufacturing operations

• automatic inspection systems for quality control

Automated manufacturing systems can be classified into three basic types:

(1) fixed automation, (2) programmable automation, and (3) flexible automation.


The same principles used to compute hydrostatic forces on surfaces can be applied to the net pressure force on a completely submerged or floating body. The results are the two laws of buoyancy discovered by Archimedes in the third century B.C.:

1. A body immersed in a fluid experiences a vertical buoyant force equal to the weight of the fluid it displaces.

2. A floating body displaces its own weight in the fluid in which it floats.

These two laws are easily derived by referring to Fig. 2.16. In Fig. 2.16a, the body lies between an upper curved surface 1 and a lower curved surface 2. From Eq. (2.45) for vertical force, the body experiences a net upward force

FB = FV(2) − FV(1)
   = (fluid weight above 2) − (fluid weight above 1)
   = weight of fluid equivalent to body volume

This result is equivalent to law 1 above. The line of action of the buoyant force passes through the center of volume of the displaced body; i.e., its center of mass is computed as if it had uniform density.

Since liquids are relatively heavy, we are conscious of their buoyant forces, but gases also exert buoyancy on any body immersed in them. For example, human beings have an average specific weight of about 60 lbf/ft3.

We may record the weight of a person as 180 lbf and thus estimate the person’s total volume as 3.0 ft3. However, in so doing we are neglecting the buoyant force of the air surrounding the person. At standard conditions, the specific weight of air is 0.0763 lbf/ft3; hence the buoyant force is approximately 0.23 lbf.

If measured in vacuo, the person would weigh about 0.23 lbf more. For balloons and blimps the buoyant force of air, instead of being negligible, is the controlling factor in the design. Also, many flow phenomena, e.g., natural convection of heat and vertical mixing in the ocean, are strongly dependent upon seemingly small buoyant forces.
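The arithmetic in this estimate is easy to verify; a quick sketch in BG units:

```python
# Quick check of the buoyancy numbers above (BG units: lbf, ft).
gamma_person = 60.0    # average specific weight of a person, lbf/ft^3
weight = 180.0         # measured weight, lbf
gamma_air = 0.0763     # specific weight of air at standard conditions, lbf/ft^3

volume = weight / gamma_person   # estimated total body volume, ft^3
buoyant = gamma_air * volume     # Archimedes' law 1: weight of displaced air

print(volume)    # 3.0 ft^3
print(buoyant)   # about 0.23 lbf -- the correction for weighing in vacuo
```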

Floating bodies are a special case; only a portion of the body is submerged, with the remainder poking up out of the free surface. This is illustrated in Fig. 2.17, where the shaded portion is the displaced volume.

Equation (2.49) is modified to apply to this smaller volume:

FB = (γ)(displaced volume) = floating-body weight

Not only does the buoyant force equal the body weight, but the two are also collinear, since there can be no net moments in static equilibrium.
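One immediate consequence of the floating-body relation is that the submerged volume fraction of a floating body equals the ratio of body density to fluid density. A sketch with assumed densities for ice floating in seawater:

```python
# Floating body: gamma_fluid * (displaced volume) = body weight, so the
# submerged volume fraction is rho_body / rho_fluid. Densities are assumed
# typical values, not from the text.
rho_ice = 917.0    # kg/m^3
rho_sea = 1025.0   # kg/m^3

submerged_fraction = rho_ice / rho_sea
print(submerged_fraction)   # about 0.89 -- most of an iceberg rides below the surface
```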


The simplest practical application of the hydrostatic formula is the barometer, which measures atmospheric pressure. 

A tube is filled with mercury and inverted while submerged in a reservoir. This causes a near vacuum in the closed upper end because mercury has an extremely small vapor pressure at room temperatures (0.16 Pa at 20°C). 

A barometer measures local absolute atmospheric pressure: (a) the height of a mercury column is proportional to patm; (b) a modern portable barometer, with digital readout, uses the resonating silicon element. (Courtesy of Paul Lupke, Druck Inc.)

Since atmospheric pressure forces a mercury column to rise a distance h into the tube, the upper mercury surface is at zero pressure.

At sea-level standard, with pa = 101,350 Pa and γ = 133,100 N/m³ for mercury from Table 2.1, the barometric height is h = 101,350/133,100 = 0.761 m, or 761 mm.

In the United States the weather service reports this as an atmospheric “pressure’’ of 29.96 inHg (inches of mercury). 

Mercury is used because it is the heaviest common liquid. A water barometer would be 34 ft high.
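Both barometric heights follow directly from h = patm/γ; a quick check using the values quoted above (the specific weight of water at 20°C is taken as about 9790 N/m³, an assumed standard value):

```python
# Barometric height h = p_atm / gamma for mercury and for water (SI units).
p_atm = 101350.0           # Pa, sea-level standard
gamma_mercury = 133100.0   # N/m^3 (Table 2.1)
gamma_water = 9790.0       # N/m^3 at 20 C (assumed value)

h_mercury = p_atm / gamma_mercury   # about 0.761 m = 761 mm
h_water = p_atm / gamma_water       # about 10.35 m, i.e., roughly 34 ft

print(h_mercury, h_water)
```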


Like most scientific disciplines, fluid mechanics has a history of erratically occurring early achievements, then an intermediate era of steady fundamental discoveries in the eighteenth and nineteenth centuries, leading to the twentieth-century era of “modern practice,” as we self-centeredly term our limited but up-to-date knowledge.

Ancient civilizations had enough knowledge to solve certain flow problems. Sailing ships with oars and irrigation systems were both known in prehistoric times. The Greeks produced quantitative information.

Archimedes and Hero of Alexandria both postulated the parallelogram law for addition of vectors in the third century B.C. Archimedes (285–212 B.C.) formulated the laws of buoyancy and applied them to floating and submerged bodies, actually deriving a form of the differential calculus as part of the analysis.

The Romans built extensive aqueduct systems in the fourth century B.C. but left no records showing any quantitative knowledge of design principles.

From the birth of Christ to the Renaissance there was a steady improvement in the design of such flow systems as ships and canals and water conduits but no recorded evidence of fundamental improvements in flow analysis.

Then Leonardo da Vinci (1452–1519) derived the equation of conservation of mass in one-dimensional steady flow. Leonardo was an excellent experimentalist, and his notes contain accurate descriptions of waves, jets, hydraulic jumps, eddy formation, and both low-drag (streamlined) and high-drag (parachute) designs.

A Frenchman, Edme Mariotte (1620–1684), built the first wind tunnel and tested models in it. Problems involving the momentum of fluids could finally be analyzed after Isaac Newton (1642–1727) postulated his laws of motion and the law of viscosity of the linear fluids now called newtonian.

The theory first yielded to the assumption of a “perfect” or frictionless fluid, and eighteenth-century mathematicians (Daniel Bernoulli, Leonhard Euler, Jean d’Alembert, Joseph-Louis Lagrange, and Pierre-Simon Laplace) produced many beautiful solutions of frictionless-flow problems.

Euler developed both the differential equations of motion and their integrated form, now called the Bernoulli equation. D’Alembert used them to show his famous paradox: that a body immersed in a frictionless fluid has zero drag. These beautiful results amounted to overkill, since perfect-fluid assumptions have very limited application in practice and most engineering flows are dominated by the effects of viscosity.

Engineers began to reject what they regarded as a totally unrealistic theory and developed the science of hydraulics, relying almost entirely on experiment. Such experimentalists as Chézy, Pitot, Borda, Weber, Francis, Hagen, Poiseuille, Darcy, Manning, Bazin, and Weisbach produced data on a variety of flows such as open channels, ship resistance, pipe flows, waves, and turbines. All too often the data were used in raw form without regard to the fundamental physics of flow.

At the end of the nineteenth century, unification between experimental hydraulics and theoretical hydrodynamics finally began. William Froude (1810–1879) and his son Robert (1846–1924) developed laws of model testing, Lord Rayleigh (1842–1919) proposed the technique of dimensional analysis, and Osborne Reynolds (1842–1912) published the classic pipe experiment in 1883 which showed the importance of the dimensionless Reynolds number named after him.

Meanwhile, viscous-flow theory was available but unexploited, since Navier (1785–1836) and Stokes (1819–1903) had successfully added newtonian viscous terms to the equations of motion. The resulting Navier-Stokes equations were too difficult to analyze for arbitrary flows. Then, in 1904, a German engineer, Ludwig Prandtl (1875–1953), published perhaps the most important paper ever written on fluid mechanics. Prandtl pointed out that fluid flows with small viscosity, e.g., water flows and airflows, can be divided into a thin viscous layer, or boundary layer, near solid surfaces and interfaces, patched onto a nearly inviscid outer layer, where the Euler and Bernoulli equations apply.

Boundary-layer theory has proved to be the single most important tool in modern flow analysis. The twentieth-century foundations for the present state of the art in fluid mechanics were laid in a series of broad-based experiments and theories by Prandtl and his two chief friendly competitors, Theodore von Kármán (1881–1963) and Sir Geoffrey I. Taylor (1886–1975).

Since the earth is 75 percent covered with water and 100 percent covered with air, the scope of fluid mechanics is vast and touches nearly every human endeavor. The sciences of meteorology, physical oceanography, and hydrology are concerned with naturally occurring fluid flows, as are medical studies of breathing and blood circulation.

All transportation problems involve fluid motion, with well-developed specialties in aerodynamics of aircraft and rockets and in naval hydrodynamics of ships and submarines.

Almost all our electric energy is developed either from water flow or from steam flow through turbine generators. All combustion problems involve fluid motion, as do the more classic problems of irrigation, flood control, water supply, sewage disposal, projectile motion, and oil and gas pipelines.

The aim of this book is to present enough fundamental concepts and practical applications in fluid mechanics to prepare you to move smoothly into any of these specialized fields of the science of flow—and then be prepared to move out again as new technologies develop.


Generally, the first thing a fluids engineer should do is estimate the Reynolds number range of the flow under study. Very low Re indicates viscous creeping motion, where inertia effects are negligible. Moderate Re implies a smoothly varying laminar flow.

High Re probably spells turbulent flow, which is slowly varying in the time-mean but has superimposed strong random high-frequency fluctuations. Explicit numerical values for low, moderate, and high Reynolds numbers cannot be stated here.

If the viscosity is divided by the density to form the kinematic viscosity ν = μ/ρ, the pecking order changes considerably, and mercury, the heaviest, has the smallest viscosity relative to its own weight.

All gases have high ν relative to thin liquids such as gasoline, water, and alcohol. Oil and glycerin still have the highest ν, but the ratio is smaller.

For a given value of V and L in a flow, these fluids exhibit a spread of four orders of magnitude in the Reynolds number.
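This spread can be checked directly from Re = ρVL/μ; the property values below are approximate textbook values at 1 atm and 20°C (assumed, not quoted from the text):

```python
# Reynolds number Re = rho * V * L / mu for the same V and L in four fluids.
V, L = 1.0, 1.0   # velocity (m/s) and length (m) -- an arbitrary common scale

fluids = {
    # name: (density rho in kg/m^3, viscosity mu in kg/(m*s)) -- approximate
    "mercury":  (13550.0, 1.56e-3),
    "water":    (998.0,   1.0e-3),
    "air":      (1.20,    1.8e-5),
    "glycerin": (1260.0,  1.49),
}

Re = {name: rho * V * L / mu for name, (rho, mu) in fluids.items()}
for name, value in Re.items():
    print(f"{name:9s} Re = {value:.3g}")
```

Mercury comes out near 10^7 and glycerin near 10^3, a spread of roughly four orders of magnitude for the same V and L.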


Table 1.4 lists the viscosity and kinematic viscosity of eight fluids at 1 atm and 20°C. The eight fluids are:

Hydrogen
Air
Gasoline
Water
Ethyl alcohol
Mercury
SAE 30 oil
Glycerin


The quantities such as pressure, temperature, and density discussed in the previous section are primary thermodynamic variables characteristic of any system. There are also certain secondary variables which characterize specific fluid-mechanical behavior.

The most important of these is viscosity, which relates the local stresses in a moving fluid to the strain rate of the fluid element.

When a fluid is sheared, it begins to move at a strain rate inversely proportional to a property called its coefficient of viscosity μ. Consider a fluid element sheared in one plane by a single shear stress τ, as in Fig. 1.4a.

The shear strain angle θ will continuously grow with time as long as the stress is maintained, the upper surface moving at speed δu larger than the lower. Such common fluids as water, oil, and air show a linear relation between applied shear and resulting strain rate:

τ = μ (dθ/dt) = μ (du/dy)    (1.23)

Equation (1.23) is dimensionally consistent; therefore μ has dimensions of stress-time: {FT/L2} or {M/(LT)}. The BG unit is slugs per foot-second, and the SI unit is kilograms per meter-second. The linear fluids which follow Eq. (1.23) are called newtonian fluids, after Sir Isaac Newton, who first postulated this resistance law in 1687.
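For the linear (newtonian) law, a thin film sheared by a moving plate gives a constant strain rate du/dy = U/h. A minimal sketch, with an approximate water viscosity and assumed plate speed and film thickness:

```python
# Newtonian shear: tau = mu * (du/dy). For a plate moving at speed U over a
# stationary wall with a thin fluid film of thickness h, du/dy = U / h.
mu_water = 1.0e-3   # approximate viscosity of water at 20 C, kg/(m*s)
U = 0.5             # plate speed, m/s (assumed)
h = 1.0e-3          # film thickness, m (assumed)

strain_rate = U / h            # du/dy in 1/s
tau = mu_water * strain_rate   # shear stress in Pa
print(tau)                     # 0.5 Pa
```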

The viscosity of newtonian fluids is a true thermodynamic property and varies with
temperature and pressure. At a given state (p, T) there is a vast range of values among the common fluids. Table 1.4 lists the viscosity of eight fluids at standard pressure and temperature.

There is a variation of six orders of magnitude from hydrogen up to glycerin. Thus there will be wide differences between fluids subjected to the same applied stresses.

Generally speaking, the viscosity of a fluid increases only weakly with pressure. For example, increasing p from 1 to 50 atm will increase μ of air by only 10 percent. Temperature, however, has a strong effect, with μ increasing with T for gases and decreasing with T for liquids. Figure A.1 (in App. A) shows this temperature variation for various common fluids. It is customary in most engineering work to neglect the pressure variation.
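For gases, the increase of μ with T is often fitted by Sutherland's law; the sketch below uses commonly quoted reference constants for air, which are an assumption here rather than values from the text:

```python
# Sutherland's law for the viscosity of air as a function of temperature.
# Reference constants (assumed, commonly quoted): mu0 at T0 = 273.15 K,
# Sutherland constant S = 110.4 K.
def mu_air(T_kelvin):
    mu0, T0, S = 1.716e-5, 273.15, 110.4   # kg/(m*s), K, K
    return mu0 * (T_kelvin / T0) ** 1.5 * (T0 + S) / (T_kelvin + S)

print(mu_air(293.15))   # roughly 1.8e-5 kg/(m*s) at 20 C
print(mu_air(373.15))   # larger value: mu increases with T for gases
```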

The variation μ(p, T) for a typical fluid is nicely shown by Fig. 1.5, from Ref. 14, which normalizes the data with the critical-point state (μc, pc, Tc). This behavior, called the principle of corresponding states, is characteristic of all fluids, but the actual numerical values are uncertain to ±20 percent for any given fluid.


From the point of view of fluid mechanics, all matter consists of only two states, fluid and solid. The difference between the two is perfectly obvious to the layperson, and it is an interesting exercise to ask a layperson to put this difference into words.

The technical distinction lies with the reaction of the two to an applied shear or tangential stress.

A solid can resist a shear stress by a static deformation; a fluid cannot. Any shear stress applied to a fluid, no matter how small, will result in motion of that fluid. The fluid moves and deforms continuously as long as the shear stress is applied.

As a corollary, we can say that a fluid at rest must be in a state of zero shear stress, a state often called the hydrostatic stress condition in structural analysis. In this condition, Mohr’s circle for stress reduces to a point, and there is no shear stress on any plane cut through the element under stress.

Given the definition of a fluid above, every layperson also knows that there are two classes of fluids, liquids and gases. Again the distinction is a technical one concerning the effect of cohesive forces. A liquid, being composed of relatively close-packed molecules with strong cohesive forces, tends to retain its volume and will form a free surface in a gravitational field if unconfined from above.

Free-surface flows are dominated by gravitational effects. Since gas molecules are widely spaced with negligible cohesive forces, a gas is free to expand until it encounters confining walls. A gas has no definite volume, and when left to itself without confinement, a gas forms an atmosphere which is essentially hydrostatic.

Gases cannot form a free surface, and thus gas flows are rarely concerned with gravitational effects other than buoyancy.

Figure 1.1 illustrates a solid block resting on a rigid plane and stressed by its own weight. The solid sags into a static deflection, shown as a highly exaggerated dashed line, resisting shear without flow. A free-body diagram of element A on the side of the block shows that there is shear in the block along a plane cut at an angle through A.

Since the block sides are unsupported, element A has zero stress on the left and right sides and compression stress p on the top and bottom. Mohr’s circle does not reduce to a point, and there is nonzero shear stress in the block.

By contrast, the liquid and gas at rest in Fig. 1.1 require the supporting walls in order to eliminate shear stress. The walls exert a compression stress of p and reduce Mohr’s circle to a point with zero shear everywhere, i.e., the hydrostatic condition.

The liquid retains its volume and forms a free surface in the container. If the walls are removed, shear develops in the liquid and a big splash results. If the container is tilted, shear again develops, waves form, and the free surface seeks a horizontal configuration, pouring out over the lip if necessary.

Meanwhile, the gas is unrestrained and expands out of the container, filling all available space. Element A in the gas is also hydrostatic and exerts a compression stress p on the walls.

In the above discussion, clear decisions could be made about solids, liquids, and gases. Most engineering fluid-mechanics problems deal with these clear cases, i.e., the common liquids, such as water, oil, mercury, gasoline, and alcohol, and the common gases, such as air, helium, hydrogen, and steam, in their common temperature and pressure ranges.

There are many borderline cases, however, of which you should be aware. Some apparently “solid” substances such as asphalt and lead resist shear stress for short periods but actually deform slowly and exhibit definite fluid behavior over long periods.

Other substances, notably colloid and slurry mixtures, resist small shear stresses but “yield” at large stress and begin to flow as fluids do. Specialized textbooks are devoted to this study of more general deformation and flow, a field called rheology.

Also, liquids and gases can coexist in two-phase mixtures, such as steam-water mixtures or water with entrapped air bubbles. Specialized textbooks present the analysis of such two-phase flows. Finally, there are situations where the distinction between a liquid and a gas blurs.

This is the case at temperatures and pressures above the so-called critical point of a substance, where only a single phase exists, primarily resembling a gas. As pressure increases far above the critical point, the gaslike substance becomes so dense that there is some resemblance to a liquid and the usual thermodynamic approximations like the perfect-gas law become inaccurate.

The critical temperature and pressure of water are Tc = 647 K and pc = 219 atm, so that typical problems involving water and steam are below the critical point. Air, being a mixture of gases, has no distinct critical point, but its principal component, nitrogen, has Tc = 126 K and pc = 34 atm. Thus typical problems involving air are in the range of high temperature and low pressure where air is distinctly and definitely a gas.


Fluid mechanics is the study of fluids either in motion (fluid dynamics) or at rest (fluid statics) and the subsequent effects of the fluid upon the boundaries, which may be either solid surfaces or interfaces with other fluids.

Both gases and liquids are classified as fluids, and the number of fluids engineering applications is enormous: breathing, blood flow, swimming, pumps, fans, turbines, airplanes, ships, rivers, windmills, pipes, missiles, icebergs, engines, filters, jets, and sprinklers, to name a few. When you think about it, almost everything on this planet either is a fluid or moves within or near a fluid.

The essence of the subject of fluid flow is a judicious compromise between theory and experiment. Since fluid flow is a branch of mechanics, it satisfies a set of well documented basic laws, and thus a great deal of theoretical treatment is available. However, the theory is often frustrating, because it applies mainly to idealized situations which may be invalid in practical problems.

The two chief obstacles to a workable theory are geometry and viscosity. The basic equations of fluid motion are too difficult to enable the analyst to attack arbitrary geometric configurations. Thus most textbooks concentrate on flat plates, circular pipes, and other easy geometries.

It is possible to apply numerical computer techniques to complex geometries, and specialized textbooks are now available to explain the new computational fluid dynamics (CFD) approximations and methods.

The second obstacle to a workable theory is the action of viscosity, which can be neglected only in certain idealized flows. First, viscosity increases the difficulty of the basic equations, although the boundary-layer approximation found by Ludwig Prandtl in 1904 has greatly simplified viscous-flow analyses.

Second, viscosity has a destabilizing effect on all fluids, giving rise, at frustratingly small velocities, to a disorderly, random phenomenon called turbulence. The theory of turbulent flow is crude and heavily backed up by experiment (Chap. 6), yet it can be quite serviceable as an engineering estimate. Textbooks now present digital-computer techniques for turbulent-flow analysis [32], but they are based strictly upon empirical assumptions regarding the time mean of the turbulent stress field.

Thus there is theory available for fluid-flow problems, but in all cases it should be backed up by experiment. Often the experimental data provide the main source of information about specific flows, such as the drag and lift of immersed bodies.

Fortunately, fluid mechanics is a highly visual subject, with good instrumentation, and the use of dimensional analysis and modeling concepts is widespread.

Thus experimentation provides a natural and easy complement to the theory. You should keep in mind that theory and experiment should go hand in hand in all studies of fluid mechanics.


In the following, we will give some examples of finite element applications. The range of applications of finite elements is too large to list exhaustively, but to provide an idea of the method's versatility we note the following:

a. stress and thermal analyses of industrial parts such as electronic chips, electric devices, valves, pipes, pressure vessels, automotive engines and aircraft;
b. seismic analysis of dams, power plants, cities and high-rise buildings;
c. crash analysis of cars, trains and aircraft;
d. fluid flow analysis of coolant ponds, pollutants and contaminants, and air in ventilation systems;
e. electromagnetic analysis of antennas, transistors and aircraft signatures;
f. analysis of surgical procedures such as plastic surgery, jaw reconstruction, correction of scoliosis and many others.

This is a very short list that is just intended to give you an idea of the breadth of application areas for the method. New areas of application are constantly emerging. Thus, in the past few years, the medical community has become very excited about the possibilities of predictive, patient-specific medicine.

One approach in predictive medicine aims to use medical imaging and monitoring data to construct a model of a part of an individual’s anatomy and physiology. The model is then used to predict the patient’s response to alternative treatments, such as surgical procedures.

For example, Figure 1.3(a) shows a hand wound and a finite element model. The finite element model can be used to plan the surgical procedure to optimize the stitches.

Heart models, such as shown in Figure 1.3(b), are still primarily topics of research, but it is envisaged that they will be used to design valve replacements and many other surgical procedures. Another area in which finite elements have long been used is the design of prostheses, such as shown in Figure 1.3(c).

Most prosthesis designs are still generic, i.e. a single prosthesis is designed for all patients with some variations in sizes. However, with predictive medicine, it will be possible to analyze characteristics of a particular patient such as gait, bone structure and musculature and custom-design an optimal prosthesis.

FEA of structural components has substantially reduced design cycle times and enhanced overall product quality. For example, in the auto industry, linear FEA is used for acoustic analysis to reduce interior noise, for analysis of vibrations, for improving comfort, for optimizing the stiffness of the chassis, for increasing the fatigue life of suspension components, for designing the engine so that temperatures and stresses are acceptable, and for many other tasks.

Nonlinear FEA is used for crash analysis with models of both the car and its occupants; a finite element model for crash analysis is shown in Figure 1.4(a), and a finite element model for stiffness prediction is shown in Figure 1.4(c). Notice the tremendous detail in the latter; these models still require hundreds of man-hours to develop. The payoff for such modeling is that the number of prototypes required in the design process can be reduced significantly.

Figure 1.4(b) shows a finite element model of an aircraft. In the design of aircraft, it is imperative that the stresses incurred from thousands of loads, some very rare, some repetitive, do not lead to catastrophic or fatigue failure. Prior to the availability of FEA, such design relied heavily on an evolutionary process (basing new designs on old designs), as tests for all of the loads are not practical. With FEA, it has become possible to make much larger changes in airframe design, such as the shift to composites.

In a completely different vein, finite elements also play a large role in environmental decision making and hazard mitigation. For example, Figure 1.5 is a visualization of the dispersal of a chemical aerosol in the middle of Atlanta obtained by FEA; the aerosol concentration is depicted by color, with the highest concentration in red.

Note that the complex topography of this area due to the high-rise buildings, which is crucial to determining the dispersal, can be treated in great detail by this analysis. Other areas of hazard mitigation in which FEA offers great possibilities are the modeling of earthquakes and seismic building response, which is being used to improve the seismic resistance of buildings, the modeling of wind effects on structures, and the dispersal of heat from power plant discharges.

The advection–diffusion equation can also be used to model drug dispersal in the human body. Of course, the application of these equations to these different topics involves extensive modeling, which is the value added by engineers with experience and knowledge, and constitutes the topic of validation.


Mechanical engineering is one of the cornerstones of the engineering profession. Engineering design in itself is a complex undertaking, requiring many skills; extensive problems must be subdivided into a series of simpler tasks.

The complexity of the subject requires a sequence in which ideas are introduced and iterated. Design is an iterative process with many interactive phases. Many resources exist to support the designer, including many sources of information and an abundance of computational design tools. 

The design engineer needs not only to develop competence in their field but must also cultivate a strong sense of responsibility and professional work ethic.

Mechanical engineers are associated with the production and processing of energy and with providing the means of production, the tools of transportation, and the techniques of automation. The skill and knowledge base are extensive. 

Among the disciplinary bases are mechanics of solids and fluids, mass and momentum transport, manufacturing processes, and electrical and information theory. Mechanical engineering design involves all the disciplines of mechanical engineering.

Real problems resist compartmentalization. A simple journal bearing involves fluid flow, heat transfer, friction, energy transport, material selection, thermomechanical treatments, statistical descriptions, and so on. 

A building is environmentally controlled. The heating, ventilation, and air-conditioning considerations are sufficiently specialized that some speak of heating, ventilating, and air-conditioning design as if it is separate and distinct from mechanical engineering design. 

Similarly, internal-combustion engine design, turbomachinery design, and jet-engine design are sometimes considered discrete entities. Here, the leading string of words preceding the word design is merely a product descriptor. 

Similarly, there are phrases such as machine design, machine-element design, machine-component design, systems design, and fluid-power design. 

All of these phrases are somewhat more focused examples of mechanical engineering design.




If you wish to disable cookies, you may do so through your individual browser options. More detailed information about cookie management with specific web browsers can be found at the browsers' respective websites.
free counters