Unit - 4
Measurements and Control System
Measurement:
The process or act of measurement consists of obtaining a quantitative comparison between a predefined standard and a measurand. The term measurand designates the particular physical parameter being observed and quantified: it is the input quantity of the measuring process.
Measurement provides a comparison between what was intended and what was actually achieved. Measurement is also a fundamental element of any control process. The concept of control requires the measured discrepancy between the actual and the desired performances. The controlling portion of the system must know the magnitude and direction of the difference in order to react intelligently. Temperature, flows, pressure and vibrational amplitudes must be constantly monitored by measurement to ensure proper performance of the system.
To be useful, measurement must be reliable. Having incorrect information is potentially more damaging than having no information.
The input to a measuring system is known as measurand, the output is called measurement.
Standard: A physical representation of a unit of measurement; a piece of equipment having known measure of physical quantity.
Fundamental methods of measurement:
There are two basic methods of measurement
1. Direct comparison with either a primary or a secondary standard and
2. Indirect comparison through the use of a calibrated system.
Direct and Indirect measurements:
Measurement is a process of comparison of the physical quantity with a reference standard.
1. Direct Measurements:
The value of the physical parameter (measurand) is determined by comparing it directly with reference standards. The physical quantities like mass, length and time are measured by direct comparison.
Types of Errors in Measurement
Errors may arise from different sources and are usually classified into the following types, explained in detail below.
1. Gross Errors
Gross errors occur because of human mistakes: for example, the person using the instrument takes a wrong reading or records incorrect data. Such errors can only be avoided by taking readings carefully.
For example, the experimenter reads 31.5ºC while the actual reading is 21.5ºC. This happens because of an oversight: the experimenter takes the wrong reading, and an error enters the measurement.
Such errors are very common in measurement, and their complete elimination is not possible. Some gross errors are easily detected by the experimenter, but others are difficult to find. Two methods can reduce gross error:
1. The readings should be taken carefully.
2. Two or more readings should be taken, preferably by different experimenters, so that discrepancies show up.
2. Systematic Errors
The systematic errors are mainly classified into three categories.
2 (i) Instrumental Errors
These errors mainly arise due to the three main reasons.
(a) Inherent Shortcomings of Instruments – Such errors are inbuilt in instruments because of their mechanical structure. They may be due to the manufacture, calibration or operation of the device, and may cause the instrument to read too low or too high.
For example – if the instrument uses a weak spring, it gives a high value for the measured quantity. Errors also occur in an instrument because of friction or hysteresis loss.
(b) Misuse of Instrument – The error occurs because of a fault of the operator: a good instrument used in an unintelligent way may give erroneous results.
For example, misuse of an instrument may include failure to adjust its zero, poor initial adjustment, or using leads of too high a resistance. These improper practices may not cause permanent damage to the instrument, but all the same they cause errors.
(c) Loading Effect – This is the most common type of error caused by the instrument itself in measurement work. For example, a voltmeter connected across a high-resistance circuit gives a misleading reading, whereas connected across a low-resistance circuit it gives a dependable one: the voltmeter has a loading effect on the circuit.
The error caused by the loading effect can be overcome by using the meters intelligently. For example, when measuring a low resistance by the ammeter-voltmeter method, a voltmeter having a very high resistance should be used.
2 (ii) Environmental Errors
These errors are due to the external conditions of the measuring device, mainly the effects of temperature, pressure, humidity, dust, vibration, or magnetic and electrostatic fields. Corrective measures employed to eliminate or reduce these undesirable effects include keeping the ambient conditions as constant as possible, using equipment immune to such effects, and applying computed corrections to the readings.
2 (iii) Observational Errors
Such errors are due to wrong observation of a reading, and they have many sources. For example, the pointer of a voltmeter rests slightly above the surface of the scale; an error occurs (because of parallax) unless the line of vision of the observer is exactly above the pointer. To minimise parallax error, highly accurate meters are provided with mirrored scales.
3. Random Errors
Errors caused by sudden changes in atmospheric conditions are called random errors. They remain even after the removal of systematic errors, and so are also called residual errors.
Calibration of a measuring instrument is the process in which the readings obtained from the instrument are compared with sub-standards in the laboratory at several points along the scale of the instrument. From the readings of the instrument and the sub-standard, a curve is plotted. If the instrument is accurate, the scale of the instrument and the sub-standard will match. If the measured value deviates from the standard value, the instrument is calibrated to give the correct values.
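The comparison described above can be sketched numerically: readings taken at several points along the scale are compared with the sub-standard, and the deviation at each point gives the correction to apply. All values below are illustrative.

```python
# Calibration sketch: compare instrument readings with a laboratory
# sub-standard at several points along the scale (illustrative values).
standard = [0.0, 25.0, 50.0, 75.0, 100.0]    # sub-standard values
instrument = [0.2, 25.6, 50.9, 76.1, 101.4]  # instrument readings

# Correction at each point = standard value - instrument reading;
# plotting corrections against readings gives the calibration curve.
corrections = [s - i for s, i in zip(standard, instrument)]

for s, i, c in zip(standard, instrument, corrections):
    print(f"standard={s:6.1f}  reading={i:6.1f}  correction={c:+.2f}")
```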
What Is Pressure?
Pressure is defined as force per unit area that a fluid exerts on its surroundings. Pressure, P, is a function of force, F, and area, A:
P = F/A
The SI unit for pressure is the pascal (N/m²), but other common units of pressure include pounds per square inch (psi), atmospheres (atm), bars, inches of mercury (in. Hg), millimetres of mercury (mm Hg), and torr.
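A short sketch of P = F/A together with the unit conversions listed above; the force and area values are illustrative, and the conversion factors are the standard definitions.

```python
# Pressure from force and area, P = F / A, then in common units.
force_n = 500.0    # applied force, N (illustrative)
area_m2 = 0.01     # area, m^2 (illustrative)

p_pa = force_n / area_m2        # pascals (N/m^2)
p_psi = p_pa / 6894.757         # pounds per square inch
p_atm = p_pa / 101325.0         # standard atmospheres
p_bar = p_pa / 1e5              # bars
p_mmhg = p_pa / 133.322         # mm of mercury (1 mmHg ~ 1 torr)

print(p_pa)   # 50000.0
```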
A pressure measurement can be described as either static or dynamic. The pressure in cases with no motion is static pressure. Examples of static pressure include the pressure of the air inside a balloon or water inside a basin. Often, the motion of a fluid changes the force applied to its surroundings. For example, say the pressure of water in a hose with the nozzle closed is 40 pounds per square inch (force per unit area). If you open the nozzle, the pressure drops to a lower value as you pour out water. A thorough pressure measurement must note the circumstances under which it is made. Many factors including flow, compressibility of the fluid, and external forces can affect pressure.
Pressure measurement methods
A pressure measurement can further be described by the type of measurement being performed. The three methods for measuring pressure are absolute, gauge, and differential. Absolute pressure is referenced to the pressure in a vacuum, whereas gauge and differential pressures are referenced to another pressure such as the ambient atmospheric pressure or pressure in an adjacent vessel.
Absolute Pressure
The absolute measurement method is relative to 0 Pa, the static pressure in a vacuum. The pressure being measured is acted upon by atmospheric pressure in addition to the pressure of interest. Therefore, absolute pressure measurement includes the effects of atmospheric pressure. This type of measurement is well-suited for atmospheric pressures such as those used in altimeters or vacuum pressures. Often, the abbreviations Paa (Pascal’s absolute) or psia (pounds per square inch absolute) are used to describe absolute pressure.
Gauge Pressure
Gauge pressure is measured relative to ambient atmospheric pressure. This means that both the reference and the pressure of interest are acted upon by atmospheric pressures. Therefore, gauge pressure measurement excludes the effects of atmospheric pressure. These types of measurements include tire pressure and blood pressure measurements. Similar to absolute pressure, the abbreviations Pag (Pascal’s gauge) or psig (pounds per square inch gauge) are used to describe gauge pressure.
Differential Pressure
Differential pressure is similar to gauge pressure; however, the reference is another pressure point in the system rather than the ambient atmospheric pressure. You can use this method to maintain relative pressure between two vessels such as a compressor tank and an associated feed line. Also, the abbreviations Pad (Pascal’s differential) or PSID (pounds per square inch differential) are used to describe differential pressure.
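The three methods differ only in their reference point, which a small sketch makes explicit; the atmospheric value used is the standard 101325 Pa, and the tyre-pressure figure is illustrative.

```python
# absolute = gauge + atmospheric; differential = p1 - p2.
P_ATM = 101325.0   # standard atmospheric pressure, Pa

def gauge_to_absolute(p_gauge_pa, p_atm_pa=P_ATM):
    """Absolute pressure includes the atmospheric contribution."""
    return p_gauge_pa + p_atm_pa

def differential(p1_pa, p2_pa):
    """Pressure difference between two points in a system."""
    return p1_pa - p2_pa

# A tyre gauge reading of 220 kPa (gauge) as an absolute pressure:
print(gauge_to_absolute(220e3))   # 321325.0
```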
Temperature Measurement
Temperature is a measure of the hotness or coldness of a body. The SI unit of temperature is the kelvin (K); the Celsius and Fahrenheit scales are also used. The instrument used for measuring temperature is a thermometer.
Devices for Temperature Measurement
Thermometer
A thermometer is an instrument used for measuring the temperature of a body. The first device to measure temperature was invented by Galileo: a rudimentary water thermometer, built in 1593, which he named the "thermoscope". The instrument was ineffective because water freezes at low temperatures.
In 1714, Gabriel Fahrenheit invented the mercury thermometer. A mercury thermometer consists of a long, narrow, uniform glass tube called the stem. The scales on which temperature is read are marked on the stem. At the end of the stem there is a small bulb containing mercury. Inside the glass stem is a capillary tube, into which the mercury expands when the thermometer comes in contact with a hot body.
Mercury is toxic, and it is difficult to dispose of when a thermometer breaks. Nowadays digital thermometers, which contain no mercury, are in common use. The two commonly used thermometers are clinical thermometers and laboratory thermometers.
Clinical Thermometer
These thermometers are used in homes, clinics, and hospitals. They contain a kink that prevents the mercury from falling back when the thermometer is taken out of the patient's mouth, so that the temperature can be noted down easily. There are two temperature scales, one on either side of the mercury thread: Celsius and Fahrenheit.
A clinical thermometer reads from a minimum of about 35°C (94°F) to a maximum of about 42°C (108°F). Because a Fahrenheit degree is smaller than a Celsius degree, the Fahrenheit scale is more sensitive, and clinical temperature is often quoted in °F.
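The relation between the scales can be written as small conversion helpers; the clinical range quoted above is used as the example (the marked Fahrenheit limits are rounded values).

```python
# Celsius <-> Fahrenheit and Celsius -> Kelvin conversions.
def c_to_f(c):
    return c * 9 / 5 + 32

def f_to_c(f):
    return (f - 32) * 5 / 9

def c_to_k(c):
    return c + 273.15

# Clinical thermometer range:
print(c_to_f(35.0))   # 95.0
print(c_to_f(42.0))   # 107.6
```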
Laboratory Thermometer
Laboratory thermometers are used to measure temperature in school laboratories and in other laboratories for scientific research; they are also used in industry to measure the temperature of solutions or equipment.
The stem and bulb of a laboratory thermometer are longer than those of a clinical thermometer. There is no kink, and it carries only a Celsius scale, typically reading from −10°C to 110°C.
In physics and engineering, mass flow rate is the mass of a substance which passes per unit of time. Its unit is kilogram per second in SI units, and slug per second or pound per second in US customary units. The common symbol is (ṁ, pronounced "m-dot"), although sometimes μ (Greek lowercase mu) is used.
Sometimes, mass flow rate is termed mass flux or mass current (see, for example, Schaum's Outline of Fluid Mechanics [1]). Here the more intuitive definition, mass passing per unit time, is used.
Mass flow rate is defined by the limit

ṁ = dm/dt = lim(Δt→0) Δm/Δt

i.e., the flow of mass m through a surface per unit time t.
The over dot on the m is Newton's notation for a time derivative. Since mass is a scalar quantity, the mass flow rate (the time derivative of mass) is also a scalar quantity. The change in mass is the amount that flows after crossing the boundary for some time duration, not the initial amount of mass at the boundary minus the final amount at the boundary, since the change in mass flowing through the area would be zero for steady flow.
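For the common case of a fluid of density ρ crossing a fixed section of area A with mean velocity v, the definition reduces to ṁ = ρAv; the numbers below are illustrative.

```python
# Mass flow rate m_dot = dm/dt; for uniform flow through a fixed
# cross-section this is m_dot = rho * A * v.
rho = 1000.0      # density of water, kg/m^3
area = 0.005      # cross-sectional area, m^2 (illustrative)
velocity = 2.0    # mean flow velocity, m/s (illustrative)

m_dot = rho * area * velocity   # kg/s
print(m_dot)   # 10.0
```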
A general deformation of a body can be expressed in the form x = F(X), where X is the reference position of material points in the body. Such a measure does not distinguish between rigid-body motions (translations and rotations) and changes in the shape (and size) of the body. A deformation has units of length.
Strain is instead defined in terms of the gradient of the deformation, for example ε = ∇F − I, where I is the identity tensor. As a result, strains are dimensionless and usually expressed as a decimal fraction, a percentage or in parts-per notation. Strains measure how much a deformation differs locally from a rigid-body deformation.
In general, the strain is a tensor quantity. Physical insight into strains can be gained by observing that the strain can be decomposed into normal and shear components. The amount of stretching or compressing along material line elements or fibers is the normal strain, and the amount of distortion associated with sliding flat layers over each other is the shear strain within the deforming body. This could be applied by elongation, shortening, or changes in volume, or angular distortion.
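The normal and shear components can be illustrated with a small numeric sketch (all values illustrative): normal strain is the change in length over the original length, and engineering shear strain is the change in angle between initially perpendicular fibres.

```python
import math

# Normal strain: elongation relative to the original length.
L0 = 100.0                      # original fibre length, mm
L1 = 100.3                      # deformed length, mm
normal_strain = (L1 - L0) / L0  # dimensionless
print(round(normal_strain, 6))  # 0.003, i.e. 0.3 %

# Shear strain: angular distortion of an initially right angle.
dx = 0.5                          # sliding of the top face, mm
h = 50.0                          # layer height, mm
shear_strain = math.atan(dx / h)  # radians; ~ dx/h for small angles
print(round(shear_strain, 4))     # 0.01
```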
Strain measures
Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories: finite strain theory, infinitesimal strain theory, and large-displacement (small-strain) theory.
What is Torque?
Torque is the measure of the force that can cause an object to rotate about an axis. Force is what causes an object to accelerate in linear kinematics. Similarly, torque is what causes an angular acceleration. Hence, torque can be defined as the rotational equivalent of linear force. The point where the object rotates is called the axis of rotation. In physics, torque is simply the tendency of a force to turn or twist. Different terminologies such as moment or moment of force are interchangeably used to describe torque.
Common symbols | τ, M |
SI unit | N·m |
In SI base units | kg·m²·s⁻² |
Dimension | M L² T⁻² |
Other units | pound-force-feet (lbf·ft), lbf·in |
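For a force applied at an angle θ to the lever arm, the magnitude of the torque is τ = rF·sinθ, which reduces to τ = rF when the force is perpendicular to the arm. The values below are illustrative.

```python
import math

# Torque magnitude: tau = r * F * sin(theta).
r = 0.25                 # lever arm, m
F = 40.0                 # applied force, N
theta = math.pi / 2      # force perpendicular to the arm

tau = r * F * math.sin(theta)   # N.m
print(tau)   # 10.0
```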
What is Force?
Push or pull of an object is considered a force. Push and pull come from the objects interacting with one another. Terms like stretch and squeeze can also be used to denote force.
In Physics, force is defined as:
The push or pull on an object with mass that causes it to change its velocity.
Force is an external agent capable of changing the state of rest or motion of a particular body. It has both magnitude and direction. The direction in which the force is applied is known as the direction of the force, and the point where it is applied is its point of application.
The Force can be measured using a spring balance. The SI unit of force is Newton(N).
Common symbols: | F→, F |
SI unit: | newton (N) |
In SI base units: | kg·m/s² |
Other units: | dyne, poundal, pound-force, kip, kilopond |
Derivations from other quantities: | F = ma |
Dimension: | M L T⁻² |
Accuracy
The ability of an instrument to measure the true value is known as accuracy; in other words, it is the closeness of the measured value to a standard or true value. Accuracy is improved by taking several readings, which reduces the error of the result. The accuracy of a system is classified into three types, as follows:
Point Accuracy
The accuracy of the instrument only at a particular point on its scale is known as point accuracy. It is important to note that this accuracy does not give any information about the general accuracy of the instrument.
Accuracy as Percentage of Scale Range
Here the accuracy is specified as a percentage of the uniform scale range of the instrument. This can be better understood with the help of the following example:
Consider a thermometer with a scale range up to 500ºC and an accuracy of ±0.5 percent of the scale range. Any error within ±2.5ºC (0.5 percent of 500ºC) is then considered negligible at any point on the scale, while a larger deviation is treated as a significant error.
Accuracy as Percentage of True Value
Such accuracy is determined by comparing the measured value with the true value: errors up to ±0.5 percent of the true value (rather than of the full scale) are neglected.
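The difference between the two percentage ratings is easy to see numerically, using the 500ºC thermometer of the example above.

```python
# Permissible error under the two percentage-based accuracy ratings.
full_scale = 500.0    # scale range, deg C
accuracy_pct = 0.5    # +/- 0.5 %

# As a percentage of scale range, the error band is fixed over the scale:
err_scale = full_scale * accuracy_pct / 100   # +/- 2.5 deg C everywhere

# As a percentage of true value, it shrinks with the reading:
reading = 100.0
err_true = reading * accuracy_pct / 100       # +/- 0.5 deg C at 100 deg C

print(err_scale, err_true)   # 2.5 0.5
```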
Precision
The closeness of two or more measurements to each other is known as the precision of a substance. If you weigh a given substance five times and get 3.2 kg each time, then your measurement is very precise but not necessarily accurate. Precision is independent of accuracy. The below example will tell you about how you can be precise but not accurate and vice versa. Precision is sometimes separated into:
Repeatability
The variation arising when the conditions are kept identical and repeated measurements are taken during a short time period.
Reproducibility
The variation that arises when the same measurement process is used by different instruments and operators, and over longer time periods.
Example: A good analogy for understanding accuracy and precision is to imagine a football player shooting at the goal. If the player shoots into the goal, he is said to be accurate. A football player who keeps striking the same goalpost is precise but not accurate. Therefore, a football player can be accurate without being precise if he hits the ball all over the place but still scores. A precise player will hit the ball to the same spot repeatedly, irrespective of whether he scores or not. A precise and accurate football player will not only aim at a single spot but also score the goal.
The top left image shows the target hit at high precision and accuracy. The top right image shows the target hit at a high accuracy but low precision. The bottom left image shows the target hit at a high precision but low accuracy. The bottom right image shows the target hit at low accuracy and low precision.
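The distinction can also be stated statistically: accuracy corresponds to the bias of the mean from the true value, precision to the spread of repeated readings. The repeated 3.2 kg weighings mentioned above illustrate this.

```python
import statistics

true_value = 3.0                        # assumed true mass, kg
readings = [3.2, 3.2, 3.2, 3.2, 3.2]    # five repeated weighings

mean = statistics.mean(readings)
bias = abs(mean - true_value)           # accuracy error
spread = statistics.pstdev(readings)    # precision (std deviation)

print(round(bias, 3))   # 0.2  -> not accurate
print(spread)           # 0.0  -> perfectly precise
```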
Limits & Fits: Why study Limits & Fits?
• Exact size is impossible to achieve.
• Establish boundaries within which deviation from perfect form is allowed but still the design intent is fulfilled.
• Enable interchangeability of components during assembly
Definition of Limits: The maximum and minimum permissible sizes within which the actual size of a component lies are called Limits.
When two parts are to be assembled, the relation resulting from the difference between their sizes before assembly is called a fit. A fit may be defined as the degree of tightness and looseness between two mating parts.
(i) Clearance Fit: This means there is a gap between the two mating parts. Let’s see the following schematic representation of clearance fit. The diameter of the shaft is smaller than the diameter of the hole. There is a clearance between the shaft and the hole. Hence the shaft can easily slide into the hole.
In a clearance fit, the difference between the maximum size of the hole and the minimum size of the shaft is known as the maximum clearance, and the difference between the minimum size of the hole and the maximum size of the shaft is known as the minimum clearance. Clearance fit can be sub-classified as follows: Loose Fit: used between mating parts where no precision is required; it provides a large allowance and is used on loose pulleys, agricultural machinery, etc.
(ii) Transition Fit: The fit obtained may be either a clearance or an interference, depending on where the actual sizes of the hole and shaft fall within their tolerance zones.
Running Fit: For a running fit, the shaft should be small enough to maintain a film of oil for lubrication. It is used in bearing pairs, etc. An allowance of 0.025 mm per 25 mm of bore diameter may be used.
Slide Fit or Medium Fit: Used on mating parts where great precision is required. It provides a medium allowance and is used in tool slides, slide valves, automobile parts, etc.
(iii) Interference Fit: There is no gap between the faces; the material of the two mating parts interferes. In the schematic representation of an interference fit, the diameter of the shaft is larger than the diameter of the hole, so additional force is needed to fit the shaft into the hole. Figure: Interference Fit. In an interference fit, the difference between the maximum size of the shaft and the minimum size of the hole is known as the maximum interference, and the difference between the minimum size of the shaft and the maximum size of the hole is known as the minimum interference.
The interference fit can be sub-classified as follows: Shrink Fit or Heavy Force Fit: It refers to maximum negative allowance. In assembly of the hole and the shaft, the hole is expanded by heating and then rapidly cooled in its position. It is used in fitting of rims etc. Medium Force Fit: These fits have medium negative allowance. Considerable pressure is required to assemble the hole and the shaft. It is used in car wheels, armature of dynamos etc. Tight Fit or Press Fit: One part can be assembled into the other with a hand hammer or by light pressure. A slight negative allowance exists between two mating parts (more than wringing fit). It gives a semi-permanent fit and is used on a keyed pulley and shaft, rocker arm, etc
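Given the limit sizes of hole and shaft, the type of fit follows directly from comparing them. The helper below is a hypothetical illustration; it also covers the standard transition case that falls between clearance and interference.

```python
# Classify a fit from the limit sizes of hole and shaft (all in mm).
def classify_fit(hole_min, hole_max, shaft_min, shaft_max):
    if shaft_max < hole_min:
        return "clearance"      # shaft is always smaller than the hole
    if shaft_min > hole_max:
        return "interference"   # shaft is always larger than the hole
    return "transition"         # either, depending on actual sizes

# Hole 50.000-50.062 mm, shaft 49.820-49.920 mm:
print(classify_fit(50.000, 50.062, 49.820, 49.920))   # clearance
```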
A spindle slides freely in a bush. The basic size of the fit is 50 mm. If the tolerances quoted are 0/+62 for the hole and −180/−80 for the shaft, in units of 0.001 mm, find the upper and lower limits of the shaft and the minimum clearance.
Solution: Tolerances are given in units of one thousandth of a millimetre, so the upper limit of the hole is 50.062 mm and the lower limit of the hole is the same as the basic size, 50.000 mm.
The shaft upper limit is 50.000 − 0.080 = 49.920 mm.
The shaft lower limit is 50.000 − 0.180 = 49.820 mm.
The minimum clearance or allowance is 50.000 − 49.920 = 0.080 mm.
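The arithmetic of this example can be checked in a few lines, entering the tolerances in thousandths of a millimetre as quoted.

```python
# Spindle/bush example: basic size 50 mm, hole 0/+62, shaft -180/-80
# (tolerances in thousandths of a millimetre).
basic = 50.000
hole_hi = basic + 62e-3     # 50.062 mm
hole_lo = basic + 0.0       # 50.000 mm
shaft_hi = basic - 80e-3    # 49.920 mm
shaft_lo = basic - 180e-3   # 49.820 mm

min_clearance = hole_lo - shaft_hi
print(round(min_clearance, 3))   # 0.08 mm
```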
A 50 mm diameter shaft is made to rotate in a bush. The tolerances for both shaft and bush are 0.050 mm. Determine the dimensions of the shaft and bush to give an allowance (minimum clearance) of 0.075 mm using the hole basis system.
Solution:
In the hole basis system, the lower deviation of the hole is zero; therefore the low limit of the hole = 50.000 mm.
High limit of hole = low limit + tolerance = 50.000 + 0.050 = 50.050 mm
High limit of shaft = low limit of hole − allowance = 50.000 − 0.075 = 49.925 mm
Low limit of shaft = high limit of shaft − tolerance = 49.925 − 0.050 = 49.875 mm
The dimension of the system is shown in Figure
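The hole basis calculation follows the same pattern: fix the hole's lower deviation at zero and work down from the allowance.

```python
# Hole basis system: lower deviation of the hole is zero.
basic = 50.000
tol = 0.050         # tolerance on both hole and shaft, mm
allowance = 0.075   # required clearance, mm

hole_lo = basic                   # 50.000 mm
hole_hi = basic + tol             # 50.050 mm
shaft_hi = hole_lo - allowance    # 49.925 mm
shaft_lo = shaft_hi - tol         # 49.875 mm

print(round(hole_hi, 3), round(shaft_hi, 3), round(shaft_lo, 3))
```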
A control system is a set of mechanical or electronic devices that controls other devices. In a control system, the behaviour of the system is described by differential equations.
Fig: Block diagram for closed loop control system
The above figure shows a feedback system with a control system. The components of control system are:
i) The actuator takes the signal, transforms it accordingly, and causes some action.
ii) Sensors are used to measure continuous and discrete process variables.
iii) Controllers are those elements which adjust the actuators in response to the measurement.
Key takeaway:
A control system is a system, which provides the desired response by controlling the output. The following figure shows the simple block diagram of a control system.
Here, the control system is represented by a single block. Since the output is controlled by varying the input, the system is called a control system. We will vary this input with some mechanism. In the sections on open loop and closed loop control systems below, we will study the blocks inside the control system and how to vary the input in order to get the desired response.
Examples − Traffic lights control system, washing machine
A traffic lights control system is an example of a control system. Here, a sequence of input signals is applied, and the output is one of the three lights, which stays on for some duration while the other two are off. Based on a traffic study at a particular junction, the on and off times of the lights can be determined. The input signal thus controls the output, and the traffic lights control system operates on a time basis.
Classification of Control Systems
Based on certain parameters, control systems can be classified in the following ways.
Continuous time and Discrete-time Control Systems
SISO and MIMO Control Systems
What is an Open Loop Control System?
In this kind of control system, the output does not influence the action of the control system; the operation depends only on time, which is why it is called an open-loop control system. It has no feedback. It is very simple, needs low maintenance, operates quickly, and is cost-effective, but its accuracy is low and it is less dependable. An example of the open-loop type is shown below. The main advantages of the open-loop control system are that it is simple, needs less protection, and is fast and inexpensive; the disadvantages are that it is less reliable and has less accuracy.
Open Loop Control System
Example
A clothes dryer is one example of an open-loop control system. Here the control action is performed manually by the operator: based on the wetness of the clothes, the operator sets the timer to, say, 30 minutes.
The dryer then stops after 30 minutes even if the clothes are still wet, i.e. even if the desired output is not attained. This shows that the system has no feedback. In this system, the controller is the timer.
What is a Closed-Loop Control System?
In a closed-loop control system, the control action depends on the output of the system. Such a system has one or more feedback loops between its input and output, and it provides the required output by comparing the measured output with the desired input. The difference between the two is the error signal, which drives the controller.
Closed-Loop Control System
The main advantages of the closed-loop control system are that it is accurate and reliable; the disadvantages are that it is expensive and requires high maintenance.
Example
The best example of a closed-loop control system is an air conditioner (AC). The AC controls the room temperature by comparing it with the desired (set) temperature; this comparison is done by the thermostat. The difference between the room temperature and the set temperature forms the error signal, and based on it the thermostat switches the compressor on or off.
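The thermostat behaviour can be sketched as a minimal on/off feedback loop; the temperatures and the per-step cooling and leakage rates below are illustrative, not a model of a real air conditioner.

```python
# Minimal on/off (thermostat) closed-loop sketch.
setpoint = 24.0   # desired room temperature, deg C
room = 30.0       # initial room temperature, deg C

for step in range(10):
    error = room - setpoint      # feedback: measured minus desired
    compressor_on = error > 0    # on/off control action
    if compressor_on:
        room -= 1.0              # cooling while the compressor runs
    else:
        room += 0.2              # heat leaking back in while off

print(round(room, 1))            # settles near the 24 deg C setpoint
```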
References:
1. Basic Mechanical Engineering, G Shanmugam, S Ravindran, McGraw Hill
2. Basic Mechanical Engineering, M P Poonia and S C Sharma, Khanna Publishers
3. Mechatronics: Principles, Concepts and Applications, Nitaigour Mahalik, McGraw Hill
4. Mechatronics, As per AICTE: Integrated Mechanical Electronic Systems, K.P. Ramachandran, G.K. Vijayaraghavan, M.S. Balasundaram, Wiley India
5. Mechanical Measurements & Control, Dr. D. S. Kumar. Metropolitan Book Company