Unit - 1
Digital Image Fundamentals
Q1) What are the Basic Steps in Digital Image Processing?
A1) The basic steps in a digital image processing system are image acquisition, image storage, image processing, display, and image transmission; each step is described below.
Q2) What is Image Acquisition?
A2) Image Acquisition is the first step in any image processing system. The general aim of image acquisition is to transform an optical image into an array of numerical data which could be later manipulated on a computer.
Image acquisition is achieved using the appropriate sensing device for the application. For example, if we need an X-ray image, we use an X-ray-sensitive camera; for ordinary photographs, we use a camera sensitive to the visible spectrum.
Q3) Define Image Storage?
A3) All video signals are essentially analog, i.e., electrical signals that convey luminance and colour as continuously varying voltages. The camera is therefore interfaced to a computer, where the image is digitized and stored so that the processing algorithms can operate on it.
Q4) What do you mean by Image Processing?
A4) Systems ranging from microcomputers to general-purpose large computers are used in image processing. Dedicated image processing systems connected to host computers are very popular. The processing of digital images involves procedures that are usually expressed in algorithmic form, which is why most image processing functions are implemented in software.
Q5) What is Display?
A5) A display device produces and shows a visual form of the numerical values stored in a computer as an image array. Principal display devices are printers, TV monitors, CRTs, etc. Any erasable raster graphics display can be used as a display unit with an image processing system. However, monochrome and color TV monitors are the principal display devices used in modern image processing systems.
Q6) Define Transmission?
A6) There are a lot of applications where we need to transmit images. Image transmission is the process of encoding, transmitting, and decoding the digitized data representing an image.
Q7) What are the components to perform image processing on digital image?
A7) An image Processing System is the combination of the different elements involved in digital image processing. Digital image processing is the processing of an image through a digital computer. Digital image processing uses different computer algorithms to perform image processing on digital images.
It consists of the following components: -
Fig 1 - Components
Image sensors - sense the intensity, amplitude, coordinates, and other features of the images and pass the result to the image processing hardware. This stage includes the problem domain.
Image processing hardware - the dedicated hardware used to process the instructions obtained from the image sensors. It passes the result to a general-purpose computer.
Computer - the computer used in the image processing system is the general-purpose computer that we use in our daily life.
Image processing software - the software that includes all the mechanisms and algorithms used in the image processing system.
Mass storage - stores the pixels of the images during processing.
Hard copy device - once the image is processed, it is stored in a hard copy device; this can be a pen drive or any external storage device.
Image display - the monitor or display screen that shows the processed images.
Network - the connection among all the above elements of the image processing system.
Q8) Define Elements of visual perception?
A8) Human intuition and analysis play a central role in the choice of digital image processing techniques. Hence, developing a basic understanding of human visual perception is the first step. Factors such as how human and electronic imaging devices compare in terms of resolution and the ability to adapt to changes in illumination are not only interesting but also important from a practical point of view.
Q9) Define the structure of the Human Eye? with the Diagram.
A9)
The diagram shows a simple cross-section of the human eye. The eye is nearly a sphere with an average diameter of approximately 20 mm. Three membranes enclose the eye: the cornea and sclera outer cover, the choroid, and the retina. The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.
The choroid lies directly below the sclera. This membrane contains a network of blood vessels that serve as the major source of nutrition for the eye. At the anterior extreme, the choroid is divided into the ciliary body and iris. The iris contracts or expands to control the amount of light entering the eye.
The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body.
The innermost membrane of the eye is the retina, which lines the entire posterior portion of the eye's inner wall. When the eye is properly focused, light from an object outside the eye is imaged on the retina. Pattern vision is afforded by the distribution of discrete light receptors over the surface of the retina. There are two classes of receptors: cones and rods.
The figure shows the density of rods and cones for a cross-section of the right eye passing through the region where the optic nerve emerges from the eye. The absence of receptors in this area results in the blind spot.
Except for this region, the distribution of receptors is radially symmetric about the fovea.
The fovea is a circular indentation in the retina, about 1.5 mm in diameter.
Fig 3 Distribution of rods and cones in Retina
Q10) How is the Image Formed in the Eye?
A10) In an ordinary camera, the lens has a fixed focal length, and focusing at various distances is achieved by varying the distance between the lens and the imaging plane, where the film is located. In the human eye, however, the opposite is true: the distance between the lens and the imaging region is fixed, and the focal length needed to achieve proper focus is obtained by varying the shape of the lens. The fibers in the ciliary body accomplish this by flattening or thickening the lens for distant or near objects, respectively. The distance between the center of the lens and the retina along the visual axis is approximately 17 mm. The range of focal lengths is approximately 14 mm to 17 mm.
The geometry in Fig below illustrates how to obtain the dimensions of an image formed on the retina.
Fig: Graphical representation of an eye looking at a palm tree.
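The similar-triangles geometry described above can be checked numerically. A minimal sketch, assuming the standard illustrative numbers of a 15 m tall tree viewed from 100 m (these values are assumptions for the example, not given in the text):

```python
# Retinal image height by similar triangles:
# object_height / object_distance = image_height / lens_to_retina_distance
def retinal_image_height(object_height_m, object_distance_m,
                         lens_to_retina_mm=17.0):
    """Return the height of the retinal image in mm (thin-lens geometry)."""
    return object_height_m / object_distance_m * lens_to_retina_mm

# Illustrative values: a 15 m tree seen from 100 m away
h = retinal_image_height(15.0, 100.0)
print(f"retinal image height = {h:.2f} mm")  # 2.55 mm
```

The 17 mm lens-to-retina distance is the figure quoted in the answer above; changing the object's height or distance scales the retinal image proportionally.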
Q11) What is Image Sensing and Acquisition?
A11) Most of the images in which we are interested are generated by the combination of an “illumination” source and the reflection of energy from that source by the elements of the scene being imaged. We enclose illumination and scene in quotes to emphasize that they are considered more general than the familiar situation in which a visible light source illuminates a common everyday 3-D scene. For example, the illumination may originate from a source of electromagnetic energy such as a radar, infrared, or X-ray system. But, as noted earlier, it could originate from less traditional sources, such as ultrasound or even a computer-generated illumination pattern. Similarly, the scene elements could be familiar objects, but they can just as easily be molecules, buried rock formations, or a human brain. Depending on the nature of the source, illumination energy is either reflected from or transmitted through objects. An example in the first category is light reflected from a planar surface. An example in the second category is when X-rays pass through a patient's body to generate a diagnostic X-ray film. In some applications, the reflected or transmitted energy is focused onto a photoconverter, which converts the energy into visible light.
Fig 5 (a, b, c) shows the three principal sensor arrangements used to transform illumination energy into digital images. The idea is simple: incoming energy is transformed into a voltage by the combination of input electrical power and a sensor material that is responsive to the particular type of energy being detected. The output voltage waveform is the response of the sensor, and a digital quantity is obtained from each sensor by digitizing its response.
Q12) What are the step of Digitization?
A12) The process of digitization involves two steps: sampling (discretizing the spatial coordinates) and quantization (discretizing the amplitude values).
That means, Digitization=Sampling+Quantization
Q13) Define Quantization?
A13) The values obtained by sampling a continuous function usually comprise an infinite set of real numbers ranging from a minimum to a maximum, depending upon the sensor calibration. These values must be represented by the finite number of bits a computer uses to store or process data. In practice, the sampled signal values are represented by a finite set of integer values. This is known as quantization. The rounding of a number is a simple example of quantization.
With these concepts of sampling and quantization, we now need to understand what these terms mean when we look at an image on the computer monitor.
The higher the spatial resolution of the image, the greater the sampling rate, i.e., the smaller the scene area represented by each pixel. Similarly, the higher the grey-level resolution, the greater the number of quantization levels.
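The two digitization steps can be sketched in NumPy. The signal, sampling rate, and bit depth below are illustrative assumptions, not values from the text:

```python
import numpy as np

fs = 100      # sampling rate in Hz (illustrative)
bits = 3      # quantizer resolution: 2**3 = 8 grey levels

# Sampling: discretize the time axis of a continuous-valued 5 Hz sine
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)

# Quantization: map each sample in [-1, 1] to one of 2**bits integer levels
levels = 2 ** bits
q = np.round((x + 1) / 2 * (levels - 1)).astype(int)
x_hat = q / (levels - 1) * 2 - 1   # reconstructed (quantized) amplitudes

print("distinct levels used:", len(np.unique(q)))
print("max quantization error:", np.max(np.abs(x - x_hat)))
```

Raising `fs` improves the spatial (here, temporal) resolution; raising `bits` increases the number of quantization levels, shrinking the maximum rounding error.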
Q14) What is RGB HSI Models?
A14) The purpose of a color model is to facilitate the specification of colors in some standard, generally accepted way. In essence, a color model is a specification of a coordinate system and a subspace within that system where each color is represented by a single point.
Q15) Describe the RGB color model?
A15) In the RGB color model, each color appears in its primary spectral components of red, green, and blue. This model is based on a Cartesian coordinate system. In this model, the grayscale extends from black to white along the line joining those two points. The different colors in this model are points on or inside the cube and are defined by vectors extending from the origin. For convenience, the assumption is that all color values have been normalized so that the cube shown in Fig 6 is the unit cube; that is, all values of R, G, and B are assumed to be in the range [0, 1].
Fig 6 Schematic of RGB Colour Cube
An image represented in the RGB color model consists of three component images, one for each primary color. When fed into an RGB monitor, these three images combine on the screen to produce a composite color image.
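A minimal sketch of how the three component images combine into one RGB image; the tiny array and its pixel values are purely illustrative:

```python
import numpy as np

# Three 2x2 component images, values normalized to the unit cube [0, 1]
r = np.array([[1.0, 0.0], [0.0, 0.5]])   # red component image
g = np.array([[0.0, 1.0], [0.0, 0.5]])   # green component image
b = np.array([[0.0, 0.0], [1.0, 0.5]])   # blue component image

# Stack the component images into one H x W x 3 composite;
# each pixel is then a point in the unit RGB cube.
rgb = np.stack([r, g, b], axis=-1)
print(rgb.shape)    # (2, 2, 3)
print(rgb[1, 1])    # [0.5 0.5 0.5] -> equal R, G, B: a point on the grayscale line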
Q16) Describe HSI color model?
A16) When humans view color objects, we describe them by hue, saturation, and brightness. Brightness is a subjective descriptor, whereas saturation gives a measure of the degree to which a pure color is diluted by white light, and hue is a color attribute that describes a pure color. The HSI model is considered an ideal tool for developing image processing algorithms based on color descriptions that are natural and intuitive to humans, who, after all, are the developers and users of these algorithms. We can say that RGB is ideal for image color generation, but its use for color description is much more limited.
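The standard geometric conversion from RGB to HSI can be sketched for a single normalized pixel as follows. This is a simplified version; the small `eps` guard against division by zero is my own addition, not part of the textbook formulas:

```python
import math

def rgb_to_hsi(r, g, b, eps=1e-12):
    """Convert one RGB pixel (each component in [0, 1]) to (H in degrees, S, I)."""
    i = (r + g + b) / 3.0                               # intensity: average of R, G, B
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + eps)    # saturation: 0 = pure grey
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = 360.0 - theta if b > g else theta               # hue angle on the color circle
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red: H close to 0 deg, S = 1, I = 1/3
```

Pure red is fully saturated (S = 1) with hue at 0 degrees, matching the intuition that hue names the pure color and saturation measures dilution by white light.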
Q17) Explain Fourier series theorem?
A17) Any periodic function f(t) can be expressed as a weighted (infinite) sum of sines and cosines of different frequencies; that is, f(t) = a0 + Σn [an cos(nω0t) + bn sin(nω0t)], where ω0 = 2π/T is the fundamental frequency of the period T.
Discrete Fourier Transform (DFT)
Forward DFT
Inverse DFT
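The forward and inverse DFT pair can be sketched directly from the definitions. This is a naive O(N²) version written for clarity, assuming the common conventions of a forward sum with exp(-j2πkn/N) and an inverse scaled by 1/N (the same conventions NumPy's FFT uses):

```python
import numpy as np

def dft(x):
    """Naive forward DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / N) @ x

def idft(X):
    """Naive inverse DFT: x[n] = (1/N) * sum_k X[k] * exp(+2j*pi*k*n/N)."""
    N = len(X)
    k = np.arange(N)
    n = k.reshape(-1, 1)
    return (np.exp(2j * np.pi * n * k / N) @ X) / N

x = np.array([1.0, 2.0, 3.0, 4.0])
X = dft(x)
print(np.allclose(X, np.fft.fft(x)))   # True: matches NumPy's FFT
print(np.allclose(idft(X), x))         # True: the inverse recovers the signal
```

In practice the FFT computes the same transform in O(N log N); the direct sums above are only meant to make the forward/inverse definitions concrete.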
Discrete Cosine Transform
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies.
The DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry. Types of DCT are listed below with eleven samples.
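The most widely used variant, the DCT-II, can be sketched directly from its cosine-sum definition. The unnormalized scaling below is an assumption on my part (it matches SciPy's default `norm=None` convention):

```python
import numpy as np

def dct2(x):
    """Unnormalized DCT-II: X[k] = 2 * sum_n x[n] * cos(pi*k*(2n+1)/(2N)).

    Real input, real output -- no complex arithmetic needed.
    """
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return 2.0 * (np.cos(np.pi * k * (2 * n + 1) / (2 * N)) @ x)

x = np.array([3.0, 3.0, 3.0, 3.0])   # constant signal
X = dct2(x)
print(X)   # only the k = 0 (DC) coefficient is nonzero: X[0] = 24, the rest ~ 0
```

A constant signal has no oscillating content, so all its energy lands in the single DC coefficient; this energy-compaction property is what makes the DCT useful in image compression.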