INTRODUCTION:
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data, and it can avoid problems such as the build-up of noise and signal distortion during processing.
Some techniques used in digital image processing include:
Principal components analysis
Independent component analysis
Self-organizing maps
Principal components analysis: Principal component analysis (PCA) involves a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD).
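To make the idea concrete, here is a minimal PCA sketch in NumPy (the `pca` helper and the synthetic correlated data are assumptions invented for this illustration, not any particular library's API):

```python
import numpy as np

def pca(data, n_components=2):
    """Project `data` (n_samples x n_features) onto its principal components."""
    # Center the data so the covariance is computed about the mean.
    centered = data - data.mean(axis=0)
    # Covariance matrix of the features.
    cov = np.cov(centered, rowvar=False)
    # Eigendecomposition; eigh is used because the covariance matrix is symmetric.
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort components by descending explained variance.
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    return centered @ components

# Correlated 2-D data: the first principal component should capture
# most of the variance, which lies along the y = x direction.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
data = np.column_stack([x, x + 0.1 * rng.normal(size=500)])
scores = pca(data, n_components=2)
print(scores.var(axis=0))  # the first column's variance dominates
```

Because the two columns of `data` are strongly correlated, almost all of the variability ends up in the first column of `scores`, matching the description above.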
Independent component analysis: Independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents supposing the mutual statistical independence of the non-Gaussian source signals. It is a special case of blind source separation.
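A bare-bones, FastICA-style separation can be sketched in NumPy as follows (the `fastica` helper, its tanh contrast function, and the two synthetic sources are illustrative assumptions; in practice a tested implementation such as the one in scikit-learn would be used):

```python
import numpy as np

def fastica(X, n_iter=200, seed=1):
    """Minimal FastICA-style unmixing: X is (n_samples, n_signals) of mixtures."""
    X = X - X.mean(axis=0)
    # Whiten: rotate and scale so the mixtures are uncorrelated with unit variance.
    cov = np.cov(X, rowvar=False)
    d, E = np.linalg.eigh(cov)
    Xw = X @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    n = Xw.shape[1]
    # Random orthonormal starting point for the unmixing matrix.
    W = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))[0]
    for _ in range(n_iter):
        # Fixed-point update with the tanh contrast (a measure of non-Gaussianity).
        G = np.tanh(Xw @ W.T)
        Gp = 1.0 - G ** 2
        W = (G.T @ Xw) / len(Xw) - np.diag(Gp.mean(axis=0)) @ W
        # Symmetric decorrelation keeps the rows of W orthonormal.
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return Xw @ W.T

# Two non-Gaussian sources mixed by an invented 2 x 2 matrix.
t = np.linspace(0, 8, 2000)
S = np.column_stack([np.sin(2 * np.pi * t), np.sign(np.sin(3 * np.pi * t))])
A = np.array([[1.0, 0.5], [0.5, 1.0]])
recovered = fastica(S @ A.T)
```

Up to sign and ordering, which ICA cannot recover, each column of `recovered` should track one of the original sources, illustrating blind source separation.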
Self-organizing maps: A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space.
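A minimal SOM training loop might look like the following NumPy sketch (the `train_som` helper, the grid size, the decay schedules and the random color data are all assumptions for illustration):

```python
import numpy as np

def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0):
    """Minimal SOM sketch: map `data` (n_samples x n_features) onto a 2-D grid."""
    rng = np.random.default_rng(0)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates used by the neighborhood function.
    coords = np.stack(
        np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for step in range(n_iter):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # Decaying learning rate and neighborhood radius.
        frac = step / n_iter
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighborhood pulls nearby nodes toward x, preserving topology.
        grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))
        weights += lr * h[:, :, None] * (x - weights)
    return weights

# Train on random RGB colors: similar colors end up on neighboring nodes.
rng = np.random.default_rng(1)
colors = rng.random((500, 3))
som = train_som(colors)
```

Training on random colors is a classic demonstration: after training, nearby nodes of the grid hold similar colors, which is exactly the topology preservation the neighborhood function provides.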
WHY DIGITAL IMAGE PROCESSING IS USED
Reasons for compression
– Image data need to be accessed at a different time or location
– Limited storage space and transmission bandwidth
Reasons for manipulation
– Image data might experience nonideal acquisition, transmission or display (e.g., restoration, enhancement and interpolation)
– Image data might contain sensitive content (e.g., the fight against piracy, counterfeiting and forgery)
– To produce images with artistic effects (e.g., pointillism)
Reasons for analysis
– Image data need to be analyzed automatically in order to reduce the burden of human operators
– To teach a computer to “see” in A.I. tasks
DIGITAL IMAGE CORRECTION
Here are some very basic steps that you can use to adjust your JPEG deep-sky astronomical images right out of the camera.
Using the Eyedropper and Info Palette
Set the Black Point
Increase the Contrast
Increase the Brightness
Adjust the Color
Increase the Color Saturation
Using the Eyedropper and Info Palette
The eyedropper and info palette are very powerful tools in Photoshop that read the color values of the pixels under the current location of the cursor. The eyedropper is on the tools palette, and the info palette can be accessed under the Window > Info menu. Click the small right-pointing triangle at the upper right of the info palette box, click Palette Options, and check the box for Actual Color for the first readout.
The info palette then shows the values of the Red, Green and Blue channels of the pixel under the cursor. A value of zero indicates no color information, and a value of 255 indicates total saturation. In these terms, a pixel with RGB values of 0, 0, 0 is pure black, 255, 255, 255 is pure white, and 255, 0, 0 is pure, fully saturated red.
The area that the densitometer reads can be changed from a single pixel to a 3 x 3 or 5 x 5 pixel box. It is very helpful to set this to more than a single pixel: if you enlarge an image until you can see individual pixels, even in an area that appears to be a uniform color, you will find random pixels scattered throughout that vary greatly. By setting the densitometer to a 3 x 3 or 5 x 5 box, you average the values of all the pixels in the box and lessen the chance that a single odd pixel will fool you.
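The benefit of averaging the readout can be seen in a small NumPy sketch (the 5 x 5 patch and its values are invented for the example):

```python
import numpy as np

# Hypothetical 8-bit grayscale patch: a uniform sky value of 120 with one
# outlier "hot" pixel, as often seen when zooming far into a JPEG.
patch = np.full((5, 5), 120, dtype=np.uint8)
patch[2, 2] = 255  # a single odd pixel

center = int(patch[2, 2])                    # single-pixel readout: 255
box3 = patch[1:4, 1:4].astype(float).mean()  # 3 x 3 averaged readout: 135.0
print(center, box3)
```

The single-pixel readout (255) is wildly off the true sky value, while the 3 x 3 average (135.0) stays close to 120, which is why widening the sample size makes the densitometer far less likely to be fooled.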
CONCLUSION:
Digital image processing, no matter what it sounds like, is not related to prestidigitation and does not in other ways depend on sleight-of-hand. A computer's digits are not fingers, but numbers. The 'digital' part begins when the computer systematically converts a black-and-white 'analogue' image (a conventional photograph, for example) into a vast matrix of tiny, discrete points the computer can individually number by brightness. When there are enough of these tiny points, called 'pixels' (a nickname for picture elements), the naked eye can scarcely distinguish between the analogue display of the digitized image and its black-and-white source. A digitized image of anything, even a single letter, thus holds a staggering amount of computable 'information', whereas a conventional black-and-white photograph presents the same information in continuous, but incalculable gradations of tone across a two-dimensional surface of film (Cannon and Hunt 1981, 214). The 'image processing' part comes in when the computer goes on to enhance this original image with programs designed, for example, to achieve the full scale of contrast of black to white where the original murky image provides only a small range of greys.
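In its simplest form, the contrast enhancement described here is a linear stretch that maps the occupied grey levels onto the full 0 to 255 scale; a NumPy sketch with invented "murky" data might look like this:

```python
import numpy as np

# Hypothetical murky image: grey levels confined to a narrow band (100 to 140).
rng = np.random.default_rng(0)
murky = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)

# Linear contrast stretch: map the occupied range onto the full 0-255 scale.
lo, hi = murky.min(), murky.max()
stretched = ((murky.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)
print(stretched.min(), stretched.max())  # the full black-to-white scale
```

The stretched image uses the entire tonal range, turning a small range of greys into the full scale of contrast from black to white.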
REFERENCES:
Wikipedia, "Digital image processing": http://en.wikipedia.org/wiki/Digital_image_processing
Gonzalez, Rafael C., and Richard E. Woods, Digital Image Processing
Friday, August 21, 2009
