CHAPTER-1
Introduction
Automatic number plate recognition (ANPR) systems have existed for a long time, but only in the late 1990s did they become an important application, owing to the large increase in the number of vehicles. The information extracted from license plates is used mainly by law enforcement agencies for traffic monitoring, access control, parking, motorway road tolling, border control, making car logs for parking systems, journey time measurement, etc.
The recognition problem is generally sub-divided into five parts:
(1) Image acquisition: capturing the image of the license plate.
(2) Pre-processing the image: normalization, adjusting the brightness, skew and contrast of the image.
(3) Localizing the license plate: extracting the number plate.
(4) Character segmentation: locating and identifying the individual symbol images on the plate.
(5) Optical character recognition: recognizing the segmented characters. There may be further refinements over these (such as matching the vehicle license number against a particular database to track suspected vehicles), but the basic structure remains the same.
A guiding parameter in this regard is country-specific traffic norms and standards. These help to fine-tune the system, e.g., the number of characters on the license plate and the text luminance level (relative index, i.e., dark text on a light background or light text on a dark background). The problem can then be narrowed down for application in a particular country. For example, in India the norm is to print license plate numbers in black on a white background for private vehicles and on a yellow background for commercial vehicles. The general format of the license plate is two letters (the state code) followed by the district code, then a four-digit code specific to the particular vehicle.
ANPR systems as a practical application of artificial intelligence
Massive integration of information technologies into all aspects of modern life has created demand for processing vehicles as conceptual resources in information systems. Because a standalone information system without any data makes no sense, there is also a need to transfer information about vehicles between the real world and information systems. This can be achieved by a human agent, or by special intelligent equipment which is able to recognize vehicles by their number plates in a real environment and reflect them into conceptual resources. Because of this, various recognition techniques have been developed, and number plate recognition systems are today used in various traffic and security applications, such as parking, access and border control, or tracking of stolen cars.
In parking, number plates are used to calculate the duration of the parking. When a vehicle enters an input gate, its number plate is automatically recognized and stored in the database. When the vehicle later exits the parking area through an output gate, the number plate is recognized again and paired with the first one stored in the database. The difference in time is used to calculate the parking fee. Automatic number plate recognition systems can also be used in access control. For example, this technology is used in many companies to grant access only to vehicles of authorized personnel. In some countries, ANPR systems installed on country borders automatically detect and monitor border crossings. Each vehicle can be registered in a central database and compared to a blacklist of stolen vehicles. In traffic control, vehicles can be directed to different lanes for better congestion control on busy urban roads during rush hours.
Mathematical aspects of number plate recognition systems
In most cases, vehicles are identified by their number plates, which are easily readable by humans, but not by machines. To a machine, a number plate is only a gray picture defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and f is the light intensity at that point. Because of this, it is necessary to design robust mathematical machinery that can extract semantics from the spatial domain of the captured image. These functions are implemented in so-called "ANPR systems", where the acronym "ANPR" stands for "Automatic Number Plate Recognition". An ANPR system performs the transformation of data between the real environment and information systems. The design of ANPR systems is a field of research in artificial intelligence, machine vision, pattern recognition and neural networks. Because of this, the main goal of this thesis is to study the algorithmic and mathematical principles of automatic number plate recognition systems.
Overview:
Chapter one deals with the introduction to automatic license plate recognition. It gives the overall view of the project.
Chapter two deals with the introduction to digital image processing. It explains what an image is and the different image file formats, and briefly covers image compression, morphological processing, segmentation, and object recognition.
Chapter three deals with the functions of the MATLAB system that we use in this project. MATLAB and its Graphical User Interface are briefly explained in it.
Chapter four deals with the problem of number plate area detection. This includes algorithms that can detect the rectangular area of the number plate in the original image. Humans define the number plate in natural language as a "small plastic or metal plate attached to a vehicle for official identification purposes", but machines do not understand this definition. There is therefore a need for an alternative definition of the number plate, based on descriptors that are comprehensible to machines; this is a fundamental problem of machine vision. This chapter also covers the principles of character segmentation. In most cases, characters are segmented using the horizontal projection of a pre-processed number plate, but these principles can fail, especially if the detected number plate is too warped or skewed; then more sophisticated segmentation algorithms must be used. The chapter also deals with various methods for the normalization and detection of characters. First, character dimensions and brightness must be normalized to ensure invariance to size and lighting conditions. Then, a feature extraction algorithm must be applied to each character to filter out irrelevant data. It is necessary to extract features that are invariant to character deformations, font style, etc.
Chapter five deals with the simulation results of the MATLAB program for license plate recognition. The outputs of the program are clearly displayed.
Chapter six deals with the conclusion and future scope of the project.
CHAPTER 2
OVERVIEW OF DIGITAL IMAGE PROCESSING
2.1 IMAGE:
An image is a two-dimensional picture that has a similar appearance to some subject, usually a physical object or a person.
An image may be two-dimensional, such as a photograph or screen display, or three-dimensional, such as a statue. Images may be captured by optical devices such as cameras, mirrors, lenses, telescopes and microscopes, or by natural objects and phenomena, such as the human eye or water surfaces.
The word image is also used in the broader sense of any two-dimensional figure such as a map, a graph, a pie chart, or an abstract painting. In this wider sense, images can be rendered manually, such as by drawing, painting or carving, rendered automatically by printing or computer graphics technology, or developed by a combination of methods, especially in a pseudo-photograph.
Fig.2.1 Sample color image and its gray scale representation.
An image is a rectangular grid of pixels. It has a definite height and a definite width counted in pixels. Each pixel is square and has a fixed size on a given display. However different computer monitors may use different sized pixels. The pixels that constitute an image are ordered as a grid (columns and rows); each pixel consists of numbers representing magnitudes of brightness and color.
The Essay on Face Detection in Image Processing
Skin classification based on neural networks seems to be a promising method. 2. Parametric skin modeling methods are better suited for constructing classifiers in case of limited training and expected target data set. The generalization and interpolation ability of these methods makes it possible to construct a classifier with acceptable performance from incomplete training data. 3. Excluding ...
Fig 2.2 Image pixels showing gray values of corresponding pixel.
Each pixel has a color. The color is a 32-bit integer. The first eight bits determine the redness of the pixel, the next eight bits the greenness, the next eight bits the blueness, and the remaining eight bits the transparency of the pixel.
Fig 2.3 32-bit Representation of RGB color and transparency.
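As an illustration only (this follows the byte layout described above; real pixel layouts vary between systems, e.g. ARGB vs RGBA), the four channels can be unpacked in MATLAB with bit operations. The hex value is a made-up example:
px = uint32(hex2dec('FF336699'));            % hypothetical packed pixel value
r = bitand(bitshift(px, -24), uint32(255));  % first 8 bits: redness
g = bitand(bitshift(px, -16), uint32(255));  % next 8 bits: greenness
b = bitand(bitshift(px, -8),  uint32(255));  % next 8 bits: blueness
a = bitand(px, uint32(255));                 % remaining 8 bits: transparency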
2.2 IMAGE FILE SIZES:
Image file size is expressed as a number of bytes and increases with the number of pixels composing an image and the color depth of the pixels. The greater the number of rows and columns, the greater the image resolution and the larger the file. Also, each pixel of an image increases in size when its color depth increases: an 8-bit pixel (1 byte) can store 256 colors, and a 24-bit pixel (3 bytes) can store 16 million colors; the latter is known as true color.
Image compression uses algorithms to decrease the size of a file. High-resolution cameras produce large image files, ranging from hundreds of kilobytes to megabytes, depending on the camera's resolution and the image-storage format. High-resolution digital cameras record 12-megapixel (1 MP = 1 million pixels) images, or more, in true color. Consider an image recorded by a 12 MP camera: since each pixel uses 3 bytes to record true color, the uncompressed image would occupy 36,000,000 bytes of memory, a great amount of digital storage for one image, given that cameras must record and store many images to be practical. Faced with such large file sizes, both within the camera and on a storage disc, image file formats were developed to store these large images.
2.3 IMAGE FILE FORMATS:
Image file formats are standardized means of organizing and storing images. This entry is about digital image formats used to store photographic and other images. Image files are composed of either pixel or vector (geometric) data that are rasterized to pixels when displayed (with few exceptions) in a vector graphic display. Including proprietary types, there are hundreds of image file types. The PNG, JPEG, and GIF formats are most often used to display images on the Internet.
Fig 2.4 image formats
In addition to straight image formats, Metafile formats are portable formats which can include both raster and vector information. The metafile format is an intermediate format. Most Windows applications open metafiles and then save them in their own native format.
2.3.1 RASTER FORMATS:
These formats store images as bitmaps (also known as pixmaps).
* JPEG/JFIF:
JPEG (Joint Photographic Experts Group) is a compression method. JPEG compressed images are usually stored in the JFIF (JPEG File Interchange Format) file format. JPEG compression is lossy compression. Nearly every digital camera can save images in the JPEG/JFIF format, which supports 8 bits per color (red, green, blue) for a 24-bit total, producing relatively small files. Photographic images may be better stored in a lossless non-JPEG format if they will be re-edited, or if small “artifacts” are unacceptable. The JPEG/JFIF format also is used as the image compression algorithm in many Adobe PDF files.
* EXIF:
The EXIF (Exchangeable image file format) format is a file standard similar to the JFIF format with TIFF extensions. It is incorporated in the JPEG writing software used in most cameras. Its purpose is to record and to standardize the exchange of images with image metadata between digital cameras and editing and viewing software. The metadata are recorded for individual images and include such things as camera settings, time and date, shutter speed, exposure, image size, compression, name of camera, color information, etc. When images are viewed or edited by image editing software, all of this image information can be displayed.
* TIFF:
The TIFF (Tagged Image File Format) format is a flexible format that normally saves 8 bits or 16 bits per color (red, green, blue) for 24-bit and 48-bit totals, respectively, usually using either the TIFF or TIF filename extension. TIFF files can be lossy or lossless. Some offer relatively good lossless compression for bi-level (black and white) images. Some digital cameras can save in TIFF format, using the LZW compression algorithm for lossless storage. The TIFF image format is not widely supported by web browsers, but TIFF remains widely accepted as a photograph file standard in the printing business. TIFF can handle device-specific color spaces, such as the CMYK defined by a particular set of printing press inks.
* PNG:
The PNG (Portable Network Graphics) file format was created as the free, open-source successor to the GIF. The PNG file format supports true color (16 million colors), while the GIF supports only 256 colors. The PNG file excels when the image has large, uniformly colored areas. The lossless PNG format is best suited for editing pictures, while lossy formats like JPG are best for the final distribution of photographic images, because JPG files are smaller than PNG files. PNG is an extensible file format for the lossless, portable, well-compressed storage of raster images. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-color, grayscale, and true color images are supported, plus an optional alpha channel. PNG is designed to work well in online viewing applications, such as the World Wide Web. PNG is robust, providing both full file integrity checking and simple detection of common transmission errors.
* GIF:
GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256 colors. This makes the GIF format suitable for storing graphics with relatively few colors such as simple diagrams, shapes, logos and cartoon style images. The GIF format supports animation and is still widely used to provide image animation effects. It also uses a lossless compression that is more effective when large areas have a single color, and ineffective for detailed images or dithered images.
* BMP:
The BMP file format (Windows bitmap) handles graphics files within the Microsoft Windows OS. Typically, BMP files are uncompressed, hence they are large. The advantage is their simplicity and wide acceptance in Windows programs.
2.3.2 VECTOR FORMATS:
As opposed to the raster image formats above (where the data describes the characteristics of each individual pixel), vector image formats contain a geometric description which can be rendered smoothly at any desired display size.
At some point, all vector graphics must be rasterized in order to be displayed on digital monitors. However, vector images can be displayed with analog CRT technology such as that used in some electronic test equipment, medical monitors, radar displays, laser shows and early video games. Plotters are printers that use vector data rather than pixel data to draw graphics.
* CGM:
CGM (Computer Graphics Metafile) is a file format for 2D vector graphics, raster graphics, and text. All graphical elements can be specified in a textual source file that can be compiled into a binary file or one of two text representations. CGM provides a means of graphics data interchange for computer representation of 2D graphical information independent from any particular application, system, platform, or device.
* SVG:
SVG (Scalable Vector Graphics) is an open standard created and developed by the World Wide Web Consortium to address the need for a versatile, scriptable and all purpose vector format for the web and otherwise. The SVG format does not have a compression scheme of its own, but due to the textual nature of XML, an SVG graphic can be compressed using a program such as gzip.
2.4 IMAGE PROCESSING:
Digital image processing, the manipulation of images by computer, is a relatively recent development in terms of man's ancient fascination with visual stimuli. In its short history, it has been applied to practically every type of image, with varying degrees of success. The inherent subjective appeal of pictorial displays attracts perhaps a disproportionate amount of attention from scientists and from laymen. Digital image processing, like other glamour fields, suffers from myths, misconceptions, misunderstandings and misinformation. It is a vast umbrella under which fall diverse aspects of optics, electronics, mathematics, photography, graphics and computer technology. It is a truly multidisciplinary endeavor, plagued with imprecise jargon.
Several factors combine to indicate a lively future for digital image processing. A major factor is the declining cost of computer equipment. Several new technological trends promise to further promote digital image processing. These include parallel processing made practical by low-cost microprocessors, and the use of charge-coupled devices (CCDs) for digitizing, storage during processing, and display, as well as large, low-cost image storage arrays.
2.5 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING:
2.5.1 Image Acquisition:
Image acquisition is the task of acquiring a digital image. To do so requires an image sensor and the capability to digitize the signal produced by the sensor. The sensor could be a monochrome or color TV camera that produces an entire image of the problem domain every 1/30 s. The image sensor could also be a line-scan camera that produces a single image line at a time. In this case, the object's motion past the line
Fig 2.5 Block diagram of digital image processing.
Fig 2.6 image capturing device camera.
scanner produces a two-dimensional image. If the output of the camera or other imaging sensor is not in digital form, an analog-to-digital converter digitizes it. The nature of the sensor and the image it produces are determined by the application.
Fig 2.7 smart phone camera.
2.5.2 Image Enhancement:
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
Fig 2.8 captured image, image after increase in contrast.
2.5.3 Image restoration:
Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
Fig 2.9 image before restoration and after restoration.
Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result. For example, contrast stretching is considered an enhancement technique because it is based primarily on the pleasing aspects it might present to the viewer, whereas removal of image blur by applying a deblurring function is considered a restoration technique.
2.5.4 Color image processing:
The use of color in image processing is motivated by two principal factors. First, color is a powerful descriptor that often simplifies object identification and extraction from a scene. Second, humans can discern thousands of color shades and intensities, compared to about only two dozen shades of gray. This second factor is particularly important in manual image analysis.
Fig.2.10 a gray image and its color enhanced image.
2.5.5 Wavelets and multiresolution processing:
Wavelets are the foundation for representing images in various degrees of resolution. Although the Fourier transform has been the mainstay of transform-based image processing since the late 1950s, a more recent transformation, called the wavelet transform, is now making it even easier to compress, transmit, and analyze many images. Unlike the Fourier transform, whose basis functions are sinusoids, wavelet transforms are based on small waves, called wavelets, of varying frequency and limited duration.
Wavelets were first shown to be the foundation of a powerful new approach to signal processing and analysis called multiresolution theory. Multiresolution theory incorporates and unifies techniques from a variety of disciplines, including subband coding from signal processing, quadrature mirror filtering from digital speech recognition, and pyramidal image processing.
Fig 2.11 Histogram representation of an image
2.5.6 Compression:
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is particularly true of the Internet, which is characterized by significant pictorial content. Image compression is familiar to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
2.5.7 Morphological processing:
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The language of mathematical morphology is set theory. As such, morphology offers a unified and powerful approach to numerous image processing problems. Sets in mathematical morphology represent objects in an image. For example, the set of all black pixels in a binary image is a complete morphological description of the image.
Fig 2.12 morphological description of an image.
In binary images, the sets in question are members of the 2-D integer space Z², where each element of a set is a 2-D vector whose coordinates are the (x, y) coordinates of a black (or white) pixel in the image. Gray-scale digital images can be represented as sets whose components are in Z³. In this case, two components of each element of the set refer to the coordinates of a pixel, and the third corresponds to its discrete gray-level value.
2.5.8 Segmentation:
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually.
Fig 2.13 original image and segmented sub-images
On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
2.5.9 Representation and description:
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections.
Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
2.5.10 Object recognition:
The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects.
2.5.11 Knowledge base:
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. For example, the system must be endowed with the knowledge to recognize the significance of the location of a character string with respect to other components of an address field. This knowledge guides not only the operation of each module, but also aids in feedback operations between modules through the knowledge base. We implemented the preprocessing techniques using MATLAB.
2.6 Components of an image processing system:
As recently as the mid-1980s, numerous models of image processing systems being sold throughout the world were rather substantial peripheral devices that attached to equally substantial host computers. Late in the 1980s and early in the 1990s, the market shifted to image processing hardware in the form of single boards designed to be compatible with industry standard buses and to fit into engineering workstation cabinets and personal computers. In addition to lowering costs, this market shift also served as a catalyst for a significant number of new companies whose specialty is the development of software written specifically for image processing.
Fig 2.14 Components of an image processing system (problem domain, image sensor, specialized image processing hardware, computer, image processing software, mass storage, image displays, hard copy, network).
Although large-scale image processing systems still are being sold for massive imaging applications, such as processing of satellite images, the trend continues toward miniaturizing and blending of general-purpose small computers with specialized image processing hardware. Fig 2.14 shows the basic components comprising a typical general-purpose system used for digital image processing. The function of each component is discussed in the following paragraphs, starting with image sensing.
* Image sensors:
With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity. The digitizer converts these outputs to digital data.
* Specialized image processing hardware:
Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in parallel on entire images. One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware sometimes is called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughputs (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.
* Computer:
The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, sometimes specially designed computers are used to achieve a required level of performance, but our interest here is on general-purpose image processing systems. In these systems, almost any well-equipped PC-type machine is suitable for offline image processing tasks.
* Image processing software:
Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules and general-purpose software commands from at least one computer language.
* Mass storage:
Mass storage capability is a must in image processing applications. An image of size 1024*1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge. Digital storage for image processing applications falls into three principal categories: (1) short-term storage for use during processing, (2) on-line storage for relatively fast recall, and (3) archival storage, characterized by infrequent access. Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes (one million bytes), Gbytes (meaning giga, or one billion, bytes), and Tbytes (meaning tera, or one trillion, bytes).
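As a quick check of the figure quoted above, computed directly in MATLAB:
pixels = 1024 * 1024;            % image dimensions
bytesPerPixel = 1;               % 8-bit intensity per pixel
bytes = pixels * bytesPerPixel   % 1,048,576 bytes = one megabyte, uncompressed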
One method of providing short-term storage is computer memory. Another is by specialized boards, called frame buffers that store one or more images and can be accessed rapidly, usually at video rates. The latter method allows virtually instantaneous image zoom, as well as scroll (vertical shifts) and pan (horizontal shifts).
Frame buffers usually are housed in the specialized image processing hardware unit shown in Fig 2.14. Online storage generally takes the form of magnetic disks or optical-media storage. The key factor characterizing on-line storage is frequent access to the stored data. Finally, archival storage is characterized by massive storage requirements but infrequent need for access. Magnetic tapes and optical disks housed in "jukeboxes" are the usual media for archival applications.
* Image displays:
Image displays in use today are mainly color (preferably flat screen) TV monitors. Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system. Seldom are there requirements for image display applications that cannot be met by display cards available commercially as part of the computer system. In some cases, it is necessary to have stereo displays, and these are implemented in the form of headgear containing two small displays embedded in goggles worn by the user.
* Hardcopy:
Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units, such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written material. For presentations, images are displayed on film transparencies or in a digital medium if image projection equipment is used. The latter approach is gaining acceptance as the standard for image presentations.
* Network:
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks, this typically is not a problem, but communications with remote sites via the Internet are not always as efficient. Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies.
CHAPTER-3
EXISTING METHODS
Automatic number plate detection systems are very useful to law enforcement agencies, as the need for radio-frequency identification tags and similar equipment is minimized. Since number plate guidelines are not strictly practiced everywhere, it often becomes difficult to correctly identify non-standard number plate characters. Many techniques have been implemented for the detection of number plates.
In recent years, many character segmentation methods have been developed based on text character segmentation methods for printed documents, such as the vertical projection algorithm, the Hough transform algorithm, cluster-based methods, and novel adaptive morphological approaches. The existing techniques and algorithms for segmenting characters have not been successfully applied to degraded real-world images. Some of these algorithms have low efficiency, adaptability, and noise resistance, and some are computationally complex and cannot be used for real-time LPR systems.
The following information gives different methods for the license plate detection.
METHODS:
Using boundary features:
• The boundary of the license plate is rectangular
• Simplest, fastest and most straightforward
• Hard to apply to complex images, since such methods are too sensitive to unwanted edges
Using global image features:
• Looks for a finite connected object whose dimensions are like those of a license plate
• Straightforward and independent of the license plate position
• May generate broken objects.
Using character features:
• There must be characters on the license plate
• Robust to rotation
• Time-consuming (all binary objects are processed); produces detection errors when other text appears in the image
Using two or more features:
• Combining features is more effective
• More reliable
• Computationally complex
Our proposed method is based on retraining the Artificial Neural Network (ANN) by building a new sample font database. The Artificial Neural Network uses a Graphical User Interface.
This can improve system performance and make the system more effective by taking relevant samples.
CHAPTER-4
INTRODUCTION TO MATLAB
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.
Typical uses include
* Math and computation
* Algorithm development
* Data acquisition
* Modeling, simulation, and prototyping
* Data analysis, exploration, and visualization
* Scientific and engineering graphics
* Application development, including graphical user interface building
Fig 4.1 matlab icon
MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C or FORTRAN.
The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.
MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.
MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most uses of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M – files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.
4.1 The MATLAB system:
The MATLAB system consists of five main parts:
* Development Environment:
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and command window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.
* The MATLAB Mathematical Function Library:
This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
* The MATLAB Language:
This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both “programming in the small” to rapidly create quick and dirty throw-away programs, and “programming in the large” to create large and complex application programs.
* Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.
* The MATLAB Application Program Interface (API):
This is a library that allows you to write C and FORTRAN programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.
There are various toolboxes in MATLAB for computing recognition techniques, but we are using the IMAGE PROCESSING toolbox.
4.2 GRAPHICAL USER INTERFACE (GUI):
MATLAB's Graphical User Interface Development Environment (GUIDE) provides a rich set of tools for incorporating graphical user interfaces (GUIs) in M-functions. Using GUIDE, the processes of laying out a GUI (i.e., its buttons, pop-up menus, etc.) and programming the operation of the GUI are divided conveniently into two easily managed and relatively independent tasks. The resulting graphical M-function is composed of two identically named (ignoring extensions) files:
* A file with extension .fig, called a FIG-file, that contains a complete graphical description of all the function's GUI objects or elements and their spatial arrangement. A FIG-file contains binary data that does not need to be parsed when the associated GUI-based M-function is executed.
* A file with extension .m, called a GUI M-file, which contains the code that controls the GUI operation. This file includes functions that are called when the GUI is launched and exited, and callback functions that are executed when a user interacts with GUI objects for example, when a button is pushed.
To launch GUIDE from the MATLAB command window, type guide filename, where filename is the name of an existing FIG-file on the current path. If filename is omitted, GUIDE opens a new (i.e., blank) window.
Fig 4.2 Blank matlab GUIDE window
A graphical user interface (GUI) is a graphical display in one or more windows containing controls, called components, that enable a user to perform interactive tasks. The user of the GUI does not have to create a script or type commands at the command line to accomplish the tasks. Unlike with coded programs, the user of a GUI need not understand the details of how the tasks are performed.
GUI components can include menus, toolbars, push buttons, radio buttons, list boxes, and sliders just to name a few. GUIs created using MATLAB tools can also perform any type of computation, read and write data files, communicate with other GUIs, and display data as tables or as plots.
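GUIDE generates the FIG-file and M-file pair automatically. As a rough sketch of the same callback idea done programmatically (this is not the project's actual GUI; the window layout and names are made up), a button with a callback can be created with uicontrol:
function simple_browse_gui
% Minimal programmatic GUI: one Browse button that loads and displays an image.
fig = figure('Name', 'ANPR demo', 'NumberTitle', 'off');
ax = axes('Parent', fig, 'Position', [0.1 0.25 0.8 0.7]);
uicontrol('Parent', fig, 'Style', 'pushbutton', 'String', 'Browse', ...
    'Units', 'normalized', 'Position', [0.4 0.05 0.2 0.1], ...
    'Callback', @onBrowse);
    function onBrowse(~, ~)
        [f, p] = uigetfile({'*.jpg;*.png;*.bmp', 'Image files'});
        if isequal(f, 0), return; end          % user cancelled the dialog
        imshow(imread(fullfile(p, f)), 'Parent', ax);
    end
end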
CHAPTER-5
LICENSE PLATE RECOGNITION USING SEGMENTATION
The aim of this project is to build an application which recognizes the number plate of a vehicle at the entrance of a gate. Using a camera, the system captures the number plate of a car and processes it. Once the number plate is detected, recognition is done with the help of a database. The aim of the project is to avoid traffic issues and similar problems. The algorithm used in this project includes the following steps:
• Pre-processing
• Segmentation
• Recognition
• Comparison
INPUT IMAGE:
Browsing for an image is the first stage of this process. An image is read from the specified location in memory into the GUI. This image is later processed to recognize the number on the number plate and to authenticate it. The processes involved in this stage are shown in the flow chart below.
Fig 5.1 Flow chart for browsing and resizing the image (start, read an image, resize the image, load the image into the required area, end).
5.1 IMAGE READING:
imread( )
This is a MATLAB function which is used to read an image for processing.
Syntax
A=imread(filename,fmt)
Description
A = imread(filename,fmt) reads a grayscale or truecolor image named filename into A. If the file contains a grayscale intensity image, A is a two-dimensional array. If the file contains a truecolor (RGB) image, A is a three-dimensional (m-by-n-by-3) array.
filename is a string that specifies the name of the graphics file, and fmt is a string that specifies the format of the file. If the file is not in the current directory or in a directory on the MATLAB path, specify the full pathname of the location on your system. If imread cannot find a file named filename, it looks for a file named filename.fmt.
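A minimal usage sketch, assuming a truecolor file; 'car.jpg' is a placeholder name, not a file from this project:
A = imread('car.jpg');   % truecolor JPEG -> m-by-n-by-3 uint8 array
whos A                   % shows the size and class of the loaded image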
5.2 IMAGE RESIZING:
In computer graphics, image scaling or resizing is the process of resizing a digital image. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness and sharpness. As the size of an image is increased, the pixels which comprise the image become increasingly visible, making the image appear "soft". Conversely, reducing an image will tend to enhance its smoothness and apparent sharpness.
Apart from fitting a smaller display area, image size is most commonly decreased (or subsampled or downsampled) in order to produce thumbnails. Enlarging an image (upsampling or interpolating) is common for making smaller imagery fit a bigger screen in fullscreen mode, for example. In zooming an image, it is not possible to discover any more information in the image than already exists, so image quality inevitably suffers. However, there are several methods of increasing the number of pixels that an image contains, which evens out the appearance of the original pixels.
5.3 SCALING METHODS:
An image size can be changed in several ways. Consider doubling the size of the following image:
Fig 5.2 image of size rxc
The easiest way of doubling its size is nearest-neighbor interpolation, replacing every pixel with four pixels of the same color:
Fig 5.3 image of size 2(rxc)
The resulting image is larger than the original, and preserves all the original detail, but has undesirable jaggedness. The diagonal lines of the W, for example, now show the characteristic “stairway” shape.
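A minimal sketch of this pixel-replication idea in MATLAB, using cameraman.tif, a sample image shipped with the Image Processing Toolbox; kron replaces every pixel with a 2x2 block of the same value:
I = double(imread('cameraman.tif'));   % grayscale sample image
I2 = kron(I, ones(2));                 % nearest-neighbor doubling: 2x2 blocks
imshow(uint8(I2))                      % same detail, visible jaggedness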
Other scaling methods are better at preserving smooth contours in the image. For example, bilinear interpolation produces the following result:
Fig 5.4 bilinear interpolation output.
Linear (or bilinear, in two dimensions) interpolation is typically better than the nearest-neighbor system for changing the size of an image, but causes some undesirable softening of details and can still be somewhat jagged. Better scaling methods include bicubic interpolation (example below) and Lanczos resampling.
Fig 5.5 bicubic interpolation output.
For magnifying computer graphics with low resolution and/or few colors (usually from 2 to 256 colors), the best results will be achieved by hqx or other pixel art scaling algorithms. These produce sharp edges and maintain a high level of detail. An example is hq2x:
Fig 5.6 image with sharp edges
For scaling photos (and raster images with many colors), see also anti-aliasing algorithms such as supersampling.
5.4 ALGORITHMS
Two standard scaling algorithms are bilinear and bicubic interpolation. Filters like these work by interpolating pixel color values, introducing a continuous transition into the output even where the original material has discrete transitions. Although this is desirable for continuous-tone images, some algorithms reduce contrast (sharp edges) in a way that may be undesirable for line art.
Nearest-neighbor interpolation preserves these sharp edges, but it increases aliasing (or jaggies; where diagonal lines and curves appear pixelated).
Several approaches have been developed that attempt to optimize for bitmap art by interpolating areas of continuous tone, preserving the sharpness of horizontal and vertical lines, and smoothing all other curves.
imresize( )
This function resizes the image, i.e., it changes the resolution of the image to a larger or smaller one.
Syntax
B = imresize(A, scale)
B = imresize(A, [num_rows num_cols])
Description
B = imresize(A, scale) returns image B that is scale times the size of A. The input image A can be a grayscale, RGB, or binary image. If scale is between 0 and 1.0, B is smaller than A. If scale is greater than 1.0, B is larger than A.
B = imresize(A, [numrows numcols]) returns image B that has the number of rows and columns specified by [numrows numcols]. Either numrows or numcols may be NaN, in which case imresize computes the number of rows or columns automatically to preserve the image aspect ratio.
Using these functions, the image is modified and loaded into the image area on the GUI on a click of the browse button.
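Hedged usage examples of both call forms, with 'car.jpg' again a placeholder file name:
img   = imread('car.jpg');          % placeholder image
half  = imresize(img, 0.5);         % scale factor: half the original size
fixed = imresize(img, [300 NaN]);   % 300 rows; columns chosen to keep aspect ratio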
Fig 5.7 image before resizing and after resizing
The Browse button's function is defined such that it imports the image and resizes it for display over the image area on the GUI.
Fig 5.8 The GUI window showing the Browse tab.
5.5 Segmentation:
Segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels).
The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.
The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image (see edge detection).
Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).
After the image is resized, it goes through some processes before segmentation, and finally the image is segmented. The steps involved are shown in the flow chart below.
Fig 5.9 Flow chart of the segmentation process (input resized image, RGB-to-gray conversion, thresholding, median filtering, filling the holes, removing minimum-connectivity regions, finding the maximum connectivity, region properties, bounding box, deleting the rows, image segmented successfully).
5.5.1 RGB to Gray Conversion
rgb2gray( )
This is the MATLAB function that converts an RGB colour image to a grayscale image. In order to apply certain processes to images, we need to convert colour images to grayscale, as those operations cannot be performed on colour images directly.
Syntax:
I = rgb2gray(RGB)
If the input is an RGB image, it can be of class uint8, uint16, single, or double. The output image I is of the same class as the input image. If the input is a colormap, the input and output colormaps are both of class double.
Algorithm:
rgb2gray converts RGB values to grayscale values by forming a weighted sum of the R, G, and B components:
0.2989 * R + 0.5870 * G + 0.1140 * B
Fig 5.10 RGB to gray image
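The built-in function and the weighted sum above agree up to rounding. A small sketch using peppers.png, a sample image shipped with MATLAB:
rgb   = imread('peppers.png');
gray1 = rgb2gray(rgb);                   % built-in conversion
d     = double(rgb);
gray2 = uint8(0.2989*d(:,:,1) + 0.5870*d(:,:,2) + 0.1140*d(:,:,3));  % same weights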
5.5.2 Deleting Rows & Columns:
We can delete rows and columns from a matrix using just a pair of square brackets. Start with
A = [16 3 2 13
5 10 11 8
9 6 7 12
4 15 14 1]
X = A;
Then, to delete the second column of X, use
X(:,2) = []
This changes X to
X =
16 2 13
5 11 8
9 7 12
4 14 1
Since an image is represented as a matrix of pixel values, of the order of the image resolution, rows and columns can be deleted from the image in order to extract the required part of the image.
Fig 5.11 cropped gray scale image.
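The same indexing idea applied to an image; the crop bounds below are hypothetical values that would come from the plate-localization step:
gray  = rgb2gray(imread('car.jpg'));   % placeholder file name
plate = gray(120:180, 200:420);        % keep only the candidate plate rows and columns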
5.5.3 Thresholding
Gray-level thresholding is the simplest segmentation process. Many objects or image regions are characterized by constant reflectivity or light absorption of their surfaces.
Thresholding is computationally inexpensive and fast and can easily be done in real time using specialized hardware.
A complete segmentation of an image R is a finite set of regions R1, ..., RS such that their union is R and no two regions overlap (Ri ∩ Rj = ∅ for i ≠ j).
For binary images, there is a single threshold T:
g(i,j) = 1 for f(i,j) ≥ T
g(i,j) = 0 for f(i,j) < T
Thresholding algorithm
Search all the pixels f(i,j) of the image f. An element g(i,j) of the segmented image is an object pixel if f(i,j) ≥ T, and a background pixel otherwise.
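The rule above is a one-line logical comparison in MATLAB; f is assumed to be a grayscale intensity matrix and T a chosen threshold (both values here are placeholders):
f = double(rgb2gray(imread('car.jpg')));   % placeholder image
T = 128;                                   % example threshold
g = f >= T;    % logical image: object pixels are 1, background pixels are 0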
As a simple experiment, save an image to your directory, open MATLAB, and download threshgl.m. Load the image into MATLAB and run the function with the command:
>> xt = threshgl(x, T_low, T_high); % where x is the image matrix
By trial and error, determine the threshold that segments the object from the background. To do this, adjust the upper and lower thresholds (T_high, T_low) of the function, and determine the interval in which the threshold has to lie to perform the desired segmentation.
Correct threshold selection is crucial for successful threshold segmentation. Threshold selection can be interactive or can be the result of some threshold detection method.
Optimal thresholding
* Based on approximation of the histogram of an image using a weighted sum of two or more probability densities with normal distributions
* The threshold is set as the closest gray level corresponding to the minimum probability between the maxima of two or more normal distributions, which results in minimum error segmentation
Algorithm for optimal thresholding:
1. Assuming no knowledge about the exact location of objects, consider as a first approximation that the four corners of the image contain background pixels only and the remainder contains object pixels.
2. At step t, compute µBt and µOt as the mean background and object gray levels, respectively, where the segmentation into background and objects at step t is defined by the threshold Tt determined in the previous step:
µBt = ( Σ(i,j)∈background f(i,j) ) / (number of background pixels)
µOt = ( Σ(i,j)∈objects f(i,j) ) / (number of object pixels)
3. Set T(t+1) = (µBt + µOt) / 2. T(t+1) now provides an updated background/object distinction.
4. If T(t+1) = T(t), halt; otherwise return to step 2.
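A minimal MATLAB sketch of this iterative scheme (essentially the Ridler-Calvard method); the image name and the stopping tolerance are assumptions, and both classes are assumed to stay non-empty:
f = double(rgb2gray(imread('car.jpg')));        % placeholder image
corners = [f(1,1), f(1,end), f(end,1), f(end,end)];
muB = mean(corners);          % step 1: corners approximate the background
muO = mean(f(:));             % crude initial object estimate
T = (muB + muO) / 2;
while true
    muB = mean(f(f <  T));    % step 2: mean background gray level
    muO = mean(f(f >= T));    %         mean object gray level
    Tnew = (muB + muO) / 2;   % step 3: updated threshold
    if abs(Tnew - T) < 0.5, break; end   % step 4: halt when T stabilizes
    T = Tnew;
end
g = f >= T;                   % final background/object segmentation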
5.5.4 Median Filtering
The median filter is a nonlinear digital filtering technique, often used to remove noise. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image).
Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see discussion below).
The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) signals such as images, more complex window patterns are possible (such as "box" or "cross" patterns).
Note that if the window has an odd number of entries, then the median is simple to define: it is just the middle value after all the entries in the window are sorted numerically.
The following figure shows the working of a median filter on a neighbourhood to replace the value of the center pixel.
Fig 5.12 working of a median filter on a pixel neighbourhood
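A hedged sketch using the Image Processing Toolbox's medfilt2; the noise level is arbitrary and the image is a toolbox sample:
I = im2double(imread('cameraman.tif'));
noisy = imnoise(I, 'salt & pepper', 0.05);   % add impulse noise
clean = medfilt2(noisy, [3 3]);              % 3x3 window, median of 9 entries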
5.5.5 Segmented Number Plate
Fig 5.13 output of image showing segmented number plate
By clicking on the "Segment" button in the GUI window, we can extract the number plate from the image.
Fig 5.14 Graphical User Interface window showing segmented number plate
5.6 Database
The database contains the data which is compared with the image of the number plate in order to recognize the characters. It contains 26 (English alphabets) + 10 (numbers) = 36 images (classes), which are checked against the image of the license plate. So we have to go to the Current Directory, read the file, and TRAINING is given to that file.
Normalization provides a tremendous reduction in the data size of the characters. For a perfect match, input images must be equal-sized with the database characters. The extracted characters cut from the plate and the characters in the database are now equal-sized.
The database image for each character is stored in the CURRENT DIRECTORY to compare with each segmented letter from the license plate.
5.6.1 STEPS FOR CREATING THE DATABASE:
* Get created images of each and every character: 26 (alphabets) + 10 (digits) = 36 characters.
* Collect the database onto the file path.
* Move the whole database to the current directory.
* Read the character image which is to be compared with.
* Create a TRAIN.m MATLAB program for recognition of characters.
* Convert the color RGB image to a GRAY image.
* Convert the obtained GRAY image to a black and white image.
* Resize the image to a uniform size.
* Complement all pixels in the image.
* Find the connectivity path of each character.
* Consider the area path of the character.
* Image resizing and image complementing are applied again.
* The database entry for each character is now done.
(A code sketch of these steps is given after the flow chart below.)
The following flow chart shows the process of creating the database:
Fig 5.15 Flow chart of database creation (get the image from the created images, place the database on the file path, move it into the current directory, read the image, TRAIN, convert RGB to gray, convert gray to black and white, resize and complement the image, find the maximum connectivity and consider its area, resize and complement again, database done).
5.7 RECOGNITION OF CHARACTERS:
For character recognition we use the Optical Character Recognition (OCR) technique. Each image from the database is taken in turn and checked against the split images of the number plate to find the image of maximum correlation (a procedure called training), i.e. similarity. If a match is found, the letter is identified as the database character used for training. Thus the characters on the number-plate image are recognized one by one.
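A minimal sketch of this matching step is given below. Here templates is assumed to be a 1-by-36 cell array of equal-sized database images and chars a cell array of segmented plate characters; both names are hypothetical.

% Correlation-based character recognition (sketch).
labels = ['A':'Z', '0':'9'];              % the 36 database classes
plate  = '';
for i = 1:numel(chars)                    % each segmented character
    scores = zeros(1, numel(templates));
    for j = 1:numel(templates)
        scores(j) = corr2(double(chars{i}), double(templates{j})); % similarity
    end
    [~, best] = max(scores);              % image of maximum correlation
    plate(end+1) = labels(best);          % recognized character
end
disp(plate)                               % recognized number-plate string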
START → Convert RGB2GRAY → Convert GRAY2BW → Image complementing → Labeling → Find the max connectivity → Consider the area → Store the values and sort in descending order → Consider the first 10 values → Remove the unwanted region → SPLIT the image into individual characters → Separate the different objects into a cell array → Image resize → Image complementing → RECOGNITION is DONE → END
Fig 5.16 flowchart showing character recognition
5.7.1 IMPLEMENTING SPLIT FUNCTION
* First, the pixel values are stored in an empty matrix using a thresholding technique, converting the image into BW.
* That image is complemented to obtain better-connected regions of the image.
* The range is decided by the number of connected regions.
Converting image into BW
Fig 5.17 black and white license plate image
Complementing that Image
Fig 5.18 complemented image
5.7.2 STEPS INVOLVED IN PROCESSING THE CHARACTERS:
* The area of each connected region of ones is calculated and stored in an empty matrix.
* The areas are arranged in descending order using the sort function.
* After sorting, the first 10 values, i.e. the maximum values, are considered.
* The left-over regions are removed using a condition.
* Labeling is applied again to find the connectivity of the new pixel values (the number of pixel values is reduced compared to before).
* After splitting the image, the DATABASE is compared with the obtained image; for this a correlation function is used.
* The maximum value obtained while correlating (the value obtained in a matrix) is found.
* The results are stored in an empty matrix.
* Using the length of this matrix, the numbers are recognized and the number plate is displayed.
* This value is stored in OUTPUT.TXT to compare with DATABASE.TXT (a code sketch of the split step follows this list).
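The split-and-select procedure above can be sketched in MATLAB as follows. The input file name and the 42-by-24 resize target are illustrative assumptions, and a real implementation would also re-order the kept regions from left to right before matching.

% Splitting the plate image into individual characters (sketch).
g  = rgb2gray(imread('plate.jpg'));              % assumed plate image
bw = imcomplement(im2bw(g, graythresh(g)));      % BW, then complement
[L, n] = bwlabel(bw);                            % label connected regions
stats  = regionprops(L, 'Area', 'BoundingBox');
[~, order] = sort([stats.Area], 'descend');      % areas in descending order
keep  = order(1:min(10, n));                     % first 10, i.e. the max values
chars = cell(1, numel(keep));
for k = 1:numel(keep)
    box = stats(keep(k)).BoundingBox;            % region of one character
    chars{k} = imcomplement(imresize(imcrop(L == keep(k), box), [42 24]));
end
% chars can now be compared with the database templates by correlation.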
Fig 5.19 character recognition
5.8 COMPARISON:
A database is a collection of information or data that is organized so it can be easily accessed and updated. A database can contain text, numbers and images. Here, the database must be large enough to contain the extracted characters, and the recognized vehicle license-plate numbers are stored in a notepad file for the purpose of comparison. Enlarging the database improves the accuracy and the chances of obtaining the correct result. Class recognition checks whether the computed ratio is correct; a class is ignored if its thresholding score is too low. If a class matches correctly, it is compared with the data set in the database and an output is produced if the image is recognized correctly. The characters are compared to determine whether the input image is authenticated: the recognized license plate is compared with the database, which contains the list of authorized license plates. The comparison is done line by line, checking every character in each line.
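A minimal sketch of this comparison, assuming OUTPUT.TXT holds the recognized plate string and DATABASE.TXT holds one authorized plate per line, is:

% Line-by-line comparison of the recognized plate with the authorized list.
recognized = strtrim(fileread('OUTPUT.TXT'));    % recognized plate string
fid = fopen('DATABASE.TXT', 'r');
authorized = false;
tline = fgetl(fid);
while ischar(tline)                              % check every line
    if strcmp(strtrim(tline), recognized)        % every character must match
        authorized = true;
        break
    end
    tline = fgetl(fid);
end
fclose(fid);
if authorized
    disp('Vehicle authenticated')
else
    disp('Vehicle not in database')
end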
Fig 5.20 sample characters in database
CHAPTER-6
SIMULATION RESULTS
Simulation outputs:
Fig 6.1 graphical user interface window as output
Fig 6.2 selecting an input image for recognition.
Fig 6.3 image read by graphical user interface
Fig 6.4 license plate is extracted from image
Fig 6.5 recognition of characters.
Fig 6.6 resultant output is displayed on command window
SIMULATION OUTPUTS FOR SOME MORE IMAGES:
Fig 6.7 input car image
Fig 6.8 segmented number plate
Fig 6.9 character recognition
Fig 6.10 input car image
Fig 6.11 segmented number plate
Fig 6.12 character recognition
The system’s accuracy in locating the number plate is more than 98%. The problem encountered in earlier systems of locating the number plate when vehicle bodies and their number plates have similar colors was overcome, thereby achieving higher accuracy in the number-plate extraction step. As the fonts vary from one number plate to another, ambiguous situations may arise in recognizing the characters ‘G’ and ‘C’, ‘I’ and ‘1’, ‘7’ and ‘T’ and the like, since the OCR template was developed for one particular font. Some of these were overcome by a “character categorization” approach.
Units of LPR system | Correct / total | Accuracy
Extraction of plate region | 332/340 | 97.6%
Segmentation | 327/340 | 96.2%
Recognition of characters | 336/340 | 98.8%
Table 6.1 results of the test
CHAPTER-7
Conclusion
CONCLUSION:
The recognition of numbers from vehicle number plates will definitely prove to be an effective means of immigration control and law enforcement on the roads. Vehicles breaking traffic rules can also be recognized and warned for such offenses. This provides a better means of control and lessens the burden on the traffic police.
A new method of segmenting the license-plate characters based on majority pixel-value data has been presented. We have also addressed the issue of building the database as per user convenience, so that the user has the option to train the neural networks with the fonts that are most relevant and most used in a particular geographic location. This is entirely optional, i.e. the user can change the network to get better results. The algorithm has been tested on 340 images, and the accuracy of the system is found to be about 98%. The major sources of error were skewness of the license plate and extreme variation in illumination conditions, which can be removed by enhancing the approach further.
FUTURE SCOPE
FUTURE SCOPE:
LPR, as a means of vehicle identification, may be further exploited in various ways, such as vehicle model identification as well as under-vehicle surveillance. For the vehicle model identification task, the license plate position could play an important role in segmenting a distinctive reference area of the vehicle’s frontal view. We are currently undertaking research for extracting key points from the vehicle mask on the basis of license plate position and creating a digital signature for every vehicle model. Moreover, for under-vehicle inspection, it is assumed that a template under-vehicle image for each inspected vehicle has been archived into a database in advance. Based on the incoming vehicle license plate, the respective template image is retrieved from the database and then compared to the one acquired during real-time under-vehicle inspection.
In our future work, techniques for deriving intrinsic images (e.g., illumination, reflectance and depth images) from a scene image or a number of input images are recommended. Intrinsic images containing only one intrinsic characteristic of the scene are viewpoint dependent and can be of great use for many visual inferences, such as image segmentation, view-based template matching, and object reconstruction. Recall that at the identification stage we have omitted a normalization step to transform extracted license plates to a prescribed size and orientation. Adding this step would improve the performance of license number identification. However, normalization requires knowing the boundaries of either license plates or license numbers. The former may be invisible if vehicle bodies and license plates have similar colors, while detecting boundaries of license numbers can be error-prone. We leave these issues to be considered in future study. Furthermore, the proposed neural approach for character recognition is basically unsupervised. In general, supervised methods can outperform unsupervised ones if rich training sets are available. We may later investigate supervised approaches.
A number of strategies have been introduced to reduce the time complexity of the proposed LPR algorithm. The color edge detector reduces the processing time by ignoring irrelevant edges at an early stage; the topological sorter limits the set of template candidates for the character test at the identification stage. Obviously, there is more that can be done to improve the processing time. However, in order to make our techniques applicable to real-time applications in less restrictive working conditions, topics such as replacing firmware components with hard-wired ones and using parallel machines remain to be investigated.