Automatic Real-time Vehicle Classification
by Image Colour Component Based Template Matching
Ahmet Orun*
De Montfort University, Faculty of Computing, Engineering and Media,
Leicester UK Email: aorun@dmu.ac.uk. Phone: +44(0)116 3664408
Abstract
Selection of appropriate template matching algorithms to run effectively on real-time low-cost systems is always a major issue. This is due to unpredictable changes in the image scene, which often necessitate more sophisticated real-time algorithms to retain image consistency. The inefficiency of low-cost auxiliary hardware and time limitations are the major constraints on using these sorts of algorithms. The real-time system introduced here copes with these problems by utilising a fast-running template matching algorithm which makes use of best colour band selection. The system uses fast-running real-time algorithms to achieve template matching and vehicle classification at about 4 frames/sec on low-cost hardware. The colour image sequences have been taken by a fixed CCTV camera overlooking a busy multi-lane road.
Keywords: Vehicle classification, template matching, CCTV, colour image
1. Introduction
Template matching is a simple method for achieving the fast classification of vehicles in image sequences
taken by a CCTV camera. A description of template matching and the underlying theory can be found in
Ballard and Brown (1982) or Davies (1997). The advantage of template matching over other more elaborate
methods for vehicle classification is that it can be applied in real-time using only low cost hardware. Many
authors describe methods for segmenting vehicles in image sequences. Rittscher et al. (2000) describe a
Markov random field model for pixel grey levels. The model is used to find pixels which have a high
probability of being contained within a vehicle. Beymer et al. (1997) group feature points to locate the image
of a vehicle. Koller et al. (1994) initialise a vehicle tracking algorithm by detecting regions in which the
current image in a sequence differs from the background image. They fit a spline contour about each region
and track using two Kalman filters, one for the motion of the vehicle and the other for the control points of
the spline contour. Cucchiara et al. (1999) implement a vehicle detection and tracking system on low cost
hardware, namely SRAM based Field Programmable Gate Arrays. A moving vehicle is detected by frame
image differencing. This yields a ‘blob’ approximating to the apparent shape of the vehicle. The apparent shape
is recovered more accurately by adjusting the blob such that its boundaries coincide with image regions with
high grey level gradients. Rajagopalan et al. (1999) detect vehicles by modelling the higher order moments
of the grey level values within vehicle outlines. In contrast with the many papers on segmenting images of
vehicles, there are far fewer papers on vehicle classification. Sullivan (1992) uses wire frame models to
locate and classify vehicles in a single image. Lim et al. (1995) present a vehicle classification system for
electronic road pricing. There are many other general methods for object classification, for example methods
based on simple properties of regions segmented out of the image. These properties include area, perimeter,
semi-major axis, semi-minor axis (Kitchen and Pugh 1983). Such methods are not useful in this application
because the blobs corresponding to vehicles are poorly structured, as shown for example in Figure 1. However, none of the above papers deals with the use of colour to improve classification results.
The main contributions of this paper are two related techniques: one achieves vehicle classification by real-time fast template matching, and the other is a colour band selection technique which exploits the colour properties of sequential video imagery to enhance the quality of the templates. At present the colour band selection technique is implemented off-line because of the amount of processing needed. There are more
comprehensive methods of exploiting the colour properties of imagery, such as the Principal Component Transform (Umbaugh et al. 1993), but these techniques require a large amount of computation and are not suitable for real-time applications running on low-cost systems. The proposed colour band selection technique is comparable to the band selection criteria used in the Colour Transformation Technique introduced by Ercal et al. (1993). Four classes of vehicle, car, van, lorry.1 (small lorry) and lorry.2 (large lorry), have
been selected to test the performance of the fast template matching algorithm (without colour band selection).
The templates for each class were generated interactively from an initial image sequence. The complete system
has been tested offline using image sequences. It runs on a 500 MHz PC at 4 frames/sec. The algorithms
introduced here are specifically tailored for use with static cameras, operating in scenes in which vehicles are
assumed to follow fixed routes. Small changes of camera pan/tilt angles do not affect the template matching
operations.
2. Template generation
For the first method (fast template matching without band selection) the templates are generated off-line by a
human operator using a typical image sequence. In this application the images are RGB, and of size 768x576
pixels. The first step is to produce a background image by calculating the mean of the first 20-30 images in the
sequence. More elaborate background detection algorithms are possible (Toyama et al. 1999) but simple
averaging is accurate enough in this case. The averaging is carried out incrementally as follows: Let M(i,j)curr
be the current background image calculated from k images. Let F(i,j)curr be the (k+1)th image. The value M(i,j)new
of the (i,j)th pixel in the new background image incorporating F(i,j)curr is given by:
M(i,j)new = (k/(k+1)) M(i,j)curr + (1/(k+1)) F(i,j)curr,   1 ≤ i ≤ m, 1 ≤ j ≤ n.   (2.1)
The averaging is done separately for each RGB band. The calculation of the background image is initialised
by setting k = 0, M(i,j)curr = 0.
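As an illustration only, the incremental averaging of equation 2.1 can be sketched in a few lines of NumPy; the function name, array shapes and frame count below are assumptions for the sketch, not details of the original implementation.

```python
import numpy as np

def update_background(mean_bg, new_frame, k):
    # Equation 2.1: M_new = (k/(k+1)) * M_curr + (1/(k+1)) * F_curr,
    # applied separately to each RGB band (the arrays carry all three bands).
    return (k / (k + 1.0)) * mean_bg + (1.0 / (k + 1.0)) * new_frame

# Initialisation as in the text: k = 0 and M(i,j)curr = 0 for a 768x576 RGB image.
background = np.zeros((576, 768, 3), dtype=np.float64)
# for k, frame in enumerate(first_frames):   # e.g. the first 20-30 RGB frames
#     background = update_background(background, frame.astype(np.float64), k)
```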
The operator selects a region of interest in the image of size about 200x150 pixels, extending across the
width of the road. When a suitable vehicle enters the region of interest, a binary template for it is calculated as follows:
a) The background image and the current image are converted into grey level images K and I1, respectively, by averaging the RGB values for each pixel, (R+G+B)/3 (Jain et al. 1995; Bourke 1989).
b) The image I1 is subtracted from K to give the image I(i,j)subs defined by:
I(i,j)subs = |I1(i,j) - K(i,j)|,   1 ≤ i ≤ m, 1 ≤ j ≤ n.   (2.2)
c) The image Isubs is converted to a binary image B by thresholding. The threshold value T is selected manually by inspecting the histogram of the image. The histogram usually has two peaks and the threshold value is selected between them. This algorithm requires too much interaction with the operator. A code sketch of steps (a)-(c) is given below.
=otherwise0
),(if1
),( TjiI
jiB subs
( 1 i n , 1 j m ) (2.3)
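The following NumPy sketch runs steps (a)-(c) end to end, assuming the background and the current frame are available as RGB arrays and that the threshold T has already been chosen from the histogram; the function and variable names are illustrative, not taken from the original system.

```python
import numpy as np

def binary_vehicle_image(background_rgb, frame_rgb, T):
    # (a) Grey level conversion by averaging the RGB values: (R + G + B) / 3
    K = background_rgb.mean(axis=2)
    I1 = frame_rgb.mean(axis=2)
    # (b) Absolute difference image, equation 2.2
    I_subs = np.abs(I1 - K)
    # (c) Threshold at T (chosen between the two histogram peaks), equation 2.3
    return (I_subs >= T).astype(np.uint8)

# A region of interest of about 200x150 pixels can then be cropped from the result,
# e.g. B[y0:y0 + 150, x0:x0 + 200] for an operator-chosen top-left corner (y0, x0).
```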
Figure 1. An example binary image from the CCTV camera.
Figure 2. A typical CCTV image derived from video sequences.
An example binary image B is shown in Figure 1. The templates are selected from B by hand. A typical
template is shown in Figure 3. Vehicle template data are in binary format and stored in the system video
memory (VRAM) for real-time fast matching.
Figure 3. Typical template of a car in binary image format.
3. Template matching
Three types of template matching functions were considered: a) sum of absolute differences; b) normalised
cross-correlation; c) sum of squared differences. Type (a), “sum of absolute differences” (Hussain 1991)
was selected because it has a low computational cost, in contrast with (b), and because it does not overweight
outlying values, in contrast with (c). Other more specialised template matching algorithms which are invariant
to scale or orientation were not considered because of their high computational cost. The matching functions
(a), (b) and (c) are shown in equations 3.1, 3.2 and 3.3 respectively. Here Fclass is a vehicle template and
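As a minimal sketch only, the sum of absolute differences of option (a) can be computed and searched over a region of interest as follows; the brute-force search and the names used here are illustrative assumptions and do not reproduce the exact formulation of equations 3.1, 3.2 and 3.3.

```python
import numpy as np

def sad(window, template):
    # Sum of absolute differences between an image window and a binary template.
    return int(np.abs(window.astype(np.int32) - template.astype(np.int32)).sum())

def best_sad_match(roi, template):
    # Brute-force search: slide the template over the region of interest and
    # keep the position with the smallest sum of absolute differences.
    th, tw = template.shape
    best_pos, best_score = None, None
    for y in range(roi.shape[0] - th + 1):
        for x in range(roi.shape[1] - tw + 1):
            score = sad(roi[y:y + th, x:x + tw], template)
            if best_score is None or score < best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score
```

In this sketch a vehicle would simply be assigned to the class (car, van, lorry.1 or lorry.2) whose template gives the lowest score over the region of interest.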