Vol. 9 (2000):
Abstracts of Papers
Nos. 1/2: Special Issue:
Proc. 6th International Conference on Computer Graphics and Image Processing (GKPO 2000),
Podlesice, Poland, May 15-19, 2000.
A limited number of copies of the Proceedings is still available
for sale ($10 per copy). Please send the above amount to the
conference bank account
to obtain your copy (while stocks last).
- Hliniak G., Strug B.:
Graph grammars and evolutionary methods in graphic design.
MGV vol. 9, no. 1/2, 2000, pp. 5-13.
A new approach to image generation, combining evolutionary
methods and graph grammars, is presented. Genetic algorithms usually
operate on binary strings; we propose instead to use composition graphs
as genotypes representing the structures of designed objects.
A graph grammar combined with genetic operators thus
constitutes a tool for creating images as phenotypes corresponding
to the generated graphs. A modified genetic algorithm which uses this graph
representation and the graph grammar is described.
Key words: graph, graph grammar, evolutionary methods.
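The graph-genotype idea in the abstract above can be illustrated with a toy genetic algorithm (an editorial sketch, not the authors' composition-graph grammar; the edge-count fitness function and the edge-flip mutation are stand-in assumptions):

```python
import random

def random_graph(n_nodes, rng):
    """Genotype: a set of undirected edges over n_nodes vertices."""
    return {frozenset((i, j))
            for i in range(n_nodes) for j in range(i + 1, n_nodes)
            if rng.random() < 0.5}

def fitness(graph, target_edges):
    """Toy objective (an assumption): prefer a given edge count."""
    return -abs(len(graph) - target_edges)

def mutate(graph, n_nodes, rng):
    """Graph-level genetic operator: flip one randomly chosen edge."""
    e = frozenset(rng.sample(range(n_nodes), 2))
    return (graph - {e}) if e in graph else (graph | {e})

def evolve(n_nodes=8, target_edges=10, pop=20, gens=60, seed=1):
    rng = random.Random(seed)
    population = [random_graph(n_nodes, rng) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: fitness(g, target_edges), reverse=True)
        parents = population[:pop // 2]          # truncation selection
        population = parents + [mutate(rng.choice(parents), n_nodes, rng)
                                for _ in range(pop - len(parents))]
    return max(population, key=lambda g: fitness(g, target_edges))

best = evolve()
```

In the paper a graph grammar would additionally constrain which mutated graphs are admissible designs; here every graph is accepted.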
- Kiciak P.:
On B-spline arithmetic and its applications.
MGV vol. 9, no. 1/2, 2000, pp. 15-23.
A number of applications of arithmetic operations with B-spline
arguments in geometric design are discussed. These
include degree elevation, constructing derivatives, normal vector
patches, swept surfaces, Coons surfaces and smoothly joined patches
in B-spline or NURBS representation.
Key words: B-spline, Coons, NURBS.
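One of the operations listed above, degree elevation, is easy to sketch for the single-span (Bezier) case; the blending formula b'_i = (i/(n+1)) b_{i-1} + (1 - i/(n+1)) b_i is the standard one, not taken from the paper:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve (list of 2D control points) at parameter t."""
    pts = [p[:] for p in ctrl]
    while len(pts) > 1:
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def degree_elevate(ctrl):
    """Raise the degree of a Bezier curve by one without changing its shape:
    b'_0 = b_0, b'_{n+1} = b_n, and interior points are convex blends."""
    n = len(ctrl) - 1
    out = [ctrl[0]]
    for i in range(1, n + 1):
        a = i / (n + 1)
        out.append([a * p + (1 - a) * q for p, q in zip(ctrl[i - 1], ctrl[i])])
    out.append(ctrl[-1])
    return out
```

The elevated curve has one more control point but traces exactly the same parametric curve, which can be checked by evaluating both at several parameters.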
- Lebiedz J.:
Discrete arcs and curves.
MGV vol. 9, no. 1/2, 2000, pp. 25-30.
The paper presents a new proposal for definitions
of the discrete arc and the discrete curve.
The proposed definitions are based on the neighborhood patterns.
Key words: raster graphics, discrete curves.
- Marques N., Capela P., Bernardo J.:
Computing practical solutions for rotational containment problems.
MGV vol. 9, no. 1/2, 2000, pp. 31-40.
A containment problem can be defined as the task of placing
a set of shapes into another shape without overlapping.
Most containment solvers try to reach a solution
by finding a local or global maximum. Although theoretically
correct, when applied to a practical
situation such as the footwear industry they fail to give results
in acceptable time. Iterative solvers can take advantage
of some shortcuts and still produce acceptable final results.
We present an iterative algorithm which uses Minkowski operators
to solve multiple-layer rotational containment problems.
We use some AI concepts and simplify them by making some assumptions
about the layout.
Key words: computational geometry,
containment problems, Minkowski operators.
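A minimal sketch of the Minkowski operator the abstract relies on, restricted to convex polygons (the pairwise-sum-plus-hull construction below is only valid for convex inputs; the paper's multiple-layer solver is not reproduced here):

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons: the hull of all pairwise
    vertex sums (correct only because P and Q are convex)."""
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])
```

In containment, the Minkowski sum of a piece with the reflected obstacle set yields the forbidden placements of the piece's reference point.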
- Mingo L.F., Martinez A., Gimenez V., Castellanos J.:
Parametric bivariate surfaces with neural networks.
MGV vol. 9, no. 1/2, 2000, pp. 41-46.
This paper presents the application of Enhanced Neural Networks
(ENN) to the field of image processing, more precisely
to surface approximation via the generalization property
of ENNs. This architecture can perform a polynomial
approximation of a given pattern set in such a way that if the net
has n hidden layers, then it computes the degree-(n+2)
polynomial approximation to its pattern set. Moreover,
the behaviour of this net can be modified simply by changing
the activation function f(x) of some neurons,
so that the net computes the approximation
to the pattern set using a basis of functions
f(x). In this way the net computes the non-linear combination
of basis elements needed to output the desired approximation.
ENNs are used to represent a surface approximation.
Some examples, concerning results obtained when learning surfaces, are discussed
in the paper. Results are good, since both the Mean Squared Error
and the computation time are very low.
Key words: neural nets, pattern recognition,
function approximation, 3D representation.
- Nikiel S.S., Kirby G.H.:
Iterated function systems for terrain synthesis.
MGV vol. 9, no. 1/2, 2000, pp. 47-55.
This paper discusses terrain synthesis using IFS interpolation
techniques by extending them from two- to three-dimensional cases.
The method is not intended to be started from a purely random set
of points, but rather from some real or meaningful heightfield.
The method is suited to near real-time computing, is deterministic,
context dependent and provides control over the nature of "roughness"
of rendered surfaces. Its speed and flexibility on standard desktop
computers open up the possibility of "terrain engineering".
Key words: fractals, surface modelling, IFS, terrain.
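The one-dimensional (profile) case of IFS interpolation can be sketched as follows; the map coefficients are the standard fractal-interpolation formulas, not the authors' three-dimensional extension, and the vertical scaling factors d[n] are free parameters controlling roughness:

```python
import random

def fif_maps(xs, ys, d):
    """Affine IFS maps w_n(x,y) = (a_n x + e_n, c_n x + d_n y + f_n) for a
    fractal interpolation function through the points (xs[i], ys[i]);
    d[n] are vertical scaling factors with |d[n]| < 1."""
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    maps = []
    for n in range(1, len(xs)):
        a = (xs[n] - xs[n-1]) / (xN - x0)
        e = (xN * xs[n-1] - x0 * xs[n]) / (xN - x0)
        c = (ys[n] - ys[n-1] - d[n-1] * (yN - y0)) / (xN - x0)
        f = (xN * ys[n-1] - x0 * ys[n]
             - d[n-1] * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, d[n-1], f))
    return maps

def chaos_game(maps, n_points=2000, seed=7):
    """Sample the IFS attractor (the interpolated terrain profile)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n_points + 100):      # skip transient iterations
        a, e, c, d, f = rng.choice(maps)
        x, y = a * x + e, c * x + d * y + f
        if i >= 100:
            pts.append((x, y))
    return pts
```

Each map sends the whole profile onto one interpolation interval, so the attractor passes through the given points; larger |d[n]| produces a rougher surface.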
- Cyras V., Lapin K.:
Various perspectives of automatic configuration of structured 2D drawings.
MGV vol. 9, no. 1/2, 2000, pp. 57-80.
The paper presents several approaches to the automatic
synthesis of structured 2D drawings. We view the subject from different
standpoints: document configuration, knowledge representation and
document management. Our case study is in the domain of configuration
of electroplating lines. As a result of the configuration process,
a set of technical drawings is produced. We present an approach
(and a software system), SyntheCAD, in which technical drawings are prepared
by generating layouts automatically rather than building them interactively.
We treat SyntheCAD as a combination of two tasks:
configuration (as in artificial intelligence) and document preparation.
The method combines basic concepts of
(1) the component-oriented approach to configuration, and
(2) generic coding, as introduced in document preparation.
Key words: document preparation, configuration,
document management, logical and layout structures of documents.
- Barouni A.E., Jaoua A., Zaguia N.:
An inside and outside drawing algorithm
of the multi-ring architecture.
MGV vol. 9, no. 1/2, 2000, pp. 81-91.
In this paper we propose efficient techniques for producing drawings
of the Multi-Ring Architecture (MRA) while maintaining
good resolution and a clear layout.
Visualization of the MRA on screen has proven
useful and helpful for network designers. In fact, they can easily
identify rings, perceive the interaction between rings, and then rapidly
spot possible problems. Given a ring cover of a telecommunication
network, we present two new techniques for drawing the ring cover using
force-directed placement. Our algorithm displays it in such a way
that rings are easily identifiable and the drawings are easy for
network designers to understand. Examples of drawings and results are shown
to demonstrate the effectiveness of the proposed algorithms.
Key words: ring cover, ring graph, ring-cycles,
ring-contact node graph.
- Doncescu A.:
Organized mesh design applied in the body simulation.
MGV vol. 9, no. 1/2, 2000, pp. 93-102.
The meshing of 3D objects whose surface is given as a cloud
of points is generally realized with triangles, using the Delaunay
algorithm. Another possibility exists via its dual,
the Voronoi diagram. An alternative to the Voronoi algorithm
is proposed here: the use of Voronoi-organized meshes, whose
vertices of valence 3 form mostly "hexagons", together with some "polygons"
composed of 5, 7, 8, ... edges to ensure the curvature. An example is
given by the fullerenes: symmetrical closed convex graphite shells,
consisting of 12 pentagons and various numbers of hexagons, for which
the Gaussian curvature is positive.
We present in this paper different subdivision and compression
algorithms applied to this type of surface.
Key words: wavelets, Voronoi, subdivision, compression.
- Dufrenois F.:
Ellipsoid Fourier Descriptor: hierarchical shape representation
and deformable model.
MGV vol. 9, no. 1/2, 2000, pp. 103-113.
This paper describes a new model-based segmentation and motion analysis
technique combining the Euler-Lagrange formalism and shape representation by
Fourier decomposition. First, we propose to extend the concept of the 2D
Fourier representation of closed curves to a new 3D hierarchical descriptor
for closed surfaces. To study shape deformation during motion, we
also establish the dynamic equations of motion of the Fourier parameters.
Finally, some results on 2D data tracking, 3D reconstruction and 3D data
fitting are presented.
Key words: Fourier parametrization, deformable model,
motion analysis, 3D reconstruction, data fitting.
- Grabska E., Palacz W., Szyngiera P.:
Hierarchical graphs in creative design.
MGV vol. 9, no. 1/2, 2000, pp. 115-122.
This paper is concerned with computational aspects of creative design.
We propose a new representation of artifact structures:
hierarchical graphs. This type of graph makes it possible to define
a gluing operation. The proposed operation is illustrated by an
example of designing gardens.
Key words: creative design, hierarchical graphs,
graph operations, graph interpretation.
- Liu Y., Rodrigues M.A.:
Essential representation and calibration
of rigid body transformations.
MGV vol. 9, no. 1/2, 2000, pp. 123-138.
We extended Chasles' screw motion concept to computer vision calibration
tasks by investigating geometric properties of image correspondence vectors.
In this paper, we analyse geometric properties of reflected
correspondence vectors synthesised into a single coordinate frame
leading to the development of two representation theorems of rigid
transformations. The theorems and their corollaries provide a novel method
to identify the original transformation and an insight into the relationships
between the representation of rigid transformation, rigid constraints,
and the transformation parameters.
The analysis provides a framework for the development
of novel calibration algorithms as demonstrated by two new
proposed algorithms. For a comparative study of performance,
we also implemented another well-known calibration procedure based
on the constrained least squares (CLS) method.
The experimental results have shown that the proposed algorithms
are more accurate than the CLS algorithm when the image data are
corrupted by both noise and outliers.
Key words: rigid constraints, reflected correspondence vector,
rigid transformation, calibration, range image.
- Lumini A.:
Coupling iterate function systems and Petri nets
for image representation and generation.
MGV vol. 9, no. 1/2, 2000, pp. 139-147.
Image generation has been proposed for many different tasks
in the literature, from physics events visualization to the purpose
of "art for art's sake". In this paper a new approach to computer
image generation is presented: the method creates new images by randomizing
the decompression process, starting from a compressed representation
of an image by Iterate Function Systems. Petri Nets are employed both
for modeling the decompression process and for inserting a randomization
component in it. A second method proposed in this work directly translates
the evolution of a Petri Net into a graphic output. Experimental results
are given, showing different classes of images generated by the two methods.
Key words: image generation, Petri nets,
fractal compression, iterate function systems.
- Skala V., Brusi A.:
Two methods for iso-surface extraction from volumetric data
and their comparison.
MGV vol. 9, no. 1/2, 2000, pp. 149-166.
There are various methods for extracting iso-surfaces from volumetric
data; marching cubes, marching tetrahedra and ray-tracing methods are
the most widely used. There are many specific techniques for increasing
the speed of computation and decreasing memory requirements. Although the
precision of iso-surface extraction is also very important, it is usually
not discussed. A comparison of the selected methods was made with respect
to different aspects: iso-surface extraction time, number of triangles
generated, and estimates of radius, area and volume errors based on
the approximation of a sphere. Surprisingly, experiments proved that
there is no direct relation between the precision of an extracted iso-surface
and human perception of it.
Key words: volume data visualization, marching cubes,
marching tetrahedra, isosurface generation, MRI and CT images,
generation precision, iso-surface extraction.
- Kasprzak W.:
Adaptive methods of moving car detection in monocular image sequences.
MGV vol. 9, no. 1/2, 2000, pp. 167-185.
Computer vision applications for traffic scene analysis
and autonomous navigation (driver support) require highly sophisticated
sensors and computation methods - they constitute a real challenge
for image analysis systems. Common to both applications is the moving
object detection/tracking task. In this paper we study this task
on four different data abstraction levels: image segmentation, 2-D
object tracking, model-based 3-D object tracking and many-object
traffic scene description. Two meanings of the term "adaptive"
are considered: learning algorithms or connectionist systems and recursive
estimation for dynamic systems. Generally, the first approach may be applied
to low-level and segmentation-level analysis of finite image sequences,
whereas the second approach is suitable for 2-D and 3-D object tracking.
Key words: model-based analysis, object tracking,
obstacle detection, recursive estimation, road following,
traffic scene analysis, visual motion estimation.
- Chun B.T., Bae Y., Kim T.-Y.:
Recovering original images for video caption areas
using camera motion and video information.
MGV vol. 9, no. 1/2, 2000, pp. 187-199.
To transform images with a caption into images without one,
or to replace the current caption with a different one, we
sometimes need to recover the original image in the caption areas.
Manual processing is possible when the amount of recovery to
be done is small. However, as the amount of video to be
processed increases, manual processing becomes impossible or takes
too much time. Conventional research on image restoration
has focused on restoring blurred images to sharp images
using frequency filtering, or on video coding for transferring images.
This paper proposes a method for recovering original images
using camera motion and video information such as caption
regions and scene changes. The method decides the direction
of recovery using the caption information (the start and end
frames of the caption) and scene change information. According
to the direction of recovery, a rough estimate of the direction
and position of the original image is obtained using
the motion vector calculated from camera motion. Because
camera motion does not reflect local object motion, some
distortion can appear in the recovered image. To solve this
problem, a Block Matching Algorithm (BMA) is applied
in units of caption character components to the obtained recovery area.
Experimental results show that images with
little motion are recovered well. We see that
images with motion against a complex background are also recovered.
Key words: recovering original image, caption,
camera motion, change detection.
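The block matching step mentioned above can be sketched generically; this exhaustive sum-of-absolute-differences search is a textbook version, not the paper's caption-component variant:

```python
def sad(block, frame, top, left):
    """Sum of absolute differences between a block and a frame patch."""
    return sum(abs(block[r][c] - frame[top + r][left + c])
               for r in range(len(block)) for c in range(len(block[0])))

def block_match(block, frame):
    """Exhaustive search for the frame patch best matching `block`;
    returns the (row, col) of the best top-left position."""
    h, w = len(block), len(block[0])
    best, best_pos = None, (0, 0)
    for top in range(len(frame) - h + 1):
        for left in range(len(frame[0]) - w + 1):
            s = sad(block, frame, top, left)
            if best is None or s < best:
                best, best_pos = s, (top, left)
    return best_pos
```

In practice the search is restricted to a window around the camera-motion estimate rather than scanning the whole frame.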
- El Haziti M., Aboutajdine D., Cherifi H.:
Lossless optimization of fractal image coding.
MGV vol. 9, no. 1/2, 2000, pp. 201-206.
[Nominated for the Best Paper Award.]
In fractal image compression the large amount of computations needed
for the encoding stage is a major obstacle that needs to be overcome.
Several methods have been developed for reducing the computational
complexity. They can be divided into lossy and lossless
approaches. In this paper we introduce a lossless approach
where the search space is reduced to the admissible solutions
space. For still images, experiments show that this method
greatly improves the pairing search as compared to exhaustive
search. It can easily be integrated into most lossy acceleration schemes.
Key words: fractal compression, encoding complexity,
encoding time, block rejection.
- Fusiello A., Murino V.:
Calibration of an optical-acoustic sensor.
MGV vol. 9, no. 1/2, 2000, pp. 207-214.
In this paper an application is described where image registration to
a model is used in conjunction with registration of acoustic (3-D) data
to the same model to perform calibration of an optical/acoustic
rig. This makes it possible to register the video image with the depth map
acquired by the acoustic device, thereby obtaining a single
multisensorial image. Experimental results on both synthetic and real
images are reported.
Key words: multisensor model of image formation,
3-D imaging calibration, acoustical images.
- Iwanowski M.:
Fault detection using mathematical morphology.
MGV vol. 9, no. 1/2, 2000, pp. 215-220.
This paper describes an approach to the problem of fault detection.
Fault detection is considered here as a segmentation task:
one needs to segment the initial image in order to find areas
of a particular kind - faults. The approach presented combines
mathematical morphology and clustering. The morphological treatment
is based on morphological filtering by reconstruction, but it also includes
some other tools such as the top-hat transformation and contrast improvement.
Clustering is performed by the k-means algorithm. Three methods
of fault detection are presented here, each intended
for segmenting a different kind of image.
Key words: fault detection, clustering, k-means,
mathematical morphology, reconstruction, image segmentation.
- Kulpa Z.:
Characterization of convex and pointisable interval relations
by diagrammatic methods.
MGV vol. 9, no. 1/2, 2000, pp. 221-231.
In the paper, diagrammatic methods are used to demonstrate the equivalence of
different characterizations of convex and pointisable
interval relations. The diagrammatic tools used are introduced:
the MR-diagram for interval space, the W-diagram
for representing arrangement interval relations,
and the conjunction and lattice diagrams.
Two theorems on characterizations of convex and pointisable relation
classes are given. The proof of the first theorem has
been published by Kulpa elsewhere; the second theorem on pointisable
relations is proven in this paper with the help of the diagrammatic
tools.
Key words: diagrams, diagrammatic representation,
diagrammatic reasoning, intervals, interval relations, interval algebra.
- Szwoch M., Meus G., Tutkaj P.:
ScoreExplorer: a musical score recognition system.
MGV vol. 9, no. 1/2, 2000, pp. 233-241.
In this paper ScoreExplorer, an Optical Music Recognition (OMR)
system, is described. The system performs the complete recognition process,
starting from a scanned score image, through the stages of
low-level pre-processing, segmentation and pattern recognition. A comparison
of the system with commercial systems is given. Finally, some ideas are
proposed to improve the quality and reliability of the system.
Key words: optical music recognition, document analysis,
pattern recognition, music notation.
- Yadohisa H., Yamamoto Y.:
Visualization of cubic (one-way, three-mode) dissimilarity data
by using multidimensional scaling.
MGV vol. 9, no. 1/2, 2000, pp. 243-249.
A visualization method for cubic (one-way, three-mode) dissimilarity
data is proposed. By using the framework of multidimensional scaling (MDS),
the data are represented as points in a Euclidean space.
Two models to explain the data are proposed, and estimates are made
by the alternating least squares method.
Key words: alternating least squares method,
data visualization, (dis)similarity, MDS.
- Yang L., Sahli H.:
Integration for 3D structure/motion estimation based
on line drawings.
MGV vol. 9, no. 1/2, 2000, pp. 251-261.
A general framework for 3D structure/motion analysis is proposed, based on
the integration of different visual modules. Line drawings are analyzed
from a motion point of view, and provide an effective means
for 3D reconstruction. 2D-motion-based and feature-correspondence-based
approaches are efficiently integrated with the line drawing
interpretations under our framework. The normal flow of line segments
is employed here to characterize 2D motion, and the finite difference
method is used for its estimation in a simple and practical way.
To implement the proposed integration model, an incremental scheme
is developed to estimate 3D structure/motion from a sequence of images.
Experiments on scenes containing polyhedral objects demonstrate
the feasibility of the proposed scheme, and show that the integration
of different visual modules gives better 3D structure/motion estimations.
Key words: 2D motion, feature correspondences,
line drawing interpretations, integration, structure from motion.
- Boivin S., Gagalowicz A., Horry Y.:
Morphological modeling and deformation of 3D objects.
MGV vol. 9, no. 1/2, 2000, pp. 263-280.
This paper describes a new set of
techniques based upon laws of object composition for the modeling and
animation of deformable objects. They present some similarities with
logical or morphological operations. This method is unique because
the volume of the merged object is held constant with respect to its
constituents. Composition from two objects is allowed for general
shapes which may have very different topologies. These techniques are
applied to shapes which are approximated by a polyhedral
description. When integrated into a modeler, they can be used as
modeling tools, animation tools, deformation tools or even morphing
tools depending on how they are handled. One of the advantages of
these tools is that the user may easily deduce the procedure to follow
in order to obtain a given result.
Key words: object modeling, composition, animation.
- Kebaili A., Boumghar F.:
Optimal block subdivision in the "Dividing-cubes" algorithm.
Application to 3D medical imagery.
MGV vol. 9, no. 1/2, 2000, pp. 281-288.
In this article we present an optimisation of the execution time
of the "dividing-cubes" algorithm, whose aim is to extract
3D surfaces from volumetric images. These images are generally
very voluminous and represent a pile-up of many iso-surfaces,
so that extracting the desired iso-surface requires
considerable processing time. The first step reduces
the traversal time of the 3D image by subdividing it into blocks of voxels,
so that far fewer voxels need to be visited.
In a second step, we propose an optimization of the
block-subdivision parameters that yields a block size as suitable
as possible and minimizes the execution time.
Our propositions and formulations are validated by means of
experiments carried out on processed images and medical images.
Key words: image processing, surface reconstruction,
3D medical imagery, "dividing-cubes", subdividing blocks parameters.
- Kozankiewicz P.:
Variance and variation reduction technique
in the quasi-Monte Carlo global illumination algorithm.
MGV vol. 9, no. 1/2, 2000, pp. 289-296.
The accuracy of the Monte Carlo and quasi-Monte Carlo algorithms depends
on the variance and on the variation, in the sense of Hardy-Krause, of the integrand.
This paper contains an error estimation of the quasi-Monte Carlo algorithm
for some discontinuous functions that appear in radiosity and global
illumination computation. Also an algorithm that transforms the integral
to reduce variance and variation is presented. It takes advantage
of the BRDF properties, and it can be used with diffuse and non-diffuse surfaces.
Key words: radiosity, global illumination,
quasi-Monte Carlo quadrature, variance, variation, reduction,
discontinuity, error estimation.
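A minimal quasi-Monte Carlo quadrature sketch using the van der Corput low-discrepancy sequence (a one-dimensional illustration of the quadrature idea only, unrelated to the paper's BRDF-based transformation):

```python
def van_der_corput(i, base=2):
    """i-th element of the van der Corput sequence: the base-`base`
    digits of i mirrored about the radix point."""
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

def qmc_integrate(f, n=1024):
    """Quasi-Monte Carlo estimate of the integral of f over [0, 1]:
    average f over n low-discrepancy sample points."""
    return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n
```

Unlike pseudo-random Monte Carlo sampling, the error here is governed by the discrepancy of the point set and the variation of the integrand (the Koksma-Hlawka bound), which is why reducing variation pays off.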
- Skala V., Hui B.D.:
Two new algorithms for line clipping in E2
and their comparison.
MGV vol. 9, no. 1/2, 2000, pp. 297-306.
[Nominated for the Best Paper Award.]
New efficient algorithms for clipping lines against a given rectangle
in E2 are presented. The first algorithm is based
on evaluation of the line direction; the second is based on a new coding
technique for the clipping rectangle's vertices, which allows all cases
to be solved more effectively. A comparison of the proposed algorithms with
the Liang-Barsky algorithm and some experimental results are also presented.
Key words: line clipping, computer graphics,
algorithm complexity, geometric algorithms, algorithm complexity analysis.
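The Liang-Barsky algorithm used above as the comparison baseline can be sketched as follows (an illustrative reimplementation, not the authors' code):

```python
def liang_barsky(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    """Clip segment (x0,y0)-(x1,y1) to an axis-aligned rectangle.
    Returns the clipped segment, or None if it lies entirely outside."""
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0                     # parametric clip window
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:
            if q < 0:
                return None               # parallel and outside this edge
        else:
            t = q / p
            if p < 0:                     # entering intersection
                if t > t1:
                    return None
                t0 = max(t0, t)
            else:                         # leaving intersection
                if t < t0:
                    return None
                t1 = min(t1, t)
    return (x0 + t0 * dx, y0 + t0 * dy, x0 + t1 * dx, y0 + t1 * dy)
```

Each rectangle edge contributes one parametric constraint; a segment survives only if the entering parameter t0 never exceeds the leaving parameter t1.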
- Westenberg M.A.:
X-ray volume rendering through two-stage splatting.
MGV vol. 9, no. 1/2, 2000, pp. 307-314.
[The Best Paper Award recipient.]
We consider the splatting method for X-ray rendering of volume data sets,
and introduce two-stage splatting. This variant improves rendering speed
by separating the splatting process into two stages: coefficient accumulation
and convolution. We incorporate this variant into wavelet splatting,
and obtain speedups in rendering time up to a factor of three
for Haar and B-spline wavelets.
Key words: volume rendering, splatting, wavelets.
- Wojdala A., Gruszewski M., Olech R.:
Real-time shadow casting in virtual studio.
MGV vol. 9, no. 1/2, 2000, pp. 315-329.
[Nominated for the Best Paper Award.]
Combining real and computer-generated imagery, Virtual Studio imposes
the requirement for these two worlds to interact properly
in the composite image. In particular, it is expected that shadows
will be correctly cast between the virtual and real environments.
This paper describes real-time algorithms that allow actors
and real objects to cast shadows on virtual elements of the scene
and vice versa.
Key words: computer graphics, virtual studio.
- Zaremba M.B., Varma H., St-Laurent L.:
Extension of hyperspatial helical codes for visualization
and correlative analysis of attribute-referenced multi-dimensional data.
MGV vol. 9, no. 1/2, 2000, pp. 331-340.
In order to address the growing data management and analysis needs
of spatial applications, more research is required in the field
of spatial data indexing to support efficient data access and visualization,
as well as in the field of new processing strategies that will include
topological relationships between the data. This paper introduces hyperspatial
helical codes (HHCode), presents the visualization capabilities
offered by the codes, and describes the software that has been
developed to support viewing, editing and updating of spatial data
sets in 3D. Integration of the HHCode with a
pre-processing scheme is then proposed, based on self-organizing
maps. This enhancement allows the user to discover and visualize
topological relationships hidden in the data, and provides a tool
for indexing generic multi-dimensional attribute-referenced data.
Key words: hyperspatial helical code, self-organizing maps,
multi-dimensional indexing, spatial data structures.
- Hall D., Crowley J.L., de Verdiere V.C.:
View invariant object recognition using coloured receptive fields.
MGV vol. 9, no. 1/2, 2000, pp. 341-352.
This paper describes a technique for the recognition and tracking of
everyday objects. The goal is to build a system in which ordinary desktop
objects serve as physical icons in a vision based system for man-machine
interaction. In such a system, the manipulation of objects replaces
traditional input devices.
This method is based on sampling a local appearance function
at discrete viewpoints by projecting it onto a vector of receptive fields
which have been normalised to local scale and orientation.
This paper reports on the experimental validation of the approach and on
its extension to the use of receptive fields based on colour.
The experimental results indicate that the
technique does indeed provide a method for building a fast and robust
recognition technique. Furthermore, the extension to coloured receptive
fields provides a greater degree of local discrimination.
The coloured receptive field approach is applied
to the recognition of objects under changing viewpoints. The appearance
of objects depends strongly on the viewpoint and the lighting. In the
experiments we show that the developed technique based on coloured
receptive fields allows the recognition of objects independently of the
camera viewpoint. This is obtained by training with images from
viewpoints that sample the view sphere.
This experiment shows that the approach is suitable for the recognition
of general objects as physical icons in an augmented reality.
Key words: object recognition, view point changes,
appearance-based vision, phicons.
- Abbas J., Domanski M.:
A family of efficient nonlinear filters for video restoration.
MGV vol. 9, no. 1/2, 2000, pp. 353-361.
The paper deals with denoising of color video sequences using median-based
filters. The filters employ a simple structure with nonlinear predictor
and prediction error processed by a static (memoryless) nonlinear element.
The basic idea is to predict a pixel value by using a nonlinear filter,
and then to compare this value to that in the input (corrupted) image.
Usually these two values are different and a decision has to be made about
the pixel value at the filter output. This value can be considered as a sum
of the prediction and the prediction error processed by the nonlinear element.
For example, large prediction errors are set to zero, because
they can be classified as caused by impulsive noise samples.
Soft decisions on the classification of prediction errors lead to good results
for test images. The filters are very suitable for impulsive noise removal
from color images. Preservation of textures and fine details is an advantage
of the proposed filters. The experimental data prove that these filters
outperform classic nonlinear median-based filters such as the vector median,
recursive median and weighted median. In the paper, it is proven that
application of prediction error processing results in improved efficiency
of video restoration.
Key words: image and video restoration, nonlinear filters,
two- and three-dimensional filters.
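The prediction/decision structure described above can be sketched with a hard-decision variant (the paper uses soft decisions; the 3x3 median predictor and the fixed threshold below are illustrative assumptions):

```python
def median3x3(img, r, c):
    """Median of the 3x3 neighbourhood: the nonlinear 'predictor'."""
    vals = sorted(img[r + dr][c + dc]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return vals[4]

def impulse_filter(img, threshold=50):
    """Decision-based median filtering: keep the pixel when the
    prediction error is small, replace it by the prediction when the
    error is large (borders are copied unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            pred = median3x3(img, r, c)
            if abs(img[r][c] - pred) > threshold:
                out[r][c] = pred      # likely impulse: use the prediction
    return out
```

Because clean pixels are passed through untouched, textures and fine details survive better than with a plain median filter that replaces every pixel.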
- Chernov V.M.:
Discrete symplectic transforms and their fast algorithms.
MGV vol. 9, no. 1/2, 2000, pp. 363-368.
In this article a new class of discrete transforms, associated
with an anti-symmetric bilinear form, is introduced.
Invariance relations similar to the Parseval equality, and fast
algorithms for computing discrete symplectic transforms, are also considered.
Key words: discrete transforms, symplectic geometry.
- Makris P., Vincent N.:
A new measurement of compressed image quality.
MGV vol. 9, no. 1/2, 2000, pp. 369-378.
Key words: Zipf law, compression, quality measurement.
- Przelaskowski A.:
Modelling and quantization of wavelet domain data
in compression scheme.
MGV vol. 9, no. 1/2, 2000, pp. 379-388.
Methods of wavelet domain data modelling and quantization
that improve compression efficiency
are presented. Natural images are decomposed into several subbands
in a multiresolution scheme to remove the correlation of the original data.
Biorthogonal filter banks are used to realise the wavelet transform.
We selected filters that are effective in data compression, meaning that
they give precise approximations of images with few wavelet
coefficients. A conditional probability model of adjacent (in the
scale-spatial sense) magnitudes was applied as a better
approximation of wavelet coefficient dependencies than marginal
data distributions. Additionally, the significance of a causal local
area was used to model the probability of significance for successive
coefficients. These two models were applied in a quantization scheme
that improves uniform threshold quantization by adaptive
modulation of the ratio of the increased dead-zone to the quantization step
size. As a result, a more effective low-cost quantization scheme was
constructed. It allows a significant increase in the compression
efficiency of images. The experimental rate-distortion curve shows the
same distortion at bit rates decreased by up to 15% in
comparison to standard uniform threshold quantization.
Key words: wavelet-based encoding, wavelet data statistics.
- Serir A.:
Multiplicative waveforms decomposition.
MGV vol. 9, no. 1/2, 2000, pp. 389-396.
This paper introduces a novel nonlinear low-level representation
of an image with signal-dependent noise. For images with multiplicative
noise, we introduce an algorithm called multiplicative matching pursuit
decomposition (MMPD), which decomposes the signal containing
the intrinsic variation into a nonlinear expansion of waveforms
selected from a redundant dictionary of functions to best match the
local structures of the signal.
The convergence of this new multiplicative decomposition has been
proved and tested in practice. An application to speckle reduction
in SAR images is described.
Key words: multiplicative decomposition, matching pursuit,
structural analysis, speckle reduction.
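Ordinary (additive) matching pursuit, of which the MMPD above is a multiplicative variant, can be sketched as a greedy loop (a minimal illustration; the dictionary and iteration count are assumptions):

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=2):
    """Greedy matching pursuit: at each step pick the unit-norm atom
    most correlated with the residual and subtract its contribution."""
    residual = np.asarray(signal, dtype=float).copy()
    expansion = []
    for _ in range(n_iter):
        corr = dictionary @ residual          # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        expansion.append((k, float(corr[k])))
        residual = residual - corr[k] * dictionary[k]
    return expansion, residual
```

Convergence follows because each step removes the residual's projection onto the selected atom.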
- Turner M.J.:
Optimised black-and-white contour coding.
MGV vol. 9, no. 1/2, 2000, pp. 397-402.
A hierarchical two-dimensional structural image segmentation algorithm based
on a contour tree description is presented. This format is compared with more
traditional facsimile coding standards (T.4, T.6 and T.82) (as well as other
common compression standards) on the standard ITU test set of facsimile
images. A detailed study of the coding complexity and choices required to
design a general contour coding strategy is carried out. A final choice of
parameters is presented and compared against previous work on grey-scale images.
Key words: compression, contour coding,
chain codes, facsimile.
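Contour descriptions of the kind compared here are commonly built on chain codes; a minimal 8-direction Freeman chain coder (an illustrative baseline, not the paper's contour-tree format) looks like:

```python
# 8-direction Freeman codes: 0 = east, numbered counter-clockwise
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a contour given as successive 8-connected pixel coordinates."""
    return [DIRECTIONS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
```

The resulting symbol stream is what an entropy coder would then compress.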
- Klonowski W.:
Signal and image analysis using chaos theory
and fractal geometry.
MGV vol. 9, no. 1/2, 2000, pp. 403-431.
Fractal geometry has proven to be a useful tool in quantifying the structure
of a wide range of idealized and naturally occurring objects, from pure
mathematics, through physics and chemistry, to biology and medicine.
In the past few years fractal analysis techniques have gained increasing
attention in signal and image processing, especially in medical sciences,
e.g. in pathology, neuropsychiatry and cardiology. This article intends
to describe fractal techniques and their applications. We concentrate
on applications of chaos theory and fractal geometry in biosignal
and biomedical image analysis.
Key words: fractals, nonlinear dynamics, chaos,
deterministic, fractal dimension, signal analysis, pattern recognition,
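One of the basic fractal techniques the survey covers is box-counting estimation of fractal dimension; a minimal sketch for a binary image (the box sizes and the log-log fit are illustrative choices, not the article's specific method):

```python
import numpy as np

def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied boxes at several scales and fit a line in log-log space."""
    n = mask.shape[0]
    counts = []
    for s in box_sizes:
        occupied = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if mask[i:i + s, j:j + s].any():
                    occupied += 1
        counts.append(occupied)
    # dimension = -slope of log(count) versus log(box size)
    slope = np.polyfit(np.log(box_sizes), np.log(counts), 1)[0]
    return -slope
```

A filled square yields dimension 2, a straight line dimension 1; fractal patterns fall in between.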
- Berndt-Schreiber M., Bieganowski L.:
Fractal analysis of serial fundus images for retinal
MGV vol. 9, no. 1/2, 2000, pp. 433-438.
An attempt to apply fractal analysis in the interpretation
of fundus images of retinal vessel patterns in the human eye
is presented. An implementation of the algorithm estimating
the fractal dimension of surfaces (according to the Minkowski-Bouligand
definition) has been suggested within the framework of professional
image processing software as an effective tool in the search for a classifying
feature supporting retinal disease diagnosis. Preliminary
test results demonstrating the possibilities of the proposed tool
and further developments of the approach are discussed.
Key words: fractal analysis, fractal dimension,
retinal diseases, glaucoma.
- Jaworska T.:
A new approach to stereo image matching based on multiresolution analysis.
MGV vol. 9, no. 1/2, 2000, pp. 439-446.
Automatic stereo matching is an important problem and many algorithms have
been developed already. The paper presents a new stereo matching algorithm.
Correspondence and disparity established between points of two images
is the result of a stereo matching. The proposed solution is based
on wavelet transform. A two-dimensional digital image is a signal which
is analyzed in wavelet space. Each image of a stereopair is decomposed
using multiresolution analysis for many levels from fine-to-coarse.
Each image is also decomposed into an approximation and details (vertical,
horizontal and diagonal) which are computed using a
recursive algorithm of Mallat. In the proposed approach a disparity is
found in the wavelet transform space. The minimization problem which
appears in disparity determination is solved using a Hopfield neural
network. The neural network is used for finding the
disparity separately for approximation and details in the horizontal
direction and separately for approximation and details in the vertical
direction. The approach is based on constructing vectors containing
coefficients from each level. The number of vectors is equal
to the numbers of coefficients at the finest level.
Vectors are constructed for coefficients of approximation and for sum
of coefficients of approximation and details respectively.
A norm of the difference of vectors is set as neurons bias.
Key words: stereo matching, point-to-point correspondence,
wavelet transform, multiresolution analysis, Hopfield neural network.
- Lis W., Kasprzak W.:
Semi-automatic detection of cartographic objects
in digital images.
MGV vol. 9, no. 1/2, 2000, pp. 447-452.
A computer system for semi-automatic detection of specific 2-D objects
in digital images of cartographic maps is presented. The system design
combines image analysis methods with computer graphics and data base
technologies. The purpose of this application system is to speed up
the work of the human specialist, to increase their efficiency, and to transform
cartographic maps into a symbolic, vector-based form, i.e. to represent
them in a general-purpose data base. The system consists of three
main modules: image analysis, graphic editor and data base administration.
Up to now the object types being detected are: roads, cities, textured
areas and some characteristic signs.
Key words: cartography, data bases, digital maps,
2-D object detection.
- Nieniewski M., Serneels R.:
Extraction of the shape of small defects on the surface
of ferrite cores.
MGV vol. 9, no. 1/2, 2000, pp. 453-462.
The paper presents a morphological method for detection and extraction
of the shape of defects on the surface of industrial type objects,
such as ferrite cores. The method uses a morphological defect detector
and a morphological pyramid for detecting the positions of defects.
The output from the pyramid gives the positions and extents of defects
together with a very coarse approximation of the shape of defects.
In order to extract the precise shape, two morphological reconstructions
are subsequently used. The first of them is a gray level reconstruction
extracting the shape of all brighter spots in the image. The other
reconstruction is a binary one. It extracts the shape of those brighter
spots indicated by the morphological pyramid, and leaves out all the others.
The described morphological operations can be carried out in real time.
The method correctly extracts the shape of relatively small defects.
Defects of this kind are very typical for ferrite cores, and their
presence justifies the rejection of the core from the production line.
Extraction of the shape of defects covering a significant portion
of the core may be obtained by the use of the method presented
in the current paper together with the watershed segmentation. However,
in the case of a large defect, the core can also be rejected directly,
on the basis of the output from the pyramid. The paper presents several
examples of the extraction of the shape of defects, which confirm
the validity of the proposed method.
Key words: morphological defect detection,
defect extraction, morphological reconstruction.
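The binary reconstruction step can be sketched as iterated dilation of the marker constrained by the mask (a minimal 4-connected version; the system described above also uses a gray-level reconstruction):

```python
import numpy as np

def binary_reconstruction(marker, mask):
    """Morphological reconstruction by dilation: grow the marker inside
    the mask (4-connectivity) until the result stops changing, which keeps
    exactly the mask components touched by the marker."""
    current = marker & mask
    while True:
        dilated = current.copy()
        dilated[1:, :] |= current[:-1, :]     # shift down
        dilated[:-1, :] |= current[1:, :]     # shift up
        dilated[:, 1:] |= current[:, :-1]     # shift right
        dilated[:, :-1] |= current[:, 1:]     # shift left
        nxt = dilated & mask
        if np.array_equal(nxt, current):
            return current
        current = nxt
```

Components of the mask not touched by the marker are left out, which is exactly how the spots indicated by the pyramid are selected.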
- Orkisz M., Hernandez-Hoyos M., Douek Ph., Magnin I.:
Advances of blood vessel morphology analysis in 3D magnetic resonance angiography.
MGV vol. 9, no. 1/2, 2000, pp. 463-471.
We deal with image processing applied to 3D analysis of vascular morphology
in magnetic resonance angiography (MRA) images. It is, above all,
a state-of-the-art survey. Both filtering and segmentation techniques
are discussed. We briefly describe our most recent contribution:
an anisotropic non-linear filter which improves visualization
of small blood vessels. Enhancement of small vessels is obtained
by combining a directional L-filter applied according to the locally
estimated orientation of the image content and a 2D Laplacian orthogonal
to this orientation.
Key words: 3D image processing, medical imaging,
magnetic resonance angiography, image enhancement, image segmentation.
- Palenichka R.M., Belikova T.P., Ivasenko I.B.:
Robust extraction of diagnostic features of lesions
from medical radiographic images.
MGV vol. 9, no. 1/2, 2000, pp. 473-486.
for the Best Paper Award.]
In this paper, a robust structural approach to detection, object segmentation
and calculation of object features in medical images of different modalities
is proposed. The goal of the presented approach is the detection
and feature-based objective description of objects of interest
in medical images for diagnosis of lesions in a natural way and
in accordance with the diagnostic features physicians use in clinical
practice. A set of local structural features was divided into two classes:
properties of planar object shape and intensity distribution properties.
The methods of robust regression have been applied for
robust estimation of some object properties. Experimental results
of the extraction of diagnostic properties of lesions on lung images
confirmed the advantage of the proposed method over the conventional
approach of histogram-based segmentation.
Key words: image processing, robust regression,
- Tarnawski W., Merta A.:
Colour image segmentation based on fuzzy rules derived
from entropy measurement.
MGV vol. 9, no. 1/2, 2000, pp. 487-495.
This paper introduces a new colour image segmentation method based
on fuzzy rules derived from measures of fuzziness (fuzzy entropy)
of an input image.
The proposed supervised segmentation process proceeds in
two steps: first, the user selects a region including pixels
"more-or-less" connected with the image objects; in the second
step, based on generated and tuned fuzzy membership
functions, the system automatically generates fuzzy rules and
uses them to find, in the whole image, all pixels
similar to the pixels of the selected region. By
combining the statistical approach based on entropy
measurement with the uncertain or ambiguous data selected by the
user, we obtained very high repeatability of results,
independently of the degree of precision in defining the
region of features. Results of applying the
method to the analysis of medical images are presented.
Key words: colour image segmentation,
fuzzy sets and rules, maximum entropy principle.
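The fuzziness measure behind such rules is commonly the De Luca-Termini fuzzy entropy, which is maximal at membership 0.5; a minimal sketch (assuming this particular entropy, which the abstract does not specify):

```python
import math

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzzy entropy averaged over pixels: 0 for crisp
    memberships (0 or 1), maximal (1 bit) at membership 0.5."""
    total = 0.0
    for mu in memberships:
        for p in (mu, 1.0 - mu):
            if p > 0.0:
                total -= p * math.log2(p)
    return total / len(memberships)
```

High entropy flags ambiguous pixels, exactly the "more-or-less" connected region the user is asked to select.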
- Stapor K.:
Geographic map image interpretation - survey and problems.
MGV vol. 9, no. 1/2, 2000, pp. 497-518.
This paper is a brief survey of the most common large scale (1:500-1:1000)
geographic map processing techniques in the traditional three levels
of document image analysis: pixel-level processing, feature-level analysis
and graphics/text recognition and interpretation, followed by a description
of some of the most relevant systems developed for map interpretation.
The systems (methods) are classified depending on the type of map that
they can interpret.
Key words: document image analysis, graphics recognition,
map interpretation, automatic vectorization.
- Arckhipov A., Degtyarev S., Titov V.:
The system for microcircuit analysis and identification.
MGV vol. 9, no. 1/2, 2000, pp. 519-524.
The recognition of character information is an important practical
problem. One instance is electronic
microcircuit identification by optical recognition of the black-and-white
image of the character mark on the circuit body. The paper describes a system
for microcircuit analysis and identification based on a device
for inputting the video image of the microcircuit into a computer and on
algorithms for recognizing the character mark on the body.
The structure of the system for microcircuit analysis and identification
and a new method of line segmentation are described.
Key words: character recognition, identification system,
video-input device, electronic microcircuit, camera.
- Rodrigues M A., Liu Y., Wei Q.:
Scatter matrix based iterative structure and pose
estimation from a single image.
MGV vol. 9, no. 1/2, 2000, pp. 525-538.
It has been demonstrated that the constraint least squares
(CLS) method yields accurate and robust motion parameter
estimation from range images assuming that the image data are corrupted
by noise only without outliers. Also, it has been shown
that scatter matrix based correspondenceless motion
estimation from range images is mathematically well defined.
In this paper, we propose a new accurate and robust method
for pose and structure estimation from perspective images that draws
on the good features of scatter matrix and constraint least
squares methods. For reasons of practicality and
efficiency, the method first iteratively estimates
the depths of some control points based on the constraints
induced from the scatter matrix based correspondenceless
motion estimation algorithm and then provides a closed
form estimation of the depths of the other points based
on rigid constraints. Finally, the method estimates pose
parameters by the constraint least squares. For a
comparative study of performance, we also implemented
a linear algorithm for pose estimation and a motion estimation
algorithm based on epipolar geometry. Experimental results based
on both synthetic data and real images show the relative superior
performance of the proposed algorithm.
Key words: pose and structure estimation,
constraint least squares, scatter matrix, correspondenceless motion estimation,
perspective image, range image.
- Won Y., Park Y.:
Property of grayscale hit-or-miss transform
and its application.
MGV vol. 9, no. 1/2, 2000, pp. 539-547.
Even a slight change in the image acquisition environment makes recognition
of objects or patterns difficult.
This paper proposes a mathematical morphology operation for feature
extraction insensitive to intensity variation. A gray-scale hit-or-miss
transform is introduced and its property which is independent of intensity
change is verified. The hit-or-miss transform operator is applied to
a shared-weight neural network for feature extraction process.
For real world application, the neural network is applied to recognition of
numbers in vehicle license plate. Experimental results demonstrate that
the gray-scale hit-or-miss transform can reduce effects caused
by lighting conditions.
Key words: intensity variation, mathematical morphology,
neural network, vehicle license plate recognition.
- Hoshino J.:
Merging moving objects onto a panoramic background image.
MGV vol. 9, no. 3, 2000, pp. 551-560.
In this paper, we propose a new
method for merging moving objects onto a panoramic background
image. First, we decompose the local intensity variation
between the input image and the panoramic background image
into three components (i.e., translation in x and y
direction, and the intensity offset caused by the moving
objects). Then, we estimate the accurate planar projective
parameters by using the weighted least square method to
eliminate intensity offset caused by the moving objects.
The experimental results show that the proposed method enables effective
merging of moving objects onto the panoramic background image.
Key words: panoramic image mosaicing, object detection,
weighted least square estimation, video surveillance, virtual reality.
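The weighted least-squares step can be illustrated on a toy fit: weights near zero suppress observations covered by moving objects (hypothetical data and names; the paper estimates planar projective parameters rather than a line):

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve argmin_x || W^(1/2) (A x - b) ||^2 via the normal equations."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

Down-weighting the outlying observation recovers the parameters of the uncontaminated data.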
- Tobler R.F., Purgathofer W.:
Hierarchical stochastic radiosity for implicitly defined surfaces.
MGV vol. 9, no. 3, 2000, pp. 561-571.
The method presented in this paper calculates global illumination for scenes
with implicitly defined surfaces, such as CSG models. A Monte Carlo type
particle tracer which directly uses the CSG models distributes the light
power in the scene. Global illumination information can be maintained at
different resolution levels with the help of a hierarchical storage
mechanism for radiosity values based on an octree. A hierarchical meshing
algorithm is used to adaptively refine the radiosity approximations as the
particle simulation proceeds. Shadow leaks and excessive variance of the
radiosity values are avoided by filtering the illumination information
represented in the octree data structure. The included results show that
the new method is viable for generating high-quality global illumination.
- Shin Y., Lee S., Lee Y.:
Facial expression recognition based on dimension model using
automated feature extraction.
MGV vol. 9, no. 3, 2000, pp. 573-585.
This paper presents facial expression recognition based on
a dimension model of internal states that uses automated feature
extraction. We apply this approach mostly to the frontal pose.
Features of facial expressions are extracted automatically in
three steps. In the first step, a Gabor wavelet representation
provides edge extraction of the major face components using the
average value of the image's 2D Gabor wavelet coefficient
histogram. In the second step, sparse features of facial
expressions are extracted using the fuzzy C-means clustering (FCM)
algorithm for neutral faces, and in the third step,
using the dynamic model (DM) for expression images. The result of
facial expression recognition is compared with dimensional values
of internal states derived from semantic ratings of words
related to emotion by experimental subjects. The dimensional model
can recognize not only 6 facial expressions related to Ekman's
basic emotions, but also expressions of various internal states.
A facial expression in the dimension model includes two dimensions which
are pleasure-unpleasure and arousal-sleep. We show the result of
expression recognition in the dimension model. With
the dimension model we have overcome the limitations of expression
recognition based on basic emotions, and have extracted features
automatically with a new approach using the FCM algorithm and the dynamic model.
Key words: dimension model, Gabor wavelet coefficient
histogram, fuzzy C-means clustering (FCM), dynamic model (DM),
multilayer-perceptron, expression recognition.
- Stojanovic R., Mitropulos P., Koulamas C., Karayiannis Y., Koubias S.,
An approach for automated defect detection and neural classification
of web textile fabric.
MGV vol. 9, no. 3, 2000, pp. 587-607.
This paper presents an automatic vision-based system for quality
control of web textile fabrics. A general hardware and software
platform developed to solve this problem is presented and a
powerful algorithm for defect inspection is proposed. Based on the
improved binary, textural and neural network algorithms, the
proposed method gives good results in detection of many types
of fabric defects under real industrial conditions, where
presence of many types of noise is an inevitable phenomenon.
A high detection rate with good localization accuracy, low rate of
false alarms, compatibility with standard inspection tools and low
price are the main advantages of proposed system, as well as of the
overall inspection approach.
Key words: automated visual inspection, textile fabric,
defect detection, texture analysis, neural classification.
- Zorski W., Foxon B., Blackledge J., Turner M.:
Irregular pattern recognition using the Hough transform.
MGV vol. 9, no. 3, 2000, pp. 609-632.
This paper presents an application
of the Hough Transform to the tasks of learning and identifying
irregular patterns in computer vision systems. The method
presented is based on the Hough Transform with a parameter space
defined by translation, rotation and scaling operations. An
essential element of this method is the generalisation of the
Hough Transform for grey-level images. The technique simplifies
application of the Hough Transform to pattern recognition tasks
as well as accelerates the calculations considerably. The technique
could be used, for example, in a robotic system or for image analysis.
Key words: Radon transform, Hough transform, computer vision,
irregular patterns, scaling problem.
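The classical line Hough transform that such methods generalise can be sketched as voting into a (theta, rho) accumulator (grid resolutions here are arbitrary illustrative choices):

```python
import numpy as np

def hough_line_accumulator(points, n_theta=180, n_rho=101, rho_max=20.0):
    """Vote in a (theta, rho) accumulator: each point votes for every line
    x*cos(theta) + y*sin(theta) = rho passing through it."""
    acc = np.zeros((n_theta, n_rho), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[np.nonzero(valid)[0], idx[valid]] += 1
    return acc
```

Collinear points concentrate their votes in one cell, whose coordinates give the line parameters.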
- Philipp M.:
An active structure approach to stereo matching.
MGV vol. 9, no. 3, 2000, pp. 633-645.
A new global method of image
matching in stereopsis is proposed. It relies on the matching of
points (tokens) belonging to image edges. The matching is based on
the smooth disparity field constraint and the intensity-constancy
constraint on the tokens using epipolar geometry. The paper
describes the way the tokens are chosen and the matching procedure.
Results are presented for sample image pairs.
Key words: stereopsis, disparity field, epipolar geometry,
smoothness constraint, active contour models, simulated annealing.
- Stapor K.:
A knowledge-based framework for geographic map image analysis.
MGV vol. 9, no. 3, 2000, pp. 647-671.
In this paper a flexible and generally applicable knowledge-based
framework for large-scale geographic map image analysis is proposed.
The described framework is composed of three schemas, namely:
map model, object detectors and image analysis flow scheme. A new,
efficient hybrid knowledge representation scheme, and a flexible,
mixed control strategy based on reasoning with incomplete information
are proposed. To increase reliability and accuracy of map image
analysis, accumulation of information from different sources
is used. The proposed approach has been implemented in the
map conversion module MAPIN of the Integrated Spatial
Organization System SIT.
Key words: document image analysis,
geographic map image interpretation, model-based analysis,
graphics/text recognition, vectorization.
- Klette R., Yip B.:
The length of digital curves.
MGV vol. 9, no. 3, 2000, pp. 673-703.
The paper discusses one of the elementary subjects in
image analysis: how to measure the length of a digital curve?
A digital curve in the plane is defined to be a cycle given
either as an alternating sequence of vertices and edges, or an
alternating sequence of edges and squares. The paper reports about
two length estimators, one based on the partition of a frontier of a
simply-connected isothetic polygon into digital straight segments, and
one based on calculating the minimum-length polygon within an open boundary
of a simply-connected isothetic polygon. Both techniques are known
to be implementations of convergent estimators of the perimeter of bounded,
polygonal or smooth convex sets in the euclidean plane.
For each technique a linear-time algorithm is specified,
and both algorithms are compared with respect to convergence speed
and number of generated segments. The experiments
show convergent behavior also for perimeters of non-convex
bounded subsets of the euclidean plane.
Key words: digital geometry, digital curves,
length of digital curves, digitization, curve length estimation,
multigrid convergence, experimental evaluation,
digital straight segment approximation, minimum-length polygon calculation.
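A simple baseline for such length estimators is the weighted chain-code length, with separate weights for axis-parallel and diagonal moves (the default weights below are the commonly quoted bias-correcting values, stated here as an assumption; the naive weights would be 1 and sqrt(2)):

```python
import math

def chain_code_length(codes, straight_w=0.948, diagonal_w=1.340):
    """Estimate a digital curve's length from its 8-direction chain code.

    Even codes are axis-parallel unit moves, odd codes are diagonal moves;
    each class gets its own weight."""
    straight = sum(1 for c in codes if c % 2 == 0)
    diagonal = len(codes) - straight
    return straight_w * straight + diagonal_w * diagonal
```

Unlike the two estimators studied in the paper, this per-step scheme is not multigrid convergent, which is why segment- and polygon-based methods are preferred.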
- Pieczynski W., Tebbache A.-N.:
Pairwise Markov random fields and segmentation of textured images.
MGV vol. 9, no. 3, 2000, pp. 705-718.
The use of random fields, which allows one to take into
account the spatial interaction among random variables in
complex systems, has become a frequent tool in numerous problems
of statistical mechanics, spatial statistics, neural network
modelling, and others. In particular, Markov random field
based techniques can be of exceptional efficiency in some
image processing problems, like segmentation or edge detection.
In statistical image segmentation, which we address in this
work, the model is generally defined by the probability
distribution of the class field, which is assumed to be a
Markov field, and the probability distributions of the
observations field conditional to the class field. Under some
hypotheses, the a posteriori distribution of the class field,
i.e. conditional to the observations field, is still a Markov
distribution and the latter property allows one to apply
different Bayesian methods of segmentation like Maximum a
Posteriori (MAP) or Maximum of Posterior Mode (MPM). However,
in such models the segmentation of textured images is
difficult to perform and one has to resort to some model
approximations. The originality of our contribution is to
consider the markovianity of the couple (class field,
observations field). We obtain a different model; in
particular, the class field is not necessarily a Markov
field. However, the posterior distribution of the class field
is a Markov distribution, which makes possible Bayesian MAP
and MPM segmentations. Furthermore, the model proposed makes
possible textured image segmentation with no approximations.
Key words: hidden Markov fields,
pairwise Markov fields, bayesian image segmentation, textured images.
- Caban D., Zamojski W.:
Median filter implementations.
MGV vol. 9, no. 3, 2000, pp. 719-728.
The paper reports on some experiments with implementing
median or other rank-order digital image filters using field
programmable devices. It demonstrates that a single field programmable
device may be used to build a median filter. The performance
delivered using a Xilinx XC4000E-1 device exceeded 35 million
pixels per second, when filtering within a 3x3 pixels
window using 8-bit pixel representation. These results were obtained
using autosynthesis, avoiding any direct manipulation in the design.
Key words: median, rank-order filter, image processing,
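A plain software reference for the filtering operation itself (not, of course, of the FPGA design) is a 3x3 median over the image interior:

```python
def median_filter_3x3(image):
    """Replace each interior pixel with the median of its 3x3 neighbourhood;
    border pixels are left unchanged, as in many simple implementations."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(image[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1))
            out[i][j] = window[4]          # 5th of 9 sorted values
    return out
```

A hardware realisation replaces the sort with a fixed network of compare-exchange units, which is what makes throughputs of tens of megapixels per second attainable.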
- Duleba I.:
Discrete Hartley transforms in processing of medical
MGV vol. 9, no. 3, 2000, pp. 729-740.
In this paper discrete Hartley transforms were used to filter
and to compress medical ultrasonic images. The transforms are global
and resemble discrete Fourier transforms, thus the butterfly algorithm,
used in Fast Fourier Transform (FFT), can be utilized for their
computation. Hartley transforms offer some advantages over Fourier
transforms especially when computations are performed on general-purpose
Key words: image processing, compression, filtration,
Hartley transform, FFT.
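The kinship with the Fourier transform makes the discrete Hartley transform a one-liner on top of an FFT, since cas(t) = cos(t) + sin(t):

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform H(k) = sum_n x(n) * cas(2*pi*k*n/N).

    The FFT kernel is cos - i*sin, so Re(X) = sum x*cos and
    Im(X) = -sum x*sin; hence H = Re(X) - Im(X)."""
    X = np.fft.fft(x)
    return X.real - X.imag
```

The DHT is real-valued and is its own inverse up to a factor of N, one of its practical advantages on general-purpose hardware.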
- Lilas T., Kollias S.:
An active 3D robot vision
system for robotic welding applications.
MGV vol. 9, no. 4, 2000, pp. 743-762.
In this paper we present the development of an active three-dimensional
robot vision system, conducted on a large gantry-type welding robot.
The constraints are a complex working environment and the size
of the robot system, tackled by developing and using techniques
from several research fields. In particular, laser lighting is combined
with stereo vision and photogrammetry in order to obtain accurate
three-dimensional measurements. Computational geometry methods based
on Delaunay triangulations are used to generate a three-dimensional
model of the object. Pattern matching and segmentation techniques
are used to evaluate the model and identify its important features.
Path planning techniques are used for guiding the tool around the object
and adapting trajectories to fulfill the welding requirements.
The proposed active vision system significantly increases the potential
of the welding robot system, constituting a sophisticated sensor system
that is effective for robotic applications.
Key words: active vision system, vision system configuration,
vision system architecture, vision system calibration, 3D range data,
feature extraction, path planning.
- Zhang J.Z., Tsui H.-T., Wu Q.M.J.:
A monocular approach to 3-D reconstruction based
on bilateral symmetry.
MGV vol. 9, no. 4, 2000, pp. 763-773.
An object with a plane of symmetry is called bilaterally symmetrical.
When an arbitrary object and its image in a plane mirror are both visible
in a monocular view, the object and its image in the mirror
are also bilaterally symmetrical with respect to the mirror.
In this paper, we present a method for 3-D reconstruction
from a monocular image obtained by a calibrated camera based
on the bilateral symmetry theory. We first verify the algorithm
using a calibration block that is bilaterally symmetrical.
Then we test the algorithm using more natural objects
and a plane mirror whose position and orientation are known.
Very good reconstruction results have been obtained
in the experiments.
Key words: bilateral symmetry,
structure recovery, monocular vision.
- Wongthanavasu S., Sadananda R., Qi Y.:
A cellular automata-based model for edge detection.
MGV vol. 9, no. 4, 2000, pp. 775-796.
The paper discusses two-dimensional uniform cellular automata
for image processing. A cellular automaton rule using von Neumann
neighborhood is proposed for carrying out edge detection on binary images.
The computation model and characterization of the state space
of the rule are analyzed using a finite state machine, a state graph,
a deterministic finite automaton, and a characteristic polynomial.
The rule is extended to deal with 8-bit gray-scale images, and has been
tested on a set of images. Our work shows that a cellular
automata-based model for edge detection provides an optimum edge map
on binary images, and performs better on average than
comparable edge operators on 8-bit gray-scale images.
Key words: cellular automata, image processing,
finite state machine, deterministic finite automaton,
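A one-step cellular automaton of the kind discussed, using the von Neumann neighbourhood on a binary image, can be sketched as follows (the survival rule and treating out-of-bounds cells as background are assumptions, not the paper's exact rule):

```python
def ca_edge_step(grid):
    """One synchronous CA step: a foreground cell stays on only if at least
    one von Neumann neighbour is background, so interior cells are erased
    and the object boundary remains as the edge map."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if grid[i][j]:
                neighbours = [grid[i + di][j + dj]
                              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                              if 0 <= i + di < h and 0 <= j + dj < w]
                # out-of-bounds cells count as background (assumption)
                if 0 in neighbours or len(neighbours) < 4:
                    out[i][j] = 1
    return out
```

Because every cell applies the same local rule simultaneously, the update is trivially parallelisable, which is the appeal of CA models for image processing.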
- Kämpke T.:
MGV vol. 9, no. 4, 2000, pp. 797-824.
Transformations of vertex sequences of a regular grid graph into paths
of an arbitrary connected graph are facilitated according to various
coarsening and approximation operations, including minimum cost alterations
and minimum cost re-routings. The sequence transformations are intended
to support man-machine interaction, which implies the lack
of an ultimate formal design objective. This, in turn, implies that
formal methods and algorithms have to be combined in a pragmatic fashion.
For planar graphs, the notion of Voronoi regions is modified to graph
Voronoi regions which partition the plane according to proximity
to vertices and edges simultaneously. The non-planar case
is reduced to the planar case by adding all intersection points
of vertex connections to the original vertex set and by splitting vertex
connections accordingly. This allows grid point sequences
to be intermediately transformed to so-called mixed or region sequences
which are eventually transformed to vertex sequences by production rules.
The algorithmic preprocessing burden of partitioning and indexing
the Euclidean plane via the graph Voronoi regions, or approximations
thereof, is in practice larger and typically more complicated than any
of the run-time computations.
Key words: graph Voronoi region, grid graph,
regular expression, touch screen.
- Abbas A.M., Szirmay-Kalos L., Horváth T., Fóris T.:
Quadratic shading and its hardware implementation.
MGV vol. 9, no. 4, 2000, pp. 825-839.
Rendering systems often represent curved surfaces as a mesh
of planar polygons that are shaded to restore a smooth appearance.
Gouraud shading uses linear color interpolation and its hardware
implementation is relatively easy, but it handles specular highlights
incorrectly and introduces annoying artifacts called Mach banding
over the edges of the polygon mesh. In software rendering, Phong shading
has been more popular, because it can realistically handle specular
materials. Since it requires the rendering equation to be evaluated
for each pixel, its hardware support poses problems. This paper presents
a nonlinear, i.e. quadratic, interpolation scheme which lies between
Gouraud shading and Phong shading. It can be implemented in hardware
similarly to Gouraud shading, but its shading quality is comparable with that
of Phong shading. A software simulation and a VHDL description
of the shading hardware are also presented.
Key words: reflectance function, BRDF representation,
hardware rendering, incremental concept, interpolation,
Gouraud and Phong shading.
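The idea of quadratic shading can be illustrated with one-dimensional quadratic Lagrange interpolation through two endpoint intensities and a midpoint intensity (an illustration of the interpolation order only, not the paper's incremental hardware scheme):

```python
def quadratic_interpolate(c0, c_mid, c1, t):
    """Quadratic Lagrange interpolation on [0, 1] through the values
    c0 at t=0, c_mid at t=0.5 and c1 at t=1; Gouraud shading would use
    only linear interpolation between c0 and c1."""
    return ((2 * t - 1) * (t - 1) * c0
            + 4 * t * (1 - t) * c_mid
            + t * (2 * t - 1) * c1)
```

The extra midpoint sample lets the interpolant bulge inside a span, which is what allows a specular highlight to appear in the interior of a polygon.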
- Skala V., Bui D.H.:
Faster algorithm for line clipping against
a pyramid in E3.
MGV vol. 9, no. 4, 2000, pp. 841-850.
A new algorithm for line clipping against a pyramid in E3
is presented. The suggested algorithm is based on the use of a separation
function and avoids computation of intersection points that are not
endpoints of the output line segment. It also allows all cases to be solved
more effectively. The performance of this algorithm is shown
to be consistently better than that of the traditional Liang-Barsky algorithm
for line clipping in E3. Experimental results are also presented.
Key words: line clipping, computer graphics,
algorithm complexity, geometric algorithms, algorithm complexity analysis,
- Dharmaratne A., Harada K.:
Compatible convex decompositions of simple polygons.
MGV vol. 9, no. 4, 2000, pp. 851-866.
In this paper, we present an algorithm for constructing compatible
convex decompositions of two simple polygons. Given two simple polygons
with an equal number of vertices, convex decompositions of these
polygons are said to be compatible if there exists a one-to-one mapping
between them such that the two polygons are decomposed into an equal
number of convex sub-polygons and the corresponding sub-polygons
are defined by the corresponding sequence of vertices. In general,
it may not be possible to decompose two polygons compatibly if additional
vertices inside the polygon (commonly called Steiner points)
are not allowed. Our algorithm calculates a compatible convex
decomposition of two polygons with/without introducing Steiner points.
Key words: simple polygons, convex decomposition,
Steiner points, compatible convex decomposition.
- Nikodem P., Strug B.:
Re-creating Mondrian's style by artificial evolution.
MGV vol. 9, no. 4, 2000, pp. 867-875.
Genetic algorithms (GA) have become a popular computational approach
in computer arts. A brief overview of the field of evolutionary art
is sketched. An implementation of this technique intended to generate
2D images following the style of a well-known Dutch geometric painter,
Piet Mondrian, is also presented in this paper.
Key words: evolutionary methods, genetic algorithms,
evolutionary art, genetic programming.
- Lebiedz J.:
Efficient algorithms for drawing of lines in
MGV vol. 9, no. 4, 2000, p. 877.
- Reviewers' index
- Authors' index
- Contents of volume 9, 2000
Maintained by Zenon Kulpa
Last updated Apr 24, 2002