Abstracts of Papers


No. 1: Special Issue on **Diagrammatic Representation and Reasoning**,

No. 2,

No. 3,

No. 4.

Special Issue Editor: Zenon Kulpa.

- Excerpt from the

The problem of finding appropriate representations of various types of knowledge has been a subject of continued research in artificial intelligence and related fields since the beginning of these disciplines. Much of the progress in science has involved finding new representations of various phenomena or formal constructs: from the diagrams of Euclid, through the calculus notation of Newton and Leibniz, to Feynman's quantum particle diagrams, and many others. An appropriate way of representing knowledge about some phenomenon, problem, or formal system allows for effective description of the domain knowledge and facilitates reasoning and problem solving in the domain. As Simon stated: "...solving a problem simply means representing it so as to make the solution transparent."

One type of knowledge representation, used by people since time immemorial but only recently becoming an object of focused research (and of computer implementation of the results), is representation involving visual, graphical and pictorial means (diagrams). In recognition of the growing interest and volume of work in this emerging field of diagrammatic representation and reasoning, as well as its close ties with the fields of computer processing, understanding and generation of images and pictures, this Special Issue of the Machine GRAPHICS & VISION Journal has been compiled. It contains seven papers which cover a number of fundamental concepts and directions of research in the field.

- Kulpa Z.:

**Diagrammatic representation for a space of intervals**.

MGV vol. 6, no. 1, 1997, pp. 5-24. -
The paper presents a two-dimensional graphical representation
for the space of intervals and interval relations.
It consists of the representation of the space of all intervals
(the IS-diagram), and two diagrammatic representations of the space
of relations between intervals: the W-diagram (based on the
IS-diagram), and lattice diagram (for the neighbourhood lattice
of the set of basic relations). Also, a set of new graphical symbols
for the interval relations is proposed. Examples of applying
the proposed representations to knowledge about
interval algebra and certain interval problems are given,
illustrating the possibilities and advantages of the proposed
representation.
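
A minimal sketch of the two ingredients such a representation builds on: mapping an interval to a point in the plane, and the set of basic relations between intervals. The (midpoint, radius) coordinates and the Allen-style relation names below are illustrative assumptions, not Kulpa's exact construction or notation.

```python
def interval_point(a, b):
    """Map the interval [a, b] to a 2D point (midpoint, radius)."""
    return ((a + b) / 2.0, (b - a) / 2.0)

def basic_relation(x, y):
    """Classify the basic relation between intervals x = (a, b), y = (c, d)."""
    (a, b), (c, d) = x, y
    if b < c:
        return "before"
    if d < a:
        return "after"
    if b == c:
        return "meets"
    if d == a:
        return "met-by"
    if (a, b) == (c, d):
        return "equals"
    if a == c:
        return "starts" if b < d else "started-by"
    if b == d:
        return "finishes" if a > c else "finished-by"
    if c < a and b < d:
        return "during"
    if a < c and d < b:
        return "contains"
    return "overlaps" if a < c else "overlapped-by"
```

In such a point representation every interval becomes a single point, so a region of the plane corresponds to a whole family of intervals; this is what makes diagrammatic reasoning about spaces of intervals possible.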

**Key words**: diagrams, diagrammatic representation, diagrammatic reasoning, intervals, interval relations, interval algebra, knowledge representation, qualitative physics.

- Barker-Plummer D., Bailin S.C.:

**The role of diagrams in mathematical proofs**.

MGV vol. 6, no. 1, 1997, pp. 25-56. -
This paper describes our research into the way in which
diagrams convey mathematical meaning. Through the
development of an automated reasoning system, called GROVER,
we have tried to discover how a diagram can convey the
meaning of a proof. GROVER is a theorem proving system that
interprets diagrams as proof strategies. The diagrams are
similar to those that a mathematician would draw informally
when communicating the ideas of a proof. We have applied
GROVER to obtain automatic proofs of three theorems that are
beyond the reach of existing theorem proving systems
operating without such guidance. In the process, we have
discovered some patterns in the way diagrams are used to
convey mathematical reasoning strategies. Those patterns,
and the ways in which GROVER takes advantage of them to prove
theorems, are the focus of this paper.

**Key words**: mathematical diagrams, reasoning strategies, visualization, proof, automated reasoning.

- Anderson M., McCartney R.:

**Learning from diagrams**.

MGV vol. 6, no. 1, 1997, pp. 57-76. -
The theory of inter-diagrammatic reasoning provides a syntax
for a general diagram and set of operators that can be used
to reason with collections of related diagrams. We show the
theory's utility as a means of learning from diagrammatic
representations by presenting two different methods of
machine learning that learn from diagrams in two distinct
diagrammatic domains.

- Olivier P.:

**Hierarchy and attention in computational imagery**.

MGV vol. 6, no. 1, 1997, pp. 77-88. -
The most recent characterisation of visual mental imagery is
in terms of a complex set of interdependent subsystems rooted
in a theory of visual perception. We concentrate our
attention on a particular component of the architecture for
visual mental imagery, the visual buffer, and consider its
role in the performance of imagistic reasoning. After a brief
description of the relevant properties of the visual buffer
we critically review previous computational models of imagery
before going on to describe our own model of the visual
buffer and its application to reasoning about the kinematic
behaviour of higher pairs.

**Key words**: visual mental imagery, computational imagery, diagrammatic reasoning, kinematics.

- Lemon O., Pratt I.:

**Spatial logic and the complexity of diagrammatic reasoning**.

MGV vol. 6, no. 1, 1997, pp. 89-108. -
Researchers have sought to explain the observed "efficacy"
of diagrammatic reasoning (DR) via the notions of "limited
abstraction" and inexpressivity. We argue that application
of the concepts of computational complexity to systems of diagrammatic
representation is necessary for the evaluation of precise claims about
their efficacy. We show here how to give such an analysis.
Centrally, we claim that recent formal analyses of
diagrammatic representations fail to account for the ways
in which they employ spatial relations in their representational work.
This focus raises some problems for the expressive power of graphical systems,
related to the topological and geometrical constraints of the medium.
A further idea is that some diagrammatic reasoning may be analysed
as a variety of topological inference. In particular, we show how
reasoning in some diagrammatic systems is of polynomial complexity,
while reasoning in others is NP hard. A simple case study is carried out,
which pinpoints the inexpressiveness and complexity of versions
of Euler's Circles. We also consider the expressivity and complexity
of cases where a limited number of regions is used, as well as Venn
diagrams, "GIS-like" representations, and some three dimensional cases.

**Key words**: spatial logic, diagrammatic reasoning, complexity, topological inference, Euler's Circles, Venn diagrams, GIS.

- Oliver M., O'Shea T., Fung P.:

**Three representations of modal logic: a comparative experiment**.

MGV vol. 6, no. 1, 1997, pp. 109-128. -
An empirical study was carried out to assess the comparative
benefits for learners of three representations of modal
logic. The three representational styles used were
syntactic, diagrammatic, and a learning environment
representation, which combined diagrams with block-world
examples. Results showed that the syntactic and learning
environment conditions were significantly more effective for
learners than the diagrammatic condition. The learning
environment condition also had significant motivational
benefits, and provided a meaningful grounding for the
concepts being learnt. An explanation based on Green's
cognitive dimensions is proposed for the poor performance of
learners using the diagrammatic condition. This experiment
demonstrates that not all graphical representations are of
benefit to learners, and that in addition to choosing a
representation which maps efficiently onto the domain,
motivational and cognitive factors must be considered.
A learning environment style of representation, combining law
encoding diagrams with concrete examples, is proposed to be
an effective way of teaching topics such as modal logic.

**Erratum**:

*Due to an editor's error, the first line of the Introduction to this paper was omitted in the printed version.*

The first paragraph of the paper should read: "This paper reports on a study which investigates the effects of three representational styles for learners of modal logic. The conditions used teaching material that was informationally equivalent, but which differed in representational style. The three styles used can be characterised as syntactic, diagrammatic, or "learning environment" (a combination of diagrams with examples). This paper focuses on two of the concepts covered in the material: proofs and modal systems. A complete report can be found in [13]."

- Missikoff M., Pizzicannella R.:

**A Deductive Approach to Object-Oriented Diagramming for Information System Analysis**.

MGV vol. 6, no. 1, 1997, pp. 129-151. -
With the rapid expansion of Object-Oriented (O-O) programming,
Object-Oriented Analysis and Design (OOAD) is also rapidly developing, with a
significant number of methods proposed in the literature, and a few
tested in the field.

OOAD methods are based on a diagrammatic approach. Diagrams are intuitive, fast to learn and easy to use. However, they lack formality, and therefore their use is mainly descriptive, while validation and verification are, to a large extent, performed manually. The lack of formality in diagrammatic modeling is not easily removed without loss of intuitiveness and transparency.

In this paper we present a method aimed at adding formality, without loss of clarity, to an OOAD diagram. This is achieved by using a simple, rule-based, diagrammatic formalism, TQD++, and a deductive approach. TQD++ has been conceived to supply a diagrammatic representation for the Object-Oriented specification language TQL++, which is used to construct conceptual models of O-O database applications and has a formal semantics. Our diagrammatic approach is based on an abstract syntax, where the graphical symbols are represented in their essence, not their appearance. The actual drawing of symbols is left to a high-level layout facility. In this approach, the abstract diagram models the part of the analysis best suited to graphical representation; the analysis model is then completed by conceptual annotations in TQL++. This hybrid analysis model can be verified and validated by Mosaico, an O-O conceptual modeling tool developed at IASI-CNR.

- Kozera R., Klette R.:

**Finite difference based algorithms for a linear shape from shading**.

MGV vol. 6, no. 2, 1997, pp. 157-201. -
We analyse different sequential algorithms for the recovery
of object shape from a single shading pattern generated under
the assumption of a linear reflectance map. The algorithms
are based on the finite difference approximations of the
derivatives. They operate on a rectangular discrete image and
use the height of the sought-after surface along a curve in
the image (image boundary) as initial data.
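
The flavour of such schemes can be illustrated with a toy marching solver. Assuming a linear reflectance map E = a·p + b·q + c with p = ∂u/∂x and q = ∂u/∂y, backward differences turn the image irradiance equation into a recurrence that fills the grid from known boundary values. This is a sketch under those assumptions, not one of the authors' algorithms.

```python
def recover_height(E, boundary_row, boundary_col, a, b, c, h=1.0):
    """Recover the height map u from an image E generated by a linear
    reflectance map E = a*p + b*q + c, given u on the first row and
    first column, using backward finite differences (requires a+b != 0)."""
    n, m = len(E), len(E[0])
    u = [[0.0] * m for _ in range(n)]
    u[0] = list(boundary_row)          # initial heights on the first row
    for i in range(n):
        u[i][0] = boundary_col[i]      # initial heights on the first column
    for i in range(1, n):
        for j in range(1, m):
            # E[i][j] = a*(u[i][j]-u[i][j-1])/h + b*(u[i][j]-u[i-1][j])/h + c
            u[i][j] = (h * (E[i][j] - c)
                       + a * u[i][j - 1] + b * u[i - 1][j]) / (a + b)
    return u
```
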

- Tieng Q.M., Boles W.W., Deriche M.:

**Recognition of polyhedral objects based on wavelet representation and attributed graphs**.

MGV vol. 6, no. 2, 1997, pp. 203-224. -
We propose a method for recognising polyhedral objects from
either their weak perspective or orthogonal projections on an
image plane. The method is applicable to a single
polyhedral object per image. Each solid object is
represented by an attributed graph whose nodes represent the
object faces and the arcs represent the edges formed by the
intersection of adjoining faces. Each node of the graph is
associated with an attributed vector consisting of a set of
wavelet transform based affine invariant features in addition
to a scalar feature. This scalar feature is the type of
shape to which the face belongs. Each arc carries a single
scalar attribute: the angle between the two
adjoining faces corresponding to the ends of that arc.
A hierarchical matching algorithm is proposed for recognising
unknown objects based on the attributed graphs.
The algorithm consists of four stages arranged in the order
of increasing computational complexity. The proposed method has
been tested with several polyhedral objects in synthetic as
well as real images. Experimental results show that in most
cases, a single projection image is sufficient to recognise
the unknown object in the scene.

- Verestoy J., Chetverikov D.:

**Shape defect detection in ferrite cores**.

MGV vol. 6, no. 2, 1997, pp. 225-236. -
In the framework of a European technological research
project, a general method is presented for shape measurement
and defect detection of industrially produced objects using
the characteristic 2D projections of the objects.
The method is applied to the visual inspection
and dimensional measurement of ferrite cores. An optical
shape gauge system is described, based on rotation-invariant
shape matching. A novel shape representation is introduced
that facilitates the invariant matching. Special attention
is paid to finding the optimal reference position and orientation
(pose) of the measured shape for invariant comparison
to the reference shape. This problem arises because no reference pose
(e.g., baseline) can be specified a priori, as shape defects may
affect any of the dimensions.

**Key words**: image analysis, industrial inspection, ferrite cores, shape gauge, dimensional measurement, invariant shape comparison, shape matching.

- Pham B.:

**A hybrid representation model for aesthetic factors in design**.

MGV vol. 6, no. 2, 1997, pp. 237-245. -
A general hybrid representation scheme is presented for
aesthetic factors in design, which consists of three parts:
visual, symbolic and quantitative. This scheme would allow
designers' aesthetic intents to be specified in ways that are
more intuitive and familiar to designers. The representation is
then inferred and mapped onto appropriate mathematical and
geometric techniques in order to achieve required effects on
shape and other characteristics of designed objects. This
approach would provide an environment which allows designers
fluid exploration of ideas in the conceptual stage of design,
without being encumbered by precise mathematical concepts and
details.

**Key words**: aesthetic factors, computer-aided design.

- Palenichka R. M.:

**Lossless image data compression by binary block segmentation**.

MGV vol. 6, no. 2, 1997, pp. 247-264. -
The problem of reversible data compression of
radiographic images is considered, with application to
diagnostic imaging. The hierarchical (multiresolution)
prediction approach with a subsequent entropy encoding is
proposed for efficient compression by using a preliminary
spatial decorrelation of image data. The process of
decorrelation is envisaged as a feature extraction process by
a differentiation in the space domain based on a piecewise
polynomial model of the image data. The binary segmentation of
blocks makes it possible to control the block entropy before
encoding during the multiresolution prediction process.
The experimental results confirm the relatively high compression
ratio achieved by this lossless method.
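
The core idea of decorrelation before entropy coding can be shown with a one-line predictor: residuals of a smooth image concentrate around a few values, so their entropy (a lower bound on bits per symbol for an entropy coder) drops below that of the raw data. This is a generic illustration, not the paper's hierarchical scheme.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy in bits per symbol of a sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def row_residuals(image):
    """Residuals of a simple left-neighbour predictor, row by row."""
    res = []
    for row in image:
        prev = 0
        for v in row:
            res.append(v - prev)   # prediction error, to be entropy coded
            prev = v
    return res
```
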

**Key words**: image lossless compression, binary segmentation, entropy encoding, prediction, decorrelation.

- Ignatiev V.M., Abuzova I.V., Larkin E.V.:

**Image filtering in the MIMD concurrent computer**.

MGV vol. 6, no. 2, 1997, pp. 265-274. -
The organization principles and time characteristics of image
filtering in a MIMD concurrent computer with an interprocessor
communicator are considered. It is shown that the
dependence of computation time on the number of processors
has a minimum point, whose position on the time axis is
determined by the hardware and software features of the system.
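
The reported minimum can be reproduced with a standard toy cost model (our assumption, not the authors' analysis): computation time falls as work/n while communication overhead grows linearly in the number of processors n, giving an optimum near sqrt(work/cost).

```python
import math

def total_time(work, comm_cost, n):
    """Toy MIMD model: computation work/n plus communication comm_cost*n."""
    return work / n + comm_cost * n

def optimal_processors(work, comm_cost):
    """Continuous minimiser of the toy model: n* = sqrt(work/comm_cost)."""
    return math.sqrt(work / comm_cost)
```

For example, with work = 100 and unit communication cost the model is minimised at 10 processors; using either fewer or more processors increases total time.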

**Key words**: filtering, concurrency, communicator, image.

- Stevens M.R., Beveridge J.R., Goss M.E.:

**Visualizing multi-sensor model-based object recognition**.

MGV vol. 6, no. 3, 1997, pp. 279-303. -
A difficult problem when designing automatic object recognition algorithms
is the visualization of relationships between sensor data and the internal
models used by the recognition algorithms. In our particular case, we need
to coregister color, thermal (infrared), and range imagery, to 3D object
models in an effort to determine object positions and orientations in
three-space.

In this paper we describe a system for interactive visualization of the various spatial relationships between the heterogeneous data sources. This system is designed to be closely linked to the object recognition software such that it allows detailed monitoring of each step in the recognition process. We employ several novel techniques for visualizing the output from an imaging range device. Our system also incorporates sensor models which can simulate sensor data for visible features of stored object models, and display these features in the proper position relative to the appropriate sensor.

**Key words**: CAD modeling, multi-sensor, visualization.

- Ranefall P., Nordin B., Bengtsson E.:

**A new method for creating a pixel-wise box classifier for colour images**.

MGV vol. 6, no. 3, 1997, pp. 305-323. -
When segmenting colour images, pixelwise classification is
often a useful approach.
In this paper, a method for creating a pixelwise box
classifier to be applied to multiband images is presented.
It is based on a hierarchical colour space splitting
algorithm originally presented as a method for selecting
colours for a display that does not support full colour
quality. Through the addition of the concept of interactively
defined reference pixels the original unsupervised clustering
algorithm is transformed into a supervised classification
algorithm. This classifier is compared with the commonly
used Maximum Likelihood (ML) classifier, with respect to
speed and average colour distance. It is also shown that the
algorithm applied to a reference image defines a metric in
the colour space. The proposed method is particularly useful
when the same classifier should be applied to several similar
images, since the resulting box classifier can be implemented
efficiently using table look-up techniques.
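
The essence of a box classifier is easy to sketch: each class is an axis-aligned box in colour space, and classification is a containment test (which, over a quantized colour space, can be precomputed into a lookup table). The training and classification below are a generic illustration, not the paper's hierarchical splitting algorithm, which additionally controls how the boxes are derived.

```python
def fit_boxes(samples):
    """Per-class axis-aligned bounding box of training pixels.
    samples: {class_label: [(r, g, b), ...]}"""
    boxes = {}
    for label, pixels in samples.items():
        lo = tuple(min(p[k] for p in pixels) for k in range(3))
        hi = tuple(max(p[k] for p in pixels) for k in range(3))
        boxes[label] = (lo, hi)
    return boxes

def classify(pixel, boxes, default=None):
    """Assign the first class whose box contains the pixel."""
    for label, (lo, hi) in boxes.items():
        if all(lo[k] <= pixel[k] <= hi[k] for k in range(3)):
            return label
    return default
```
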

**Key words**: box classifier, colour images, supervised classification.

- Wang Y., Bhattacharya P.:

**An algorithm for finding parameter--dependent connected components of gray-scale images**.

MGV vol. 6, no. 3, 1997, pp. 325-340. -
In a previous work, we have introduced the concept of a
parameter-dependent connected component of gray-scale images,
which is a convenient tool to analyze or understand images at
a higher level than the pixel level. In this paper, we
describe an algorithm for finding the parameter-dependent
components for a given image. We discuss different strategies
used in the algorithm. Since the proposed algorithm is
independent of the formation of the images, it can be used
for the analysis of many types of images. The experimental
results show that for some appropriate values of the
parameters, the objects of an image may be represented by its
parameter-dependent components reasonably well. Thus, the
proposed algorithm provides us with the possibility of
analyzing images further at the component level.
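
One plausible reading of a parameter-dependent component (an assumption for illustration; the authors' definition may differ) is a maximal set of pixels connected through neighbours whose gray levels differ by at most a threshold tau. Varying tau then merges or splits components:

```python
from collections import deque

def components(image, tau):
    """Label 4-connected components in which adjacent pixels are linked
    when their gray-level difference is at most tau.
    Returns (label grid, number of components)."""
    n, m = len(image), len(image[0])
    labels = [[-1] * m for _ in range(n)]
    count = 0
    for si in range(n):
        for sj in range(m):
            if labels[si][sj] != -1:
                continue
            labels[si][sj] = count
            queue = deque([(si, sj)])
            while queue:                      # breadth-first flood fill
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    a, b = i + di, j + dj
                    if (0 <= a < n and 0 <= b < m and labels[a][b] == -1
                            and abs(image[a][b] - image[i][j]) <= tau):
                        labels[a][b] = count
                        queue.append((a, b))
            count += 1
    return labels, count
```
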

**Key words**: gray-scale image, connected components, algorithm.

- Aboul Ella H., Nakajima M.:

**Image morphing with scattered data points based on snake model and thin plate spline transformation**.

MGV vol. 6, no. 3, 1997, pp. 341-351. -
Image morphing is an active field of research and recent
efforts aim at improving both user interface and warping
results. Generally, the morphing technique involves three
problems. The first is to establish correspondence between
the two given images, which is the most difficult part of the
morphing process. The correspondence is usually established by hand.
The second problem is to define or construct a mapping
function which deforms the first image towards the second one
based on these feature points, and vice versa. The third
problem is to blend pixel values of the two deformed images
to create in-between images; this completes the morphing
process. This study proposes strategies to solve these
problems.

First, we adopt a semi-automatic algorithm based on the snake model to specify the feature correspondence between the two given images. It allows a user to extract a contour that defines facial features such as lips, mouth, profile, etc., by specifying only the endpoints of the contour around the feature, which serve as the extremities of the contour. We then use these two endpoints as anchor points and automatically compute the image information around them to provide boundary conditions.

Next, we optimize the contour by taking this information into account, at first only close to its extremities. During the iterative optimization process, the image forces move progressively from the contour extremities towards its centre to define the feature. This helps the user to easily define the exact position of a feature, and may also reduce the time taken to establish feature correspondence between two images. For the second image morphing problem, this paper presents a warping algorithm using the thin plate spline, a well-known scattered data interpolation method which has several advantages. It is efficient in time complexity and produces smoothly interpolated morph images with only a remarkably small number of specified feature points. It allows each feature point to be mapped to the corresponding feature point in the warped image.

Finally, the in-between images could be defined by creating a new set of feature points through the cross-dissolving process.

**Key words**: feature specification, image warping, image morphing, snake, thin plate spline.

- Hajdasz T., Maciejewski H.:

**Image filtering for noise reduction based on fractal dimension**.

MGV vol. 6, no. 3, 1997, pp. 353-361. -
In this paper we present an algorithm for noise reduction
based on estimation of the fractal dimension of the neighborhood
of points in the image. Based on the estimated fractal
dimension, pixels interpreted as noise are distinguished from
objects of the original clean image. This technique is
effective for filtering, e.g., scanned technical drawings.
A few examples presented in the paper show that the algorithm
introduces hardly any blurring into edges of the processed
image. The algorithm can be applied, for example, in software
for filtering scanned images.
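
A common way to estimate fractal dimension (here box counting; the paper's estimator may differ) is to count occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). Noise specks and genuine drawing strokes can then be separated by their estimated dimension.

```python
import math

def box_counting_dimension(points, sizes):
    """Least-squares slope of log N(s) vs. log(1/s) for a 2D point set,
    where N(s) is the number of boxes of side s containing a point."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Points sampled along a line yield a dimension near 1, a filled square near 2.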

**Key words**: filtering for noise reduction, fractal dimension.

- Abdel-Qader I.M.:

**Computation of displacement fields in noisy images**.

MGV vol. 6, no. 3, 1997, pp. 363-380. -
Motion estimation is an ill-posed problem since the motion
parameters that depend on optical variations are not unique.
Researchers need to add constraints to obtain unique
displacement estimates. In this paper, a new motion
estimation algorithm based on Markov Random Fields (MRF)
modeling is developed. The algorithm adds a new constraint
to the motion problem, named the restoration term. As with
MRF model-based algorithms, the motion estimation problem is
formalized as an optimization problem. Mean Field Annealing
Theory is used to compute accurate displacement fields in the
noisy image with explicit consideration of the noise
presence. Furthermore, the algorithm computes the mean field
values of the estimated image. The algorithm results in
accurate displacement vector fields, even for scenes with
noise or intensity discontinuities as demonstrated by the
implementation results on synthetic and real world image
sequences.

- Gorelik A.G.:

**Three-valued calculi in the problem of conversion from CSG to boundary models**.

MGV vol. 6, no. 3, 1997, pp. 381-387. -
This paper describes a computer method for conversion from
Constructive Solid Geometry (CSG) models to Boundary
Representation (B-Rep). The CSG model of the object is
represented in an algebraic form as a function of arbitrary
length. The conversion from the CSG model to B-Rep is performed
in one step only. For solving this problem, the Indexing
Three-valued Calculi (ITC) are used.

**Key words**: constructive solid geometry, boundary representation, conversion, three-valued calculus.

- Tarel J.-P.:

**Global 3D planar reconstruction with uncalibrated cameras and rectified stereo geometry**.

MGV vol. 6, no. 4, 1997, pp. 393-418. -
This article presents a seldom-studied 3D reconstruction approach and
focuses on its numerous advantages. It is a global approach,
in which the algorithm attempts to reconstruct high-level geometric
features by using only global attributes of the image projections
of each feature. Our elementary feature is a 3D planar patch, and
geometric moments are the global attributes. The proposed method
of reconstruction has the following advantages:

- the use of the geometric moments of image regions does not require pixel-to-pixel matching and yields robustness to small segmentation errors on region edges,
- the rectified stereo geometry constraint allows reconstruction with uncalibrated or calibrated cameras when epipolar geometry is known,
- valid matched regions are selected and thus the probably occluded planar patches are not reconstructed,
- interpolation of views between the left and right cameras can be performed to produce synthetic intermediate views of the observed scene.
Experiments on real and synthetic stereograms are presented to illustrate the advantages of the approach.

**Key words**: 3D, stereo, reconstruction, planar assumption, uncalibrated camera, rectified geometry, moments of inertia, geometric attributes, affine invariants, view morphing.

- Dabkowska M., Mokrzycki W.S.:

**A multiview model of convex polyhedra**.

MGV vol. 6, no. 4, 1997, pp. 419-450. - This paper deals with multiview models of convex polyhedra for visual identification systems. In particular, it presents a new method and algorithm for generating views of convex polyhedra using the view sphere concept. It also describes a new method and algorithm for completing the set of views of a model. The method consists in defining one-view areas on the view sphere and investigating the adjacency of these areas. Information about the adjacency of one-view areas is needed for checking the covering of the sphere with these areas, for finding possible gaps in the covering, and for generating missing views. The completeness of the covering of the sphere with one-view areas is necessary for the correctness of the representation.

**Key words**: model based identification of objects, 3D view representations of models, view sphere, one-view area, covering of a view sphere with one-view areas, completeness of representation.

- Wünsche B.:

**A survey and analysis of common polygonization methods & optimization techniques**.

MGV vol. 6, no. 4, 1997, pp. 451-486. - Implicitly defined surfaces (isosurfaces) are a common entity in science and engineering. Polygonizing an isosurface allows storing it conventionally and permits hardware-assisted rendering, an essential condition for achieving real-time display. In addition, the polygonal approximation of an isosurface is used for simplified geometric operations such as collision detection and surface analysis. Optimization techniques are frequently employed to speed up the polygonization algorithm or to reduce the size of the resulting polygon mesh.

- Bartkowiak A., Szustalewicz A.:

**Detecting multivariate outliers by a grand tour**.

MGV vol. 6, no. 4, 1997, pp. 487-505. - A method is described for viewing multivariate data vectors and identifying outliers among them. The method is applied to two sets of benchmark data: Brownlee's stack-loss data and the Hawkins-Bradu-Kass data. All the outliers contained in these data are easily identified.

Generally, it is expected that the method will yield good results (i.e. will find the outliers) for data having elliptical or nearly elliptical distributions.

**Key words**: exploratory data analysis, atypical data vectors, graphical representation of points from multivariate space, sequential rotations and projections.

- Grabska E.:

**Theoretical concepts of graphical modelling**. [Dissertation Abstract]

MGV vol. 6, no. 4, 1997, p. 507. - The purpose of the dissertation is to develop a theoretical basis of graphical modelling which is closely connected with creative and commercial design. Graphical modelling is discussed within the framework of graph transformations.

- Reviewers' index
- Authors' index
- Contents of volume 6, 1997
