DSpace Collection: https://dspace.lboro.ac.uk/2134/2270

URI: https://dspace.lboro.ac.uk/2134/25628
Title: The design of a neural network compiler
Authors: Sulaiman, Md. Nasir
Abstract: Computer simulation is a flexible and economical way for
rapid prototyping and concept evaluation with Neural
Network (NN) models. Increasing research on NNs has led
to the development of several simulation programs. Not
all simulators have the same scope: some support only a
fixed network model, while others are more general.
Designing a simulation program for general purpose NN
models has become a current trend because of its
flexibility and efficiency. A programming language
designed specifically for NN models is preferable, since
existing high-level languages such as C require NN
designers to have a strong computing background. Program
translation for NN languages is performed by an
interpreter, a compiler, or a combination of the two. The
languages themselves also come in various styles:
procedural, functional, descriptive and object-oriented.
The main focus of this thesis is to study the
feasibility of using a compiler method for the
development of a general-purpose simulator - NEUCOMP that
compiles the program written as a list of mathematical
specifications of the particular NN model and translates
it into a chosen target program. The language supported
by NEUCOMP is based on a procedural style. The list of
mathematical statements required by the NN model is
written in the program. These statements are expressed as
scalar, vector and matrix assignments, which NEUCOMP
translates into actual program loops.
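The abstract does not give NEUCOMP's actual translation rules, but the kind of expansion it describes can be sketched as follows: a single matrix-vector assignment in the source language becomes explicit loop nests in the target program. The function name and the use of Python as the "target" are illustrative assumptions, not the thesis's implementation.

```python
# Hypothetical sketch of the translation NEUCOMP is described as doing:
# the source statement  y = W * x  (a matrix-vector assignment) is
# expanded into explicit program loops in the target program.

def matvec_assign(W, x):
    """Target-program loops for  y = W * x,
    where W is an m-by-n matrix and x a length-n vector."""
    m, n = len(W), len(x)
    y = [0.0] * m
    for i in range(m):          # one loop over the result dimension
        acc = 0.0
        for j in range(n):      # inner loop over the contracted index
            acc += W[i][j] * x[j]
        y[i] = acc
    return y

W = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
print(matvec_assign(W, x))  # [3.0, 7.0]
```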
NEUCOMP compiles a simulation program written in the
NEUCOMP language for any NN model, provides graphical
facilities such as portraying the NN architecture and
displaying a graph of the results during training, and
can generate a program that runs on a parallel
shared-memory multi-processor system.
Description: A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy of Loughborough University, 1994.

URI: https://dspace.lboro.ac.uk/2134/25381
Title: Computer solution of non-linear integration formula for solving initial value problems
Authors: Yaakub, Abdul R. bin
Abstract: This thesis is concerned with the numerical
solution of initial value problems in ordinary
differential equations and covers the various aspects of
single step integration methods. Specifically, its main
focus is to study numerical methods based on non-linear
integration formulae with a variety of means: the
Contraharmonic mean (CoM) (Evans and Yaakub [1995]), the
Centroidal mean (CeM) (Yaakub and Evans [1995]) and the
Root-Mean-Square (RMS) (Yaakub and Evans [1993]) for
solving initial value problems.
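The general idea behind these methods is to replace the arithmetic mean of slopes used in a classical second order step with a non-linear mean. The sketch below uses the contraharmonic mean C(a, b) = (a² + b²)/(a + b) of the two Euler slopes; the function name is hypothetical and the precise CoM, CeM and RMS formulae are those given in the thesis, not necessarily this one.

```python
import math

# Illustrative second order step: Heun's method averages two slopes
# arithmetically; here the arithmetic mean is replaced by the
# contraharmonic mean  C(a, b) = (a^2 + b^2) / (a + b).
def contraharmonic_step(f, x, y, h):
    k1 = f(x, y)                        # slope at the left endpoint
    k2 = f(x + h, y + h * k1)           # Euler-predicted slope at x + h
    return y + h * (k1**2 + k2**2) / (k1 + k2)

# Example: y' = y, y(0) = 1, one step of size 0.1; exact value is e^0.1.
y = contraharmonic_step(lambda x, y: y, 0.0, 1.0, 0.1)
print(abs(y - math.exp(0.1)))           # small truncation error
```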
It includes a study of the applications of the second
order CoM method for the parallel implementation of
extrapolation methods for ordinary differential equations
with the ExDaTa schedule by Bahoshy [1992]. Another
important topic presented in this thesis is that a fifth
order five-stage explicit Runge-Kutta method, or weighted
Runge-Kutta formula (Evans and Yaakub [1996]), exists,
which is contrary to Butcher [1987] and the theorem in
Lambert ([1991], p. 181).
The thesis is organized as follows. Chapter 1 gives an
introduction to initial value problems in ordinary
differential equations, and to parallel computers and
software. Chapter 2 discusses the basic preliminaries and
fundamental concepts in mathematics, an algebraic
manipulation package (e.g. Mathematica) and basic
parallel processing techniques. Chapter 3 is a survey of
single step methods for solving ordinary differential
equations, including the Taylor series method, the
Runge-Kutta method and a linear multistep method for
non-stiff and stiff problems.
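As a concrete instance of the single step methods surveyed in Chapter 3, the classical fourth order Runge-Kutta method advances the solution one step at a time using four slope evaluations:

```python
import math

# Classical fourth order Runge-Kutta method for y' = f(x, y):
# a standard single step method, shown here for reference.
def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(f, x0, y0, h, steps):
    """Integrate from x0 over `steps` equal steps of size h."""
    x, y = x0, y0
    for _ in range(steps):
        y = rk4_step(f, x, y, h)
        x += h
    return y

# y' = y, y(0) = 1 integrated to x = 1; the exact answer is e.
y1 = solve(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(abs(y1 - math.e))  # O(h^4) global error
```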
Chapter 4 gives a new Runge-Kutta formula for
solving initial value problems using the Contraharmonic
mean (CoM), the Centroidal mean (CeM) and the
Root-Mean-Square (RMS). An error and stability analysis
for this variety of means and numerical examples are also
presented. Chapter 5 discusses the parallel
implementation, on the Sequent 8000 parallel computer, of
the Runge-Kutta contraharmonic mean (CoM) method with
extrapolation procedures using explicit data task
assignment scheduling (EXDATA) strategies. A new
Runge-Kutta RK(4, 4) method is introduced, and the theory
and analysis of its properties, investigated and compared
with the more popular RKF(4,5) method, are given in
Chapter 6. Chapter 7 presents a new integration method
with error control for the solution of a special class of
second order ODEs. In Chapter 8, a new weighted Runge-Kutta
fifth order method with 5 stages is introduced. By
comparison with the currently recommended RK4(5) Merson
and RK5(6) Nystrom methods, the new method gives improved
results. Chapter 9 proposes a new fifth order Runge-Kutta
type method for solving oscillatory problems by the use
of trigonometric polynomial interpolation which extends
the earlier work of Gautschi [1961]. An analysis of the
convergence and stability of the new method is given,
with a comparison against the standard Runge-Kutta
methods.
Finally, Chapter 10 summarises and presents
conclusions on the topics
discussed throughout the thesis.
Description: A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy of Loughborough University, 1996.

URI: https://dspace.lboro.ac.uk/2134/25199
Title: Some aspects of the efficient use of multiprocessor control systems
Authors: Woodward, Michael C.
Abstract: Computer technology, particularly at the circuit level, is fast
approaching its physical limitations. As future needs for greater
power from computing systems grow, increases in circuit switching
speed (and thus instruction speed) will be unable to match these
requirements.
Greater power can also be obtained by incorporating several processing
units into a single system. This ability to increase the performance
of a system by the addition of processing units is one of the major
advantages of multiprocessor systems. Four major characteristics of
multiprocessor systems have been identified (28) which demonstrate
their advantage. These are:-
Throughput
Flexibility
Availability
Reliability
The additional throughput obtained from a multiprocessor has been
mentioned above. This increase in the power of the system can be
obtained in a modular fashion, with extra processors being added as
greater processing needs arise. The addition of extra processors
also has (in general) the desirable advantage of giving a smoother
cost-performance curve (63). Flexibility is obtained from the
increased ability to construct a system matching the user's requirements
at a given time without placing restrictions upon future expansion.
With multiprocessor systems, the potential also exists of making
greater use of the resources within the system.
Availability and reliability are inter-related. Increased availability
is achieved, in a well designed system, by ensuring that processing
capabilities can be provided to the user even if one (or more) of the
processing units has failed. The service provided, however, will
probably be degraded due to the reduction in processing capacity.
Increased reliability is obtained by the ability of the processing
units to compensate for the failure of one of their number. This
recovery may involve complex software checks and a consequent decrease
in available power even when all the units are functioning.
Description: A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy of Loughborough University, 1981.

URI: https://dspace.lboro.ac.uk/2134/25161
Title: Fast learning neural networks for classification
Authors: Tay, Leng Phuan
Abstract: Neural network applications can generally be divided into two categories. The first
involves function approximation, where the neural network is trained to perform intelligent
interpolation and curve fitting from the training data. The second category involves
classification, where specific exemplar classes are used to train the neural network. This
thesis directs its investigations towards the latter, i.e. classification.
Most existing neural network models are developments that arise directly from human
cognition research. It is felt that while neural network research should head towards the
development of models that resemble the cognitive system of the brain, researchers should
not abandon the search for useful task oriented neural networks. These may not possess the
intricacies of human cognition, but are efficient in solving industrial classification tasks.
It is the objective of this thesis to develop a neural network that is fast learning, able
to generalise and achieve good capacity to discern different patterns even though some
patterns may be similar in structure. This eventual neural network will be used in the
pattern classification environment.
The first model developed was the result of studying and modifying the basic ART I
model. The "Fast Learning Artificial Neural Network I" (FLANN I) maintains good
generalisation properties and is progressive in learning. Although this neural network
achieves fast learning speeds of one epoch, it was limited to binary inputs and was
unable to operate on continuous values. This posed a real problem because industrial
applications usually require the manipulation of continuous values.
The second model, FLANN II, was designed based on the principles of FLANN I. It was
built on the nearest neighbour recall principle, which allowed the network to operate on
continuous values. Experiments were conducted on the two models designed and the results
were favourable. FLANN II was able to learn the points in a single epoch and obtain
exceptional accuracy, a significant improvement over other researchers' results.
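The nearest neighbour recall principle that FLANN II builds on can be sketched generically: an unseen input is assigned the class of the closest stored exemplar. This is plain nearest neighbour classification for illustration, not the FLANN II architecture itself.

```python
# Minimal sketch of nearest neighbour recall: classify a query vector
# by the label of the closest stored exemplar (squared Euclidean distance).
def nearest_neighbour(exemplars, x):
    """exemplars: list of (vector, label) pairs; x: query vector."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(exemplars, key=lambda e: dist2(e[0], x))
    return label

stored = [([0.0, 0.0], "A"), ([1.0, 1.0], "B")]
print(nearest_neighbour(stored, [0.2, 0.1]))  # A
```

Because recall only requires storing the exemplars, "learning" completes in a single pass over the training data, which matches the one-epoch behaviour described in the abstract.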
A further study was conducted on the FLANN models in the parallel processing
environment. The parallel investigations led to the development of a new paradigm:
Parallel Distributed Neural Networks (PDNNs), which allows several neural networks to
operate concurrently to solve a single classification problem. This paradigm is powerful
because it is able to reduce the overall memory requirements for some classification
problems.
Description: A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy of Loughborough University, 1994.