Introduction - Research
- What is parallel processing?
Parallel processing is running a (sequential) program on multiple
processors to get the job done in less time. To this end we parallelise the
program by assigning parts of the job to the different processors. The
aim is a performance gain, which we call the speedup. The
problem is that parallelisation is problem-dependent and cannot yet be
fully automated; furthermore, speedup is not guaranteed.
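To make the notion of speedup concrete, here is a small sketch in Python. The function names and the timing values are illustrative assumptions, not from the text; the Amdahl's-law bound is a classic (not named above) way to see why speedup is not guaranteed to scale with the processor count.

```python
# Hypothetical illustration of speedup S(p) = T(1) / T(p).
# All timing values below are made-up examples, not measurements.

def speedup(t_sequential, t_parallel):
    """Speedup of a parallel run relative to the sequential run."""
    return t_sequential / t_parallel

def amdahl_bound(serial_fraction, p):
    """Amdahl's law: upper bound on the speedup with p processors
    when a fraction of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# A job taking 100 s sequentially and 30 s on 4 processors:
print(speedup(100.0, 30.0))   # ~3.33, not the ideal 4
# With 10% inherently serial work, 4 processors can give at most:
print(amdahl_bound(0.1, 4))   # ~3.08
```

This is one reason parallelisation is problem-dependent: the inherently serial fraction differs per problem and caps the attainable gain.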
- Parallel Processing: scalable performance for free (by
"A few years ago more or less everybody was sure about one thing: parallel
processing was the ONLY solution to our more and more demanding
computational problems. Today, enthusiasm has been replaced by scepticism, as both the
development cost of parallel software and the maintenance cost of a parallel
machine have proven to be out of reach for most. Networks of powerful
workstations, readily available (and maintained!) in most companies,
running public domain message-passing systems like PVM or MPI, can deliver
cheap and yet truly scalable performance directly at your desktop. Now
if we would just get this software development problem solved."
- Some Examples
- Overall scheme of Parallelisation.
- Our "Parallel System" course provides a good introduction.
Our research focusses on the problems that have to be solved to get an
efficient parallelisation:
- Distributed systems (not shared-address space architectures):
clusters running under LINUX and the MPI message-passing communication library.
- General-purpose parallelisation, as opposed to instantiated, application-specific solutions.
- Performance analysis: we are developing a standard analysis and a tool
to do the analysis automatically.
- Visualisation of the algorithm, the parallelisation, and the system.
A generic solution for automated parallel processing needs intelligence?
See also the project topics on parallel processing.
Parallel Computing Conferences
CCGRID-2002, 2nd IEEE Conference on Cluster Computing and the Grid