Thomas Wolf

Department of Mathematics, Brock University

500 Glenridge Avenue, St. Catharines, Ontario, Canada L2S 3A1

email: twolf@brocku.ca

Winfried Neun

Konrad Zuse Institute Berlin

Takustrasse 7

D-14195 Berlin-Dahlem, Germany

email: neun@zib.de

The program Crack
is a computer algebra package written in
REDUCE for the solution
of over-determined systems of algebraic,
ordinary or partial differential equations with at most polynomial
non-linearity.
Crack is available as part
of version 3.8 of the REDUCE system, dated April 2004.
The development version can be downloaded from
`http://lie.math.brocku.ca/crack/src`, and an interactive demo
can be run at `http://lie.math.brocku.ca/crack/demo`.

The program with all its modules and options is too large to demonstrate in full. Instead we will concentrate on features that are unique to this software.

The development of the program was driven by large applications, for example, linear systems of partial differential equations in 30 independent variables, or bi-linear algebraic systems with hundreds of unknowns and more than a thousand conditions. The key to solving such problems was to analyse how one would have made progress in a hand computation, to formalize the heuristics one would have applied, and to program and fine-tune them.

To give an example, a standard Gröbner basis computation is strongly guided by a total ordering of monomials. Any such algorithm misses linear combinations of two equations (analogous to computing an S-polynomial) which aim not at eliminating leading terms but at eliminating as many terms as possible, regardless of whether the leading terms get worse with respect to the ordering. By being flexible with the ordering until a sparse system has been achieved, one has a good chance of avoiding expression swell when continuing with traditional Gröbner basis computations.
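The idea can be illustrated with a small sketch (hypothetical Python, not part of Crack): representing linear equations as maps from terms to rational coefficients, one can pick the combination of two equations that leaves the fewest terms overall, rather than the one that cancels the leading term of a fixed ordering.

```python
from fractions import Fraction

def combine(e1, e2, factor):
    """Return e1 - factor*e2 as a term -> coefficient map, dropping zeros."""
    result = dict(e1)
    for term, c in e2.items():
        result[term] = result.get(term, Fraction(0)) - factor * c
        if result[term] == 0:
            del result[term]
    return result

def shortest_combination(e1, e2):
    """Among the combinations e1 - f*e2 that cancel some common term,
    return the one with the fewest remaining terms (length reduction),
    ignoring any term ordering."""
    best = e1
    for term in set(e1) & set(e2):
        cand = combine(e1, e2, e1[term] / e2[term])
        if len(cand) < len(best):
            best = cand
    return best

# terms are written as strings; coefficients are exact rationals
e1 = {'x*y': Fraction(1), 'x': Fraction(2), 'y': Fraction(1)}
e2 = {'x*y': Fraction(2), 'x': Fraction(4), '1': Fraction(1)}
# cancelling x*y here happens to cancel x as well, leaving two terms
shortest = shortest_combination(e1, e2)
```

In this toy example the best combination cancels two terms at once, which a fixed-ordering S-polynomial step would not be rewarded for finding.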

The demo is intended to give:

- a short overview of the Crack
interface, concentrating on visualization aids for inspecting
large systems, including
- the occurrence of all derivatives of selected functions in any equation,
- a statistics summary of the equations of the system,
- a matrix display of occurrences of unknowns in all equations,
- a count of the total number of appearances of all unknowns,
- the determination of subsystems that are not under-determined,
- a listing of all sub- and sub-sub-... cases with their assumptions, number of steps and number of solutions,
- graphical displays of size-related measures of the computation done so far.

- a discussion of possibilities to trade, interactively or
automatically, the speed of the solution process against the
avoidance of expression swell:
- length reductions of equations irrespective of orderings,
- standard versus only length-reducing Gröbner basis computation steps,
- substitutions in the complete system versus substitutions only in shorter equations,
- general substitutions versus growth-bounded substitutions,
- Gröbner basis steps versus case splittings (induced by factorizations, substitutions with potentially vanishing coefficients or ad hoc case distinctions).

- a demonstration of the ability to collect syzygies, which arise as a by-product of computing a differential Gröbner basis, and to apply them to integrate linear PDEs,
- a discussion of the treatment of inequalities: their usage, their active collection and derivation, and their constant update in the ongoing reduction process based on newly derived equations,
- a demonstration of the capability, added by Winfried Neun, to run in a truly parallel mode on a Beowulf cluster, recently also ported to 64-bit AMD processors (provided Internet access is available),
- a demonstration of safety measures, supporting extended automatic and interactive computations over long periods,
- a discussion and demonstration of post-computation procedures, especially the possibility to merge solutions of parametric linear algebraic systems and to mass-produce web-pages for sequences of solutions.
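One of the inspection aids listed above, the occurrence matrix, can be illustrated with a hypothetical Python sketch (the names are invented and this is not Crack's actual interface): each equation is reduced to the set of unknowns it contains, and a matrix of equations versus unknowns is rendered.

```python
def occurrence_matrix(equations, unknowns):
    """Render a matrix with one row per equation and one column per
    unknown; '*' marks that the unknown occurs in the equation,
    '.' that it does not.  Equations are given as sets of unknowns."""
    lines = ['     ' + ' '.join(f'{u:>2}' for u in unknowns)]
    for i, eq in enumerate(equations, start=1):
        row = ' '.join(f'{"*" if u in eq else ".":>2}' for u in unknowns)
        lines.append(f'eq{i:>2}: {row}')
    return '\n'.join(lines)

# a toy system: which of the unknown functions f, g, h occur where
eqs = [{'f', 'g'}, {'g'}, {'f', 'h'}]
print(occurrence_matrix(eqs, ['f', 'g', 'h']))
```

Such a display makes block structure and decoupled subsystems visible at a glance, which is what motivates this kind of aid for large systems.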
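The trade-off between substitutions and expression swell mentioned in the list above can be sketched as follows (hypothetical Python with an invented, highly simplified representation): a substitution is applied to an equation only if the result stays within a fixed growth bound, otherwise it is refused.

```python
def substitute_bounded(eq_terms, unknown, replacement_terms, growth_factor=2):
    """Substitute `unknown` by a sum of `replacement_terms` in an
    equation given as a list of terms, but only if the resulting
    length stays within `growth_factor` times the original length;
    otherwise refuse and return the equation unchanged."""
    n_occ = eq_terms.count(unknown)
    new_len = len(eq_terms) - n_occ + n_occ * len(replacement_terms)
    if new_len > growth_factor * len(eq_terms):
        return eq_terms, False          # refused: too much growth
    result = []
    for t in eq_terms:
        result.extend(replacement_terms if t == unknown else [t])
    return result, True

# a short replacement is accepted, a long one is refused
substitute_bounded(['x', 'y'], 'x', ['a', 'b', 'c'])
substitute_bounded(['x', 'y'], 'x', ['a', 'b', 'c', 'd', 'e'])
```

A refused substitution can still be performed later, once other reductions have shortened either the equation or the replacement.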
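The use of collected inequalities in reductions can also be sketched (hypothetical Python, invented representation): a store of expressions known to be nonzero allows a factorized equation to be simplified by dropping its nonzero factors.

```python
def reduce_product_equation(factors, nonzero):
    """Given an equation  f1*f2*...*fn = 0  (as a list of factor
    names) and a set of factors known to be nonzero, drop the
    nonzero factors and return those whose product must vanish."""
    remaining = [f for f in factors if f not in nonzero]
    if not remaining:
        raise ValueError('contradiction: product of nonzero factors is zero')
    return remaining

# with a != 0 recorded, the equation a*b = 0 reduces to b = 0
assert reduce_product_equation(['a', 'b'], {'a'}) == ['b']
```

This is why keeping the inequality store up to date pays off: every newly derived inequality can immediately shorten equations or rule out case splittings.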

If time permits, some details of the parallel implementation of REDUCE on cluster systems can be discussed, such as:

- This implementation uses the PVM (Parallel Virtual Machine) system from Oak Ridge National Laboratory for the communication between the nodes of the cluster and is therefore (almost) portable to different architectures. A typical problem is the serialization of the LISP data structures, which are spread out in memory at random.
- A remarkable feature is the availability of the communication controls at user level, i.e. the parallel program can be written in the same language as an application program such as Crack.
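The serialization problem mentioned above can be illustrated independently of PVM (a hypothetical Python sketch, not the actual REDUCE/Lisp implementation): a nested Lisp-style structure must be flattened into a linear message before transmission and rebuilt on the receiving node.

```python
def serialize(sexpr):
    """Flatten a nested tuple structure (a Lisp-style s-expression)
    into a single text line for transmission."""
    if isinstance(sexpr, tuple):
        return '(' + ' '.join(serialize(x) for x in sexpr) + ')'
    return str(sexpr)

def deserialize(text):
    """Rebuild the nested structure from its flat text form."""
    tokens = text.replace('(', ' ( ').replace(')', ' ) ').split()
    def parse(i):
        if tokens[i] == '(':
            items, i = [], i + 1
            while tokens[i] != ')':
                item, i = parse(i)
                items.append(item)
            return tuple(items), i + 1
        return tokens[i], i + 1
    return parse(0)[0]

# round trip: the pointer-linked structure survives as a flat message
msg = ('plus', ('times', 'x', 'y'), '1')
assert deserialize(serialize(msg)) == msg
```

The real difficulty hinted at in the text is that the in-memory representation uses pointers scattered across the heap, so such a linearization step is unavoidable before any message can be sent.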