
"Dataflow" is the concept that multiple variables can remain connected to one another (so that data flows between them). PDL allows you to keep multiple variables that refer to the same underlying data. For example, if you extract a subfield of a large data array you can pass it to subroutines and other expressions just like any other PDL, but changes will still propagate back to the large array unless you indicate otherwise.

In general, PDL's element-selection operators (such as slicing and indexing) maintain dataflow connections unless they are explicitly severed. To support dataflow, PDL has two different kinds of assignment: the global assignment operator = and the computed assignment operator .=.

Global assignment is used to create new PDLs, while computed assignment inserts values into existing PDLs. Many other languages, such as FORTRAN and IDL, don't maintain dataflow for array slices except in the special case where the slice appears on the left-hand side of an assignment; in that case, those languages silently assume computed assignment rather than global assignment. That shortcut sweeps the difference between the two kinds of assignment under the rug, and it produces special cases that do not work correctly. For example, array subroutine parameters in IDL are passed by reference and can therefore be used to change the original array, but array slices are copied before being passed, so the original array does not change. C sidesteps the issue by not (directly) supporting array slices. In PDL, by contrast, slices stay connected to their parents, so you can keep multiple representations of your data and work on whichever representation is most convenient.

For example:

  pdl> $a = xvals(5);
  pdl> $b = $a(2); # global
  pdl> $b .= 100;  # computed - flows back to $a
  pdl> print $a;
  [0 1 100 3 4]

Here, $a and $b remain connected by the slicing/indexing operation, so the change in $b flows back to $a. Most indexing operations maintain dataflow.
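
The same dataflow applies to larger slices. Here is a small sketch (with hypothetical variables $im and $row, output spacing approximate): a row extracted from a 2-D PDL stays connected to its parent, so a computed assignment into the row shows up in the original array.

  pdl> $im = sequence(4,4);  # a 4x4 array holding 0..15
  pdl> $row = $im(:,1);      # row 1 of $im, still connected
  pdl> $row .= 0;            # computed assignment flows back to $im
  pdl> print $im;
  [
   [ 0  1  2  3]
   [ 0  0  0  0]
   [ 8  9 10 11]
   [12 13 14 15]
  ]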

At times, you want to ensure that your variables remain separate or to make a physical copy of your data.

The copy operator makes a physical copy of its argument and returns it. In general, if you want a real copy of something, just ask for it:

  pdl> $a = xvals(5);
  pdl> $b = $a(2)->copy;
  pdl> $b .= 100;
  pdl> print $a;
  [0 1 2 3 4]

or, even more straightforwardly,

  pdl> $a = xvals(5);
  pdl> $c = $a;       # $c and $a remain connected
  pdl> $b = $a->copy; # $b is a (separate) copy of $a

The sever operator is slightly more subtle. It acts in place on its argument, cutting most kinds of dataflow connection. It cannot disconnect two Perl variables that refer to the same PDL (which is what a plain Perl = assignment produces); it can only sever the dataflow connection between related but distinct PDLs. This wart is present in current versions, which rely on the Perl 5 engine, because it is not possible to overload the built-in = operator in Perl 5.

  pdl> $a = xvals(5);
  pdl> $b = $a(2:3)->sever; # $b is a slice of $a: gets separated
  pdl> $b += 100; print $a; # changing $b doesn't affect $a.
  [0 1 2 3 4]
  pdl> $c = $a->sever;      # $c is the same PDL as $a: sever can't separate them
  pdl> $c += 100; print $a; # changing $c affects $a.
  [100 101 102 103 104]

Threading

Array languages like PDL perform basic operations by looping over an entire array, applying a simple operation to each element, row, or column of the array. This process is called threading. Threading is accomplished by the threading engine, which matches up the sizes of different variables and ensures that they "fit". The threading engine is based on constructs from linear algebra (but is slightly more forgiving than most math professors).

Most operations act on the first few dimensions of a PDL. These first dimensions are called the active dimensions, and any dimensions after them are called thread dimensions. The active dimensions must match the requirements of the operator, and the thread dimensions are automatically looped over by the threading engine. Each operator determines how many active dimensions it uses; the remaining dimensions are threaded over. A given operator may have 0 active dimensions (e.g. addition, +), 1 active dimension (e.g. reduce operators like sumover and vector operators like cross), 2 active dimensions (e.g. matrix multiplication), or even more.
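
For instance, sumover has a single active dim. A quick sketch: applied to a PDL with dims (3,4), it collapses the 0 dim (size 3) while the threading engine loops over the remaining dim (size 4), so you get four sums back.

  pdl> $a = sequence(3,4);  # dims (3,4); dim 0 is the active dim for sumover
  pdl> print $a->sumover;   # sums along dim 0, threads over dim 1
  [3 12 21 30]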

You can rearrange the way that an operator acts on a PDL by rearranging the dim list of that PDL, bringing dims down into the active position(s) for an operation or pushing them up to be threaded over. These rearrangements are a generalization of matrix transposition, though they are generally quite fast because they don't actually transpose the data in memory; they only rearrange PDL's internal metadata that explains how the block of memory is to be used.
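
As a small sketch of that idea (using transpose, which is described below): rearranging the dim list changes which dim lands in sumover's active slot, without copying any data.

  pdl> $a = sequence(3,4);
  pdl> print $a->sumover;             # collapses dim 0 (size 3)
  [3 12 21 30]
  pdl> print $a->transpose->sumover;  # collapses what was dim 1 (size 4)
  [18 22 26]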

Threading rules

PDL operators that act on two or more operands require the thread dimensions of each operand to match up. The threading engine follows these rules for each dim (starting with the 0 dim and iterating through to the highest dim in either operand):

  • If both operands have the dim and it has a size greater than 1 in each operand, then the size must be the same for both!
      • print pdl(1,2,3) * pdl(3,4) doesn't work, because dim 0 of the left operand has size 3 and dim 0 of the right operand has size 2.
      • print pdl(1,2,3) * pdl(4,5,6) prints the string [4 10 18].
  • If both operands have the dim and it has size 1 in at least one operand (it is a trivial dim), then the dim is "extended" as a dummy dimension. This is a generalization of scalar multiplication in linear algebra.
      • print pdl(1,2,3) * pdl(2) prints the string [2 4 6].
  • If a dimension exists in one operand and not in the other, it is treated as a virtual trivial dim (see the combined example after this list).
      • print pdl([1,2],[3,4]) * pdl(3) prints the string [ [3 6] [9 12] ].
  • If one operand is a PDL and the other is a Perl scalar, the scalar is PDL-ified before the operation.
      • print pdl([1,2],[3,4]) * 3 prints the string [ [3 6] [9 12] ].
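
The trivial-dim rules combine to give outer-product style threading. Here is a small sketch (with hypothetical variables $col and $row, output spacing approximate): multiplying a PDL with dims (1,3) by a PDL with dims (2) extends the size-1 dim 0 of the first operand and treats the missing dim 1 of the second operand as a virtual trivial dim, yielding a result with dims (2,3).

  pdl> $col = pdl([[1],[2],[3]]);  # dims (1,3)
  pdl> $row = pdl(10,20);          # dims (2)
  pdl> print $col * $row;          # dims (2,3)
  [
   [10 20]
   [20 40]
   [30 60]
  ]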

Controlling threading and dimension order: xchg, mv, reorder, flat, clump, and reshape

Because rearranging the dim list of a PDL (i.e. transposing it) is the way to control the threading engine, PDL has many operators that are devoted to rearranging dim lists. Here are several of them:

transpose - matrix transposition

$at = $a->transpose will yield the transpose of a matrix $a (that is, with the 0 and 1 dims exchanged); you can use $a->inplace->transpose to change the variable itself. Of course, if $a has more than two dims, it is treated as a collection of matrices (the other dims are threaded over).
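
A quick sketch of what transpose does to a small matrix (hypothetical variable $a, output spacing approximate):

  pdl> $a = sequence(3,2);   # dims (3,2)
  pdl> print $a;
  [
   [0 1 2]
   [3 4 5]
  ]
  pdl> print $a->transpose;  # dims (2,3): the 0 and 1 dims exchanged
  [
   [0 3]
   [1 4]
   [2 5]
  ]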

xchg - generalized transposition

You can generalize transpose to any two dims with xchg - just give the index numbers and those two indices get exchanged (transposed): $at = $a->xchg(0,1) is the same as using transpose, but you can also say (for example) $ax = $a->xchg(0,3).
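
For example, here is a sketch that uses the shape method to show the dim list before and after exchanging dims 0 and 2:

  pdl> $a = sequence(2,3,4);        # dims (2,3,4)
  pdl> print $a->shape;
  [2 3 4]
  pdl> print $a->xchg(0,2)->shape;  # dims 0 and 2 exchanged
  [4 3 2]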

mv - dim reshuffling

Using mv shifts a dim from its original location to a new location; all the other dims stay in the same relative order but get shifted to make room and/or fill up the old slot. You can