The current library implementation uses a distributed memory MIMD model, regardless of the true underlying architecture. To keep processor utilization high and communication low, the object database is replicated among the processing elements. This approach works well for most scenes, since hundreds of thousands of simple geometric primitives can be stored in the available node memory of most parallel computers. Although replication performs well for simple objects, it is a poor strategy for volumetric data and image map data. Work is in progress to distribute volume data and to use ray passing when a ray pierces a volume with non-local data. Parallel CFD simulations already distribute grid data among processors, so it is logical to render that data in place. The key factor in this strategy is to replicate or cache objects with small memory footprints, and to leave objects with large memory footprints in place during rendering.
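The replicate-versus-leave-in-place decision described above can be sketched as a simple placement rule. The names and the threshold below are illustrative assumptions, not the library's actual API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical threshold: objects smaller than this are replicated on
 * every processing element; larger objects (volume data, image maps)
 * remain on their owning node and are reached by ray passing. */
#define REPLICATE_LIMIT ((size_t)(64 * 1024)) /* bytes */

typedef enum { STORE_REPLICATED, STORE_DISTRIBUTED } storage_t;

/* Decide where an object with the given memory footprint lives. */
storage_t placement(size_t bytes)
{
    return (bytes < REPLICATE_LIMIT) ? STORE_REPLICATED : STORE_DISTRIBUTED;
}
```

Under this rule, a small geometric primitive (a few hundred bytes) is replicated on every node, while a multi-megabyte volume dataset stays distributed, matching the strategy of rendering large CFD grid data in place.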
To exploit the data parallel nature of raytracing, the library assigns subsections of the image frame to separate processors. The current implementation uses static load balancing and scattered decomposition to partition the image plane. Since image blocks are computed in distributed memory, the final image must be assembled by writing the blocks into an output file. The library uses parallel file I/O to write pixel blocks to disk while computation proceeds. Since not all architectures support parallel I/O, message passing is used on networks of workstations and on the IBM SP-2. Future versions of the software will allow image data to be collected in memory on one node for interactive visualization or retransmission. An initial implementation of a runtime output window is already working on sequential machines.
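A scattered decomposition with static load balancing can be illustrated by a cyclic assignment of scanlines to processors, so that expensive regions of the image are spread across all nodes. This is a minimal sketch under that assumption; the function names are hypothetical, not the library's interface:

```c
#include <assert.h>

/* Scattered (cyclic) decomposition: scanline y belongs to processor
 * y mod nprocs.  The assignment is fixed before rendering begins,
 * i.e. static load balancing. */
int owner_of_row(int y, int nprocs)
{
    return y % nprocs;
}

/* Each node renders only the rows it owns; after each row is traced,
 * the pixel block would be written out via parallel file I/O, or sent
 * by message passing on machines without it. */
void render_my_rows(int myrank, int nprocs, int height)
{
    for (int y = myrank; y < height; y += nprocs) {
        /* trace all pixels in scanline y, then emit the block */
    }
}
```

Scattering the rows, rather than giving each processor one contiguous band, keeps utilization high when scene complexity is concentrated in a few regions of the image plane.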
All of the initial development and testing of the rendering library was conducted on a 32-node Intel iPSC/860 using NX. The current version of the rendering library uses MPI. Since the addition of MPI support, the library has been successfully tested on the iPSC/860, the Paragon, the IBM SP-2, the SGI Challenge, and on networks of workstations. The current code includes makefile configurations for each of the platforms above.