How do I prevent erratic timings when capturing large sequences of images in Openlab?

Technical Note: 22
Reads: 6087
Creation Date: 06/12/1999
Modification Date: 12/02/2004

1. Other applications and the operating system taking time from Openlab

Under the Mac OS cooperative multitasking model, Openlab must periodically yield processor time to other applications and to the system. Each of these pauses probably peaks at less than 500ms - you do not notice them when working with Openlab interactively, but when an Openlab automation is running they show up as major "blips" in the capture sequence. The Finder is a particular offender: every few seconds it checks whether new CDs or floppy disks have been mounted and whether any open windows have changed.

To get around this, we have introduced the concept of a "Critical Section" within the Automator. While a Critical Section is running, the Automator does not give time to the rest of Openlab or to any other application, which results in much smoother, more consistent performance. You can start and stop Critical Sections at any point in an automation using the "Begin Critical Section" and "End Critical Section" tasks.
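Openlab's internal implementation is not exposed to users, but the underlying idea can be sketched in C. Under cooperative multitasking an application only gives time away when it calls WaitNextEvent, so withholding that call keeps the processor for the capture loop. In the sketch below, everything except WaitNextEvent itself (gInCriticalSection, CaptureOneFrame, CaptureLoop) is a hypothetical name used for illustration only:

    #include <Events.h>

    static Boolean gInCriticalSection = false;   /* hypothetical flag */

    extern void CaptureOneFrame(void);           /* hypothetical capture routine */

    void BeginCriticalSection(void) { gInCriticalSection = true;  }
    void EndCriticalSection(void)   { gInCriticalSection = false; }

    void CaptureLoop(long frameCount)
    {
        EventRecord event;
        long        i;

        for (i = 0; i < frameCount; i++) {
            CaptureOneFrame();

            /* Yielding here is what lets the Finder and other
               applications run; inside a Critical Section we
               simply never yield. */
            if (!gInCriticalSection)
                (void) WaitNextEvent(everyEvent, &event, 0, NULL);
        }
    }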

Note that it is still not possible for us to prevent the network and other interrupt-driven activities from taking time. For best performance, you should turn AppleTalk off when using the Automator for high-speed capture.

2. Memory management in low-memory situations

When Openlab needs memory, it asks the Mac OS to allocate it from a store called the application heap - you set the size of this heap in the Finder's "Get Info" dialogue. If the Mac OS does not have a contiguous free block of sufficient size available (because memory is low or fragmented), it takes the following steps:

(a) Purges memory that can be purged (this has minimal effect in Openlab)

If there is still not enough memory...
(b) Compacts memory to move all the free blocks to the end and make one bigger free block

If there is still not enough memory...
(c) Calls Openlab's cache manager to free some images, releasing enough memory for the request (see the sketch below).
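On the Mac OS of this era, step (c) works through the application's "grow-zone" function, a callback installed with SetGrowZone that the Memory Manager invokes as a last resort. A minimal sketch of how such a callback might drive a cache manager follows; FreeOldestCachedImage is a hypothetical helper, not an actual Openlab routine:

    #include <MacMemory.h>

    extern long FreeOldestCachedImage(void);   /* hypothetical: writes one image
                                                  to disk, returns bytes freed */

    /* Called by the Memory Manager only after purging (a) and
       compaction (b) have failed to yield cbNeeded contiguous bytes.
       A non-zero return value means "I freed something, try again";
       zero means the allocation request will fail. (A real grow zone
       must also avoid touching the handle returned by GZSaveHnd().) */
    static pascal long CacheGrowZone(Size cbNeeded)
    {
        long bytesFreed = 0;

        while (bytesFreed < cbNeeded) {
            long freed = FreeOldestCachedImage();
            if (freed == 0)
                break;                         /* cache is empty */
            bytesFreed += freed;
        }
        return bytesFreed;
    }

    void InstallCacheGrowZone(void)            /* call once at startup */
    {
        SetGrowZone(NewGrowZoneUPP(CacheGrowZone));
    }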

The problem here is step (b): with small heap sizes (e.g. 20Mb), heap compaction is very fast and you don't notice any delay. With large heaps (e.g. 80Mb and up), heap compaction can take a considerable amount of time (up to 1 second on a G3/300 with 80Mb allocated to Openlab).

Unfortunately, there is no way to prevent heap compaction - the system performs it automatically without notifying the application. Openlab only discovers that the Memory Manager has compacted the heap after the fact, by which time it is too late.

Openlab 2.0 improves the situation by trying to prevent the Memory Manager from being forced to compact the heap. It does this by pre-emptively putting images into the cache just before a new image is created, which generally prevents the Mac OS from running short of memory. The result is improved acquisition speed and consistency.
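A sketch of this pre-emptive strategy, again using hypothetical helpers (NextImageSize, FreeOldestCachedImage) and an arbitrary safety margin. The Memory Manager offers no query for "largest block without compaction", so this cannot guarantee that compaction never happens, but keeping generous slack makes a forced purge/compact/grow-zone cycle unlikely:

    #include <MacMemory.h>

    extern Size NextImageSize(void);           /* hypothetical */
    extern long FreeOldestCachedImage(void);   /* hypothetical, as above */

    #define kHeapSlack (1024L * 1024L)         /* arbitrary 1Mb safety margin */

    /* Before creating a new image, push old images out to the cache
       until the heap has comfortable headroom. MaxBlock() reports the
       largest contiguous block the heap could supply (it assumes a
       compaction but does not perform one). */
    void EnsureRoomForNextImage(void)
    {
        Size needed = NextImageSize();

        while (MaxBlock() < needed + kHeapSlack) {
            if (FreeOldestCachedImage() == 0)
                break;                         /* nothing left to free */
        }
    }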

There is another active step that users can take to improve performance in low-memory situations. In Openlab 2.0.1, we introduced an Automator task called "Compact Memory". This task writes all the images currently held in RAM out to disk and compacts the remaining memory, leaving one large clear block for new images. Rather than incurring a small delay before every image as the cache manager makes space for the new data, you can use this task to clear memory before a capture sequence - the time penalty is then paid all in one go rather than in lots of little pieces, and at a point that is convenient within the experiment, for instance while the fluorescence shutter is closed.
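For example, an automation might be ordered along these lines (the shutter and capture task names are illustrative; only "Compact Memory", "Begin Critical Section" and "End Critical Section" are actual Automator tasks):

    Close Fluorescence Shutter
    Compact Memory               (pay the whole time penalty here)
    Open Fluorescence Shutter
    Begin Critical Section
    Capture Timelapse Sequence
    End Critical Section
    Close Fluorescence Shutter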

It is also possible to use the "Compact Memory" task to achieve very high capture rates in short bursts: capture a sequence until memory is almost full, compact the memory, capture the next set of images, compact, and so on. Each burst then goes directly to RAM, giving the best possible capture speed.
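Schematically, a burst-mode automation might look like this (the loop and capture steps are illustrative):

    repeat
        Begin Critical Section
        Capture a burst of images to RAM     (fastest possible: no disk access)
        End Critical Section
        Compact Memory                       (flush the burst out to disk)
    until the experiment is complete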