Dataflow Computing for Biophysics and High Energy Physics

We use FPGAs to implement the dataflow graphs of algorithms. Dataflow computing runs every step of an algorithm in parallel, resembling a system of pipelines with synchronous operations. It maps well to FPGA hardware and can speed up HPC algorithms by one to two orders of magnitude compared to general-purpose hardware.
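As a rough software analogy (a hypothetical sketch, not the Maxeler toolchain), the dataflow style can be pictured with chained generators: each stage corresponds to one pipeline operator, and data streams through all stages value by value, much like values moving through synchronous FPGA pipeline stages on every clock tick.

```python
# Hypothetical illustration of pipelined dataflow: each stage transforms a
# stream of values, analogous to one synchronous pipeline stage in hardware.

def stage_scale(stream, factor):
    """First pipeline stage: multiply each incoming value."""
    for x in stream:
        yield x * factor

def stage_offset(stream, offset):
    """Second pipeline stage: add a constant to each value."""
    for x in stream:
        yield x + offset

# Compose the stages into a small dataflow graph; values flow through the
# whole pipeline one at a time instead of being processed in batches.
source = range(5)
pipeline = stage_offset(stage_scale(source, 2), 1)
print(list(pipeline))  # → [1, 3, 5, 7, 9]
```

On an FPGA, all stages of such a graph operate concurrently on different data items, which is where the speed-up over sequential execution comes from.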

Projects

Localization Microscopy

Localization microscopy enhances the resolution of fluorescence light microscopy by about an order of magnitude. The fluorophores are switched between two different spectral states, e.g. they blink between bright and dark. These otherwise inseparable signals can then be isolated, and the centre of each signal is determined with increased accuracy. Finally, the obtained positions are plotted in a new image with improved resolution. For a five-minute recording, this process can easily take hours on standard hardware.

We use a pipelined description of the algorithms that find and fit the signals of the optical markers. Mapping the algorithm to FPGA hardware brought large speed improvements and made the computational analysis faster than the recording itself. Rewriting the algorithm yielded an acceleration factor of 100, and using the tools and FPGA card from Maxeler Technologies as an application accelerator contributed a further factor of 225.
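The find-and-fit step can be sketched in a few lines (a simplified, hypothetical example: the real pipeline uses more elaborate fitting): locate local maxima above a threshold and estimate each signal's centre with sub-pixel accuracy via an intensity-weighted centroid.

```python
# Simplified sketch of "find and fit" in localization microscopy:
# find bright pixels, then compute a sub-pixel centre for each spot.

def find_and_fit(image, threshold):
    """Return sub-pixel (row, col) centres of bright spots."""
    rows, cols = len(image), len(image[0])
    centres = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if image[r][c] < threshold:
                continue
            # Intensity-weighted centroid over the 3x3 neighbourhood
            total = wr = wc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    v = image[r + dr][c + dc]
                    total += v
                    wr += (r + dr) * v
                    wc += (c + dc) * v
            centres.append((wr / total, wc / total))
    return centres

frame = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 1, 0],
    [0, 2, 9, 2, 0],
    [0, 1, 2, 1, 0],
    [0, 0, 0, 0, 0],
]
print(find_and_fit(frame, 5))  # one symmetric spot → centre (2.0, 2.0)
```

Because every pixel neighbourhood can be processed independently, this kind of loop maps naturally onto a deep hardware pipeline.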

Publications

  • Frederik Grüll:
    Acceleration of biomedical image processing and reconstruction with FPGAs
    Doctoral Thesis, Univ.-Bibliothek Frankfurt am Main, 2014
  • Frederik Grüll, Udo Kebschull:
    Acceleration of biomedical image processing and reconstruction with FPGAs
    International Conference on Field Programmable Logic and Applications 2014, Munich, Germany, 2014
  • Heiko Engel, Frederik Grüll, Udo Kebschull:
    High-Level Data Flow Description of FPGA Firmware Components for Online Data Preprocessing
    GSI Scientific Report 2013, page 292, Darmstadt, Germany, 2014
  • Frederik Grüll, Michael Kunz, Michael Hausmann, Udo Kebschull:
    An Implementation of 3D Electron Tomography on FPGAs
    Proceedings of ReConFig 2012, Cancun, Mexico, 2012
  • Rainer Kaufmann, Jörg Piontek, Frederik Grüll, Manfred Kirchgessner, Jan Rossa, Hartwig Wolburg, Ingolf E. Blasig, Christoph Cremer:
    Visualization and Quantitative Analysis of Reconstituted Tight Junctions Using Localization Microscopy
    PLoS ONE, vol. 7, Public Library of Science, 2012
  • Frederik Grüll, Manfred Kirchgessner, Rainer Kaufmann, Michael Hausmann, Udo Kebschull:
    Accelerating Image Analysis For Localization Microscopy With FPGAs
    International Conference on Field Programmable Logic and Applications 2011, Chania, Greece, 2011

These publications were supported by Maxeler Technologies.

Electron Tomography

3D tomography is a technique in which the density distribution of a volume is reconstructed by taking images of the sample from different angles and computing the volume back from the obtained 2D projections. Electron tomography uses electron beams inside an electron microscope to image the sample and can therefore achieve higher resolution than light microscopy.

Our group is currently analysing to what extent image reconstruction can benefit from dataflow computing on FPGAs. We are implementing SART (simultaneous algebraic reconstruction technique) with both forward and back projection on FPGAs as an application accelerator.
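The core of a SART iteration can be illustrated with a toy pure-Python example (hypothetical and heavily simplified; the FPGA implementation works on full 3D volumes). Here `A[i][j]` models the weight of ray `i` in voxel `j`, `b[i]` is the measured projection, forward projection computes the expected measurements from the current volume, and the normalised residual is back-projected onto the voxels.

```python
# Toy SART update on a flattened volume: forward-project, compute the
# normalised residual per ray, then back-project it onto the voxels.

def sart_iteration(A, b, x, relax=1.0):
    n_rays, n_vox = len(A), len(x)
    # Forward projection: simulate the measurements from the current volume
    residual = []
    for i in range(n_rays):
        proj = sum(A[i][j] * x[j] for j in range(n_vox))
        row_sum = sum(A[i]) or 1.0
        residual.append((b[i] - proj) / row_sum)
    # Back projection: distribute each ray's residual over its voxels
    x_new = list(x)
    for j in range(n_vox):
        col_sum = sum(A[i][j] for i in range(n_rays)) or 1.0
        correction = sum(A[i][j] * residual[i] for i in range(n_rays))
        x_new[j] += relax * correction / col_sum
    return x_new

# Two voxels, two rays: ray 0 crosses both voxels, ray 1 only the second.
A = [[1.0, 1.0], [0.0, 1.0]]
b = [3.0, 2.0]          # consistent with the true volume [1, 2]
x = [0.0, 0.0]
for _ in range(50):
    x = sart_iteration(A, b, x)
print([round(v, 2) for v in x])  # → [1.0, 2.0]
```

Both the forward and the back projection are regular sum-of-products loops, which is precisely the structure that maps well onto FPGA pipelines.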

High Energy Physics

FPGAs have a long history in data processing for High Energy Physics, ranging from the handling of low-level protocols up to first event-building tasks. Every new FPGA generation comes with an increased device size, and as a consequence a larger number of more complex algorithms can be implemented in hardware. Until now, these algorithms have been described using low-level hardware description languages like VHDL or Verilog.

These languages have proven to be well suited for describing interface blocks such as PCIe, DRAM controllers or serial optical links. For dataflow-based processing algorithms, however, development is expensive. Complex pipeline architectures with hundreds of stages easily lead to code that is hard to read, and optimized processing modules with differing latencies cannot be integrated easily. Maintaining and modifying this kind of code is a complex task.

Using code-generation techniques from higher-level dataflow-based frameworks in High Energy Physics can dramatically reduce the effort of developing firmware while producing more efficient hardware.

Our aim is to investigate the benefits of using a higher-level framework for FPGA firmware and to compare the results with manually written hardware descriptions.

Acknowledgement

We would like to thank Maxeler Technologies for their generous hardware support as a member of the Maxeler University Program MAX-UP.

Machines in use:

  • MaxWorkstation, MAX2, Virtex-5 LX330T with 12 GB RAM on board, Intel Core i5 750 @ 2.67GHz
  • MaxWorkstation, MAX3, Virtex-6 SX475T with 24 GB RAM on board, Intel Core i7 870 @ 2.93GHz