
Frequently asked questions about how to use STIR.


What publication should I refer to for STIR?

All information on publications is in the "publications" section of the STIR web-site.

Projection data (aka sinograms) related FAQs

How to display projection data generated by STIR (or your scanner)?

STIR comes with very basic utilities called manip_projdata and display_projdata that can display sinograms or viewgrams, but these are really only useful for a quick check (or to see if STIR read your data as expected). Your best option is to use an external display program.

To do this, use the extract_segments utility. This will write each segment in the projection data to a different file as a 3D volume. Currently, this is written in a version of the Interfile format. See #Image related FAQs for how to display such a volume.

How are projection data organised?

Please check the STIR glossary, available via the STIR website for information.

What is the difference between default bin size and effective central bin size (e.g. in the .hs Interfile header)?

The "default bin size" is only used when doing arc-correction ("geometric correction" in GE language). This is normally only used when you use FBP (or when you use arc-correction with correct_projdata).

The "effective central bin size" is really only used by STIR for arc-corrected data. It is then the bin-size that will be used by the projectors, so should be the "true" bin size. (Of course, if you've asked STIR to do the arc-correction, it will normally be equal to the "default bin size", but you can ask STIR to use a different bin size for arc-correction).

For non-arccorrected data, the "effective central bin size" is usually a bit larger than the "default", but this is manufacturer dependent. In any case, for non-arccorrected data, its value is ignored by STIR.

Image related FAQs

How to display images generated by STIR?

STIR comes with a very basic utility called manip_image that displays image slices, but it is really only useful for a quick check (or to see if STIR read your image as expected). stir_write_pgm can be used to write a slice as a single PGM bitmap. However, normally your best option is to use an external display program.

By default, STIR uses a version of the Interfile format, although only a subset of keywords is implemented. (See the STIR web-site for more links on Interfile).

Amide and xmedcon read STIR .hv files without trouble as long as the data-offset is zero (e.g. files which are written by STIR). Other packages might ignore the scale factor (but STIR by default writes as floats with scale factor 1). And other packages might insist on using the official Interfile 3.3 standard. The .ahv files written by STIR are closer to that, but they have a tweak to let Analyze (from the Mayo) read them correctly (as Analyze misinterprets the z-spacing). Open one of the .ahv files in your text editor and read the comments.

STIR currently uses a home-grown way to specify the image origin. No other program supports this convention as far as we know (as Interfile currently does not have relevant keywords). STIR currently completely ignores patient orientation etc. So if you have an image of the same object written by a different program, and the display program tries to interpret coordinate systems, it's unlikely the 2 objects will be displayed in the same location.

How to display images generated by STIR using ImageJ?

If the file is a 3D image (no timing information), the "import raw" function of ImageJ (or any other application that can read binary data) will do the job. The proper size information can be found in the respective header file, and the precision should be set to "float - 32 bit" (do check the relevant Interfile keyword).
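As a quick sanity check before importing, you can verify that the size of the raw .v file matches the matrix sizes in the header. A minimal sketch in Python, assuming a 3D 32-bit float image with zero data offset (the matrix sizes below are hypothetical, taken from a matching .hv header):

```python
# Hypothetical matrix sizes from the .hv header keywords;
# each 32-bit float voxel takes 4 bytes.
nx, ny, nz = 128, 128, 47
expected_bytes = nx * ny * nz * 4
print(expected_bytes)
```

If the actual file size differs, the data type, matrix sizes, or data offset in the header do not match the raw file.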

How does the STIR coordinate system work (e.g. for generate_image)?

Details on orientation etc are given in the STIR Developer's guide. Here we attempt to give some more info on the origin and how this works for images. This is somewhat complicated because the relation with the scanner needs to be considered.

STIR coordinates are currently related to the scanner, i.e. not to the patient (as in DICOM). A peculiarity is that STIR coordinates are ordered (z,y,x) (with z along the scanner axis, y vertical and x horizontal).


Currently, STIR does not support rotated coordinate systems. Therefore, we only need to give the location of the origin. Inside STIR, this origin can be shifted around using the offset. However, we will first assume that you did not do this ("zero offset").

For an image with zero offset, the origin is assumed to coincide with the centre of the first plane.

Let's say you want to use generate_image to create a cylinder in the centre of the (3D) image (and we use zero offsets for the image and an odd number of pixels in x,y, see below). Then we need to compute the STIR coordinates of the centre. Given that (0,0,0) is at the centre of the first plane, the centre of the last plane is at ((num_planes-1)*z_voxel_size,0,0). Therefore, the middle of the image is at ((num_planes-1)*z_voxel_size/2,0,0).
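As a worked example of the formulas above (with purely illustrative numbers, not from any real scanner): for a 15-plane image with a 2 mm z-voxel size, the centre works out as follows:

```python
# Illustrative values; (0,0,0) is the centre of the first plane
num_planes = 15      # odd number of planes
z_voxel_size = 2.0   # mm
z_last_plane = (num_planes - 1) * z_voxel_size    # centre of the last plane
z_centre = (num_planes - 1) * z_voxel_size / 2    # middle of the image
print(z_last_plane, z_centre)
```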

There's a small complication. When you have an odd number of voxels in x (or y), the 0 coordinate is indeed in the centre of the image. But if you have an even number, it's actually half a pixel off. This convention is probably different from other programs, therefore

We recommend using an odd number of voxels in all 3 directions.

For an existing image, you can use the list_image_info utility to get some information on geometry.

When developing code in STIR, you want to use the functions DiscretisedDensity::get_physical_coordinates_for_indices() etc. to find the location of the centre of a particular voxel.

Projection data

This coordinate system is currently never exposed to the user, only to developers. Functions like ProjDataInfo::get_m() use a coordinate system where (0,0,0) is the centre of the (gantry of the) scanner. There is unfortunately also a set of obsolete functions (which will be removed) that use a coordinate system where (0,0,0) is the centre of the first ring of the scanner (e.g. find_cartesian_coordinates_of_detection).

For all "segments" (see the STIR glossary), the data are assumed to be centred to the scanner (taking their average ring difference into account).

Combining images and projection data

When you are forward projecting an image or reconstructing projection data, you need to know the relation between the 2 conventions. Unfortunately, STIR does not support different "bed positions" yet. For backwards compatibility, the following convention is used.

For images with zero offset, the middle of the scanner is assumed to coincide with the middle plane of the image.

So, for the generate_image example above, the cylinder would be located in the centre of the scanner.

WARNING: the combination of these conventions means that if you change the number of planes in the image, you also have to change the "origin" of the shape such that it would forward project into the same projection data.

You can see this clearly from the formulas used in the generate_image example above.

However, this convention is confusing and therefore might be changed in the future. Together with a current limitation of the STIR projectors, this leads us to the following:

We recommend that you use images which have 2*(num_rings-1) planes with z-voxel-size equal to ring_spacing/2.
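To make this recommendation concrete, a small sketch with hypothetical scanner numbers (num_rings and ring_spacing are made up for illustration):

```python
# Hypothetical scanner geometry (not any real scanner)
num_rings = 35
ring_spacing = 4.25  # mm
recommended_num_planes = 2 * (num_rings - 1)
recommended_z_voxel_size = ring_spacing / 2  # mm
print(recommended_num_planes, recommended_z_voxel_size)
```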

Using images with non-zero offset

It is possible to create images where a different origin is used. How you do this depends on the file format, but for Interfile, this can be done by changing the first pixel offset (mm) keywords. This is not recommended of course.

The only STIR utility that creates images with non-zero offset is zoom_image. It is designed such that if you specify zero offsets and all image sizes are odd, the object will remain in the same physical location compared to the scanner. Its usage is

zoom_image <output filename> <input filename> \
   sizexy [zoomxy [offset_in_mm_x [offset_in_mm_y \
     [sizez [zoomz [offset_in_mm_z]]]]]]

If you need to zoom an image (e.g. for estimate_scatter), it is therefore highly recommended to use zoom_image (as opposed to trying to figure out yourself where the objects will go).

As a developer, you can create images with non-zero offset using the VoxelsOnCartesianGrid constructor with non-zero origin. This is not recommended however for future compatibility. The stir::zoom_image function should be safe.

Projection related FAQs

ERROR: DataSymmetriesForBins_PET_CartesianGrid can currently only support z-grid spacing equal to the ring spacing of the scanner divided by an integer. Sorry

You can see this error when using forward projection of an image (e.g. when computing attenuation correction factors), or backprojection to an image. It happens because the projectors try to save memory (and time) by using a symmetry in the axial direction. To avoid this error, you have to use a z-spacing for the image which is half the ring-spacing of the scanner. (Although the symmetries could be used in other cases as well, there seem to be some problems with the projectors in such cases in the current version of STIR).

Building related FAQs

Unless you're building STIR on some exotic or very recent system, the building process should be straightforward (after reading the STIR_UsersGuide!).


Before building on Ubuntu and Debian, you should do the following:

apt-get install  gcc g++ make libncurses-dev libX11-dev libboost-dev tcsh

(prefix with sudo for Ubuntu). The User's guide contains details for other systems.

What's the difference between "make all" and "make install"?

"make all" (or indeed "make") will only compile. "make install" will copy executables and a few scripts to ${INSTALL_PREFIX}/bin.

The "install" target "depends" on "all", i.e. "make install" will implicitly update "all" first. That's a complicated way of saying that you only need "make install".

By the way, with current STIR Makefiles, "make test" will first check if the library is compiled and up-to-date, and if not, build it. In contrast, for the Makefiles generated by 'CMake', "make test" will run the compiled test executables without checking if they are up-to-date.

A good practice on multi-core systems is to add the option "-jX", where "X" is the number of cores in the system. This option will dramatically speed up the process.

Common compilation errors

Problems with missing include files

For instance

-DSC_XWINDOWS -o opt/display/gen.o -MMD -MP  -c display/gen.c ;
In file included from display/gen.c:24:0:
display/gen.h:114:48: fatal error: curses.h: No such file or directory
compilation terminated.

This message is telling you that you need the curses.h file (it is used in STIR when selecting GRAPHICS=X). See #Prerequisites (and the User's Guide) for installing extra packages.

Problems with boost

You do have boost, but are getting a compilation error as follows

./include/boost/config/compiler/gcc.hpp:92:7: warning: #warning "Unknown
 compiler version - please run the configure tests and report the results"

You need to upgrade your boost version. Your compiler version is more recent than the boost version that you have.

Recent compilers

STIR 2.1 (and earlier) cannot be compiled with very recent compilers such as gcc 4.6.x or CLang++. This problem is fixed in STIR 2.2. See the stir-users mailing list for more info. Example error:

error: uninitialized const ‘stir::BSpline::near_n_BSpline_function’
/opt/stir_2.1/include/stir/numerics/BSplines_weights.inl:78:9: note:
‘const class stir::BSpline::BSplineFunction<(stir::BSpline::BSplineType)0u,
double>’ has no user-provided default constructor

Problems with linking

These errors occur after a long time in the building process, as all "object files" are compiled first, before the executables are built by linking the object files and the libraries. The errors will typically mention a missing library.

You are probably missing some development libraries. See the installation guide.

Testing related FAQs

All tests fail with a not found message

You get many messages like 138: generate_image: not found

You need to install all STIR executables into a directory and either add this to your path (see #Why can't I run any of the STIR executables and/or scripts?), or pass this directory to the test script (see recon_test_pack/README.txt).

Why does run_tests in the recon_test_pack fail?

A lot of tests fail

Most likely this is due to a bug in the "incremental backprojector" which crops up depending on compiler/optimisation settings etc. An extensive discussion of this bug appeared on the stir-users mailing list.

For example, a known system that has this problem is 64-bit Ubuntu using gcc 4.5 (and -O3).

There are several ways to check:

  • run recon_test_pack with the --nointbp flag to remove tests that use this projector
  • display the sensitivity image that you're getting. Most likely you'll see a hot spot in the middle or some 45 degree lines

You could always compile as 32bit even on 64bit linux of course (use EXTRA_CFLAGS=-m32 EXTRA_LINKFLAGS=-m32, and don't forget to either change DEST or make clean). However, we do not recommend using the incremental backprojector for the iterative algorithms anyway as there's no corresponding forward projector.

Only the OSSPS quadratic prior test fails

For STIR 2.2 (and earlier) there is a known problem with the test for OSSPS in the recon_test_pack on cygwin with gcc 4.5.3. OSSPS is fine, but the test is too optimistic.

Why does run_ecat_tests in the recon_test_pack fail?

If it (only) fails on the ECAT6 data, this is because of a known problem in the LLN Matrix library that appears on more modern systems (e.g. 64 bit). Search the stir-users mailing list for some info. If you really need ECAT6 support, maybe you can disable optimisation when compiling the LLN Matrix library.

Why can't I run any of the STIR executables and/or scripts?

STIR comes as a set of executables and scripts. For normal usage, these need to be in the path of your shell. If this isn't the case, you will see something like

$ generate_image mygreatimage.par
-bash: generate_image: command not found

The exact message you would see depends on your environment.

The recommended way to solve this is to install the STIR executables and scripts into a directory and then add this directory to your path. For example on Linux (or Cygwin) when using the handcrafted Makefiles:

$ cd /where/ever/is/STIR
$ mkdir /where/ever/you/want/it
$ make install INSTALL_PREFIX=/where/ever/you/want/it
$ PATH=$PATH:/where/ever/you/want/it/bin
$ LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/where/ever/you/want/it/lib

When using CMake, you normally set the installation location when running CMake, but you still need to do make install. And when using csh as your shell, you need to replace the last two lines with csh syntax:

$ set path=( $path /where/ever/you/want/it/bin )
$ setenv LD_LIBRARY_PATH $LD_LIBRARY_PATH:/where/ever/you/want/it/lib

If you get 'permission denied' messages when creating the installation directory (or when doing the installation), it is probably because you chose a location for which you do not have write permission (such as /usr/local). Unless you want to make STIR available to all users on a machine, it is recommended to use a subdirectory of your home directory.

You might want to make sure that STIR is always in your path. To do that, copy the PATH and LD_LIBRARY_PATH lines above into your startup file. Depending on your default shell, this will be a file called ~/.bash_profile, ~/.bashrc, ~/.profile or ~/.cshrc on Unix/Linux/Cygwin. On Windows you will need to set the Path environment variable via the System Control Panel.

STIR and output of other simulators

Note that since STIR 3.0, there is an example script to do your own analytic simulations with STIR in examples/PET_simulation.


STIR comes with utilities to read SimSET sinograms, check the SimSET subdirectory and its README.txt


See the howto on this wiki


  • if you have a ROOT file, see Nikolaos' presentation at the STIR 2013 User's Meeting
  • if you output as raw, try the STIR 3.0 utility conv_GATE_raw_ECAT_projdata_to_interfile
  • if you output as ECAT7, run ifheaders_for_ecat7, and then manually edit the .hs file to reflect the correct scanner size. A lot of the info should be alright, but the scanner radius and scanner name are wrong (set the latter to unknown). You should know the radius, as you ran the simulation! Block info etc. is currently ignored by STIR, so don't bother filling it in.
  • Almost working code exists to read LMF data via STIR. You can find it on the registered users page for STIR. Sadly, nobody has ever finalised this.

What are good settings for the reconstruction programs?

This is unfortunately a very hard question to answer. Look for some advice on the mailing lists. Note that the STIR defaults are not optimal, and neither are the sample .par files (and most definitely not the .par files in the recon_test_pack). A few things that seem pretty clear:

Should I change the default projectors used by the reconstruction?

For STIR 2.x, the default for the iterative reconstructions was to use a ray tracing forward projector and an incremental interpolating backprojector. This turned out to be a bad choice as these projectors are not matched, which creates problems (even if the incremental backprojector does work on your system, see the known problems). It's probably best to use matched projectors, and the easiest way to do that is to use a "matrix". The fastest is the ray tracing matrix. This is the default projector since STIR 3.0.

However, this is still not ideal as this defaults to using only 1 ray per bin in the sinogram. This can create discretisation artefacts in high count situations. For the iterative algorithms, you should therefore probably use something like this in your .par file:

projector pair type:= Matrix
Projector Pair Using Matrix Parameters:=
  Matrix type:= Ray Tracing
  Ray tracing matrix parameters:=
    number of rays in tangential direction to trace for each bin:= 10
  End Ray tracing matrix parameters:=
End Projector Pair Using Matrix Parameters:=

For the analytic algorithms, using a ray tracer as backprojector can create (other) discretisation artefacts. You can alleviate this by using more rays, but in 3D, the ray tracing matrix doesn't take a Jacobian into account (as you don't have to for iterative reconstructions). Either stick to the default interpolating backprojector, or use its matrix equivalent. See the User's Guide.

How many subsets should I use?

If you can afford to wait: 1. Otherwise as small as possible. However, because the STIR projectors use symmetries (unless you switch them off) the number of subsets needs to divide number_of_views/x where x is 4,2 or 1 depending on which x gives you an integer number in the division (when using all symmetries). For example, if you have 210 views, x=2, so you could use 1,3,5,7 and their multiples that still divide 105.
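The rule above can be sketched as a small computation. For the 210-view example, x=2 (since 210/4 is not an integer but 210/2 is), so the valid subset numbers are the divisors of 105:

```python
views = 210
# find the largest x in (4, 2, 1) for which views/x is an integer
x = next(x for x in (4, 2, 1) if views % x == 0)
base = views // x  # 105 for 210 views
# valid numbers of subsets are the divisors of views/x
valid_subsets = [s for s in range(1, base + 1) if base % s == 0]
print(valid_subsets)
```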

Which regularisation method should I use?

Kris Thielemans doesn't like "early stopping". Post-filtering is straightforward (but you need to iterate longer than you think). You can use inter-iteration filtering or penalised reconstruction, but be aware that these create non-uniform resolution/regularisation. This is well-known in the literature, and can be fixed in STIR, but the relevant code is not yet available. (Remember also that OSL can diverge in noisy cases when the regularisation is too high.)

In any case, you should normally not mix different regularisation methods (unless you know what you're doing of course).

What "penalisation factor" should I use for a particular prior?

This is entirely prior and data dependent. Think about it this way. The objective function is something like

Log-Poisson-likelihood + penalisation_factor * prior

where the prior is image-dependent. The log-Poisson is proportional to the projection data, and so is the (STIR-) reconstructed image.

Quadratic Prior

The QP prior is essentially (image_differences)^2, therefore the prior is proportional to (projection_data)^2. This means that if you want the QP to achieve count-independent smoothing, you have to make the penalisation factor inversely proportional to the counts.
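A hedged sketch of that scaling (the function and reference values are hypothetical, not part of STIR): if you tuned a penalisation factor at some reference count level, you can keep the QP smoothing roughly count-independent by scaling it inversely with the total counts:

```python
# Hypothetical reference point: a penalisation factor tuned on a
# scan with 1e7 total counts
reference_counts = 1e7
reference_beta = 0.05

def beta_for(counts):
    """Scale the QP penalisation factor inversely with total counts."""
    return reference_beta * reference_counts / counts

print(beta_for(2e7))  # twice the counts -> roughly half the factor
```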

One way to fix this is to use the "uniform resolution weights for the QP" from Fessler et al. That's currently not distributed with STIR however. So, I'm afraid you're down to tuning things for your data.

Median Root Prior

Although there is not really a prior function for MRP, its gradient is independent of the image scale, and the gradient of the log-likelihood is independent of counts as well (after enough iterations). Therefore, the effect of MRP is not count dependent like QP. However, it does depend on sensitivity/attenuation etc. You can use the "multiplicative form" of the update (see the User's Guide) to avoid this (in which case 1 is a high penalty), although the "additive form" of MRP is in principle better as it has higher influence where the sensitivity is lowest (note that MRP with a large penalty is hampered by convergence problems when using OSL).

Other questions

what about arc-correction?

(see the glossary for what this is. GE tends to use "geometric correction" for the same concept)

STIR gets info about the arc-correction from the data. For instance, for Interfile you could see

applied corrections:={arc correction}

STIR reconstruction algorithms will automatically handle this for you, or tell you that they cannot. For instance, FBP (2D and 3D) will arc-correct first if still necessary. The iterative algorithms will not precorrect the data, but it is up to the projectors to adjust. However, the (default) incremental interpolating backprojector cannot handle non-arccorrected data. It will say something like

ERROR: BackProjectorByBinUsingInterpolation:
can only handle arc-corrected data (cast to ProjDataInfoCylindricalArcCorr)!

Change the projector.

How do I add my own scanner to STIR?

You might not need to. If you specify the scanner geometry in your Interfile header, STIR will handle it ok.

For instance, you could use create_projdata_template, pick a scanner that might be somewhat similar to yours, and then edit the generated Interfile header. The scanner part of the header takes the same information as Scanner::set_params() (take care of the difference between mm and cm). Obviously, it also contains information such as the actual number of views, ring differences etc. that is supposed to be in your data. (Check the STIR glossary as well for some info.) Once you have this template, you should be good to go.

Alternatively, you will have to modify the Scanner class. Marc Chamberland gave a good explanation of this on the stir-users list.

Note that STIR (at least 2.2 and earlier) ignores view_offset and block information.

Content license

All content on this wiki uses the Creative Commons Attribution-ShareAlike 3.0 Unported License.