Running Nebula
After you have compiled Nebula, the build/bin folder contains two or three executables:
nebula_gpu
: This is the main simulator program that runs on the GPU. It is only compiled if a CUDA compiler was found on the system.
nebula_cpu_mt
: This is a multi-threaded CPU-only version.
nebula_cpu_edep
: A special CPU-only version that outputs locations inside the sample where energy was lost.
Invoking via the command line
Each of the programs is run with the following signature:
./nebula [options] geometry.tri primaries.pri material1.hdf5 [...] materialN.hdf5
The exact file formats of the geometry and primary electrons are described in detail on separate pages. We also provide tutorials that should get you up and running with your first simulations.
The geometry file labels materials by numbers. These numbers correspond directly to the order in which the materials are supplied on the command line: material 0 in the geometry file refers to the first material on the command line, material 1 to the second, and so on.
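For example, a run with two materials could look like this (sample.tri, primaries.pri, silicon.hdf5 and pmma.hdf5 are placeholder names for your own files):
./nebula_gpu sample.tri primaries.pri silicon.hdf5 pmma.hdf5
Here, triangles labelled 0 in sample.tri are simulated with silicon.hdf5, and triangles labelled 1 with pmma.hdf5.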
The simulator sends its output (which is in a binary format) to the standard output. This output should be redirected to a file or piped to a program that analyses it on the fly. Redirecting to a file works as follows:
./nebula (parameters) > target_file
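Alternatively, the output can be piped straight into a program that processes it on the fly; the analysis tool named below is purely illustrative:
./nebula (parameters) | ./my_analysis_tool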
Command line parameters
The other options may be supplied in the format --key=value. Each of the executables has the following available options:
seed
: Sets the random seed. The seed is not randomized by default. Note, though, that if the simulation runs on multiple GPUs or CPU cores simultaneously, work is assigned to threads in an unpredictable manner. So for reproducibility, the simulation must always be run on a single GPU or CPU core.
energy-threshold
: Sets the lowest energy (w.r.t. vacuum level) to be included in the simulation. By default, this is zero, so all electrons that can reach the vacuum are simulated. Increasing this setting means that electrons are taken out of the simulation sooner, which can lead to big speedups.
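As a sketch, the following command fixes the random seed and sets a non-zero energy threshold. The seed, the threshold value and the file names are only illustrative, and the threshold is assumed to be given in the same energy unit as the rest of the input:
./nebula_cpu_mt --seed=12345 --energy-threshold=50 sample.tri primaries.pri material.hdf5 > output.det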
nebula_gpu has a few options specific to running the simulator on a GPU. These are mostly a consequence of the fact that we cannot dynamically allocate more memory on the GPU. The options are:
batch-factor
: New electrons must be added to the simulation in batches. Before the simulation starts, the simulator does a "prescan" to find out how often a batch needs to be added and how big it must be. It is possible, though, that the prescan accidentally chooses too large a batch size, in which case you will see the simulation getting stuck. Reducing the batch factor reduces the batch size and avoids this. The default value is 0.9.
capacity
: The maximal number of electrons that can be simulated at once. There is a large performance penalty for setting this too small, and a small penalty for setting it too large. It does not affect the simulation results. The default value works well for all GPUs we tested. If the simulation gets stuck, changing the capacity will not help, because the batch size scales with the capacity.
prescan-size
: Number of electrons to use in the prescan. Larger values result in a more accurate prescan, but take more time.
sort-primaries
: Whether to sort the primary electrons before simulating. Sorting them (by starting position) speeds up the simulation itself, but the time spent sorting them usually costs more than is saved. Disabled by default.
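If a GPU simulation does get stuck, retrying with a lower batch factor is a reasonable first step. The value 0.7 below is an arbitrary example, and the file names are placeholders:
./nebula_gpu --batch-factor=0.7 sample.tri primaries.pri material.hdf5 > output.det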
Finally, nebula_cpu_edep has two types of output: the regular output with detected electrons, and the energy deposition data. Therefore, it has the following additional options:
detect-filename
: File name to send the data for detected electrons to. This is detected.bin by default.
deposit-filename
: File name to send the data for energy deposits to. By default, this is sent to the standard output.
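For example, the following command (with placeholder file names) writes the detected electrons to a custom file and redirects the energy deposits, which go to the standard output by default, to a separate file:
./nebula_cpu_edep --detect-filename=electrons.bin sample.tri primaries.pri material.hdf5 > deposits.bin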