In source/Makefile, the switch -DMPI turns multi-chain MPI support on (if present) or off (if absent). When the switch is present, the number of chains is automatically set equal to the number of processors used in the run. Each chain then produces output files with an id tag "_i", with i=1,...,n (n being the number of processors).
From the source directory, the command
make cleanall
cleans all the compiled files and executables. The command
make all
then recompiles the whole package building static libraries for each of the codes included into the package. Then those are linked to the MonteCarlo source files. The command
make superbayes
only recompiles the MCMC files in the source directory. Use
make getplots
to compile the getplots routine (for chain analysis and plotting).
For testing purposes the tester.f90 file is provided. The command
make tester
will compile it.
The run is made from the command line with the command
tester
The parameters for running the tester are hardcoded in the source/tester.f90 file, and are the same as described below. Setting debug=.true. writes the full output, with detailed information about the point being considered, to the file spectrum.out. By default a file called ifort.* containing the formatted output is created too. The tester works in single-point mode if test_chain = .false.; otherwise it can read in a list of points from an existing chain (the filename of the chain file is hardcoded in tester.f90), which can be useful for testing purposes.
If your installation has been successful, the output of running tester should match the content of the file tester.output in the main directory.
If MPI is turned off, SuperBayeS is invoked in single-processor mode from the command line with the command
superbayes SampleIniFile.ini
The corresponding MPI command depends on the configuration of your machine. The SampleIniFile.ini file contains all parameters for the run. Currently, only the Constrained MSSM is supported, but the package is easily customizable to extend the MCMC to the general MSSM if required. The syntax of the .ini file is inherited from the cosmomc package, and the meaning of the parameters is explained here.
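As an illustration, a minimal .ini file for an MCMC run might combine the parameters documented below as follows (the values are hypothetical and for illustration only):

```ini
# Hypothetical minimal run configuration (illustrative values)
file_root = chains/cmssm_test   # root name for the output chains
action = 0                      # 0 = MCMC (Metropolis-Hastings)
restart_and_continue = F        # start a new run
samples = 100000                # accepted samples per chain
feedback = 1                    # moderate amount of screen output
```

The remaining parameters (compute_xxx, use_xxx, priors, etc.) are described below and should be set according to the quantities you want computed and constrained.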
SuperBayeS can be run in MCMC mode (using Metropolis-Hastings) or in grid-scan mode (which returns the likelihood on a multi-dimensional grid at pre-defined spacings in parameter space). See running options for details.
SuperBayeS comes with GetPlots, a routine for analysing the MCMC chains and plotting the results (which is largely based on GetDist, from the cosmomc package - refer to the cosmomc website for further details).
GetPlots is invoked with the command (from SuperBayeS root directory)
getplots GetPlotsSample.ini
It produces the files .margestats (marginalized 1D statistics), .likestats (best-fit point and mean quality-of-fit limits) and .converge (convergence statistics for the chains; refer to the cosmomc website for details). Those files are saved in the output_files folder (any pre-existing files are overwritten).
It also produces matlab files for 2 and 3 dimensional plotting and SuperMongo (SM) files for 1D plots of probability distributions. Those files are stored in the output_files folder, with suffixes .sm and .m
To produce corresponding plots, call SM (for 1D plots) with the command
sm < filename.sm
or matlab via the command
matlab < filename_2D.m or matlab < filename_3D.m
Details of the format of the ensuing plots can be custom-edited in the source files source/matlab.f90 and source/GetPlots.f90 (the latter for SM plot customization).
Analysis and plotting options are explained here.
file_root: the name of the output files produced. They consist of MCMC chains ( .txt ), log files ( .log , containing details of the acceptance ratio and useful timing information if timing=.true. in source/paramdef.f90), and a file containing the name and position of the saved variables ( .info ; which variables are computed and saved depends on the settings of the compute_xxx variables, see below). When postprocessing, this becomes the name of the input chains. Finally, a file containing the computed-observables flags in MCMC mode ( .pp , which is used in postprocessing mode for setting the format of all the chains) is generated as well. If restart = T, the run restarts from the last line of the chains. If chains with the same name already exist, they are overwritten. To prevent this from happening, set the variable FailSafeOn = .true. (hardcoded) in source/utils.F90
out_root: the name of the output files produced when postprocessing (otherwise ignored).
restart_and_continue: set it to F to produce a new run, set it to T to continue from where a previous chain stopped (currently only supported in MCMC mode).
action: set it to 0 to do MCMC (currently only Metropolis-Hastings is supported), set it to 1 to post-process an existing set of chains (useful for computing new variables, doing a rough posterior adjustment for new data or new priors without re-running the whole chain, or computing the indirect detection quantities corresponding to the output of a MCMC run), set it to 4 to compute the likelihood on a fixed grid in parameter space (if MPI is enabled, each chain covers one part of the grid). When run in grid-scanning mode, only physically admissible points (e.g., EWSB achieved, no tachyonic masses, etc) are saved.
When post-processing, redo_like = T will recompute the likelihood from the saved values of the variables in the chains, without recomputing the theory. This is useful if only the data have changed, but you must have saved all of the relevant variables in the chains, as observables are not recomputed. redo_theory = T will recompute the observables, as well (useful if the theoretical predictions have changed). redo_change_like_only = T will just change the likelihood.
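For instance, a post-processing run that recomputes only the likelihood from an existing set of chains could be set up as follows (a sketch; the chain names are hypothetical):

```ini
# Hypothetical post-processing setup (illustrative values)
action = 1                        # post-process an existing set of chains
file_root = chains/cmssm_run      # input chains to post-process
out_root = chains/cmssm_newdata   # name of the post-processed output
redo_like = T                     # recompute the likelihood from the saved variables
redo_theory = F                   # do not recompute the observables
```

Remember that redo_like = T only works if all the variables entering the likelihood were saved in the original chains.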
skip_lines: number of samples that are not saved at the beginning of the run (burn-in period). You might as well save them and remove them later when analysing the chains.
compute_xxx: those flags determine which quantities are computed and saved in the .txt files. Meaning as below.
compute_DM: set it to T to compute the relic dark matter abundance. CDM_purely_LSP = T assumes the LSP is the dark matter, otherwise the dark matter is made up of LSP plus another component and hence the WMAP3 observations are only an upper bound (the latter mode is currently untested).
compute_Direct_Detection : set it to T to compute direct detection cross sections.
compute_Indirect_Detection: set it to T to compute indirect detection quantities.
compute_Collider_Predictions: set it to T to compute collider-related quantities (masses, etc).
compute_BD: set it to T to compute B decay predictions.
compute_FH: set it to T to compute cross sections and branching ratios using FeynHiggs. Customize the variables you want to save by modifying the routine ReduceOutput in source/paramdef.f90 and the corresponding type, Reduced_Out
feedback: controls amount of text printed on standard output. 0 = none, 1 = some, >2 debug mode.
use_xxx: those flags determine which quantities are used in the computation of the likelihood (obviously, to use a quantity you must set the corresponding compute_xxx flag to T). All of the data values are customizable in the file calclike.f90. Refer to our paper for how the likelihood is computed. Meaning as below.
Use_Nuisance: set it to T to use current constraints for nuisance (SM) parameters.
Use_CDM: set it to T to use current cosmological constraints on dark matter abundance.
Use_LEP: set it to T to use constraints from LEP on sparticle masses and Higgs mass.
Use_Anomalous_Mu: set it to T to use constraints on the anomalous magnetic moment of the muon.
Use_bsgamma: set it to T to use constraints on the process B -> s gamma.
Use_Bsmumu: set it to T to use constraints on the process B_s -> mu mu.
Use_Mass_W: set it to T to use constraints on the W mass.
Use_Weak_Mixing_Angle: set it to T to use constraints on the effective weak mixing angle.
Use_Bdecay: set it to T to use constraints on B decay.
Use_DD: set it to T to use constraints on direct dark matter detection (currently not supported).
use_data: select from the list to use current data (Dec 2007) or constraints from future observations (edit your future data in source/likedata.f90).
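Putting the likelihood switches together, a typical selection might look like this (an illustrative sketch; enable only the constraints whose compute_xxx counterparts are switched on):

```ini
# Hypothetical likelihood selection (illustrative)
compute_DM = T
Use_CDM = T            # requires compute_DM = T
Use_LEP = T
Use_Anomalous_Mu = T
Use_bsgamma = T
Use_DD = F             # direct detection constraints currently not supported
```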
propose_matrix: set it to the name of the file containing the covariance matrix from previous runs. Used to adjust the proposal width in the new run.
redo_likeoffset: when postprocessing, it may be useful to apply an offset to the log-likelihood if the new data change it substantially, in order to obtain sensible weights.
samples: number of samples to obtain per chain. All accepted samples are counted (after burn-in). If in grid-mode, this sets a limit to the maximum grid points per chain that will be allowed - increase it as needed.
temperature: temperature of the MCMC (1 by default). Increase to explore the tails, jump more easily to disconnected regions of parameter space, etc (must be matched by the cooling factor when analysing the chains).
rand_seed: if blank this is set from system clock.
use_log: whether to use a log scale (set it to T) or a linear scale (set it to F) for the gaugino and scalar mass parameters.
param_xxx: parameters over which to do Monte Carlo or grid scanning. For MCMC, flat priors are taken on this set of parameters. The meaning of the 5 real numbers is the following:
For MCMC: start_central_val, min_val, max_val, start_width, propose_width
where start_central_val is the central value around which the chain is started, min_val/max_val are the minimum and maximum values allowed (prior range), start_width is the standard deviation around start_central_val from which the starting point is drawn, and propose_width is the proposal width for the Metropolis-Hastings step (overridden if a covariance matrix is present).
For grid scanning: ignored, min_val, max_val, ignored, grid_step
where ignored is irrelevant, min_val/max_val set the grid's boundaries and grid_step gives the step size in that direction. The grid is split among chains if running in MPI mode. If the number of grid points per chain exceeds the number of samples, you will be asked to increase samples.
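As a sketch, a parameter line in the two modes might read as follows (param_m0 is an assumed name standing in for one of the actual param_xxx entries in the sample .ini file; the numbers are illustrative only):

```ini
# MCMC mode: start_central_val, min_val, max_val, start_width, propose_width
param_m0 = 500 60 4000 100 50      # hypothetical values

# Grid-scanning mode: ignored, min_val, max_val, ignored, grid_step
# param_m0 = 0 60 4000 0 100
```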
All of the following options are only relevant if compute_Indirect_Detection is set to true. You might want to use the post-processing routine to compute indirect detection quantities from the saved MCMC run rather than compute them directly during the MCMC.
compute_ID_gadiff: set it to T to compute the differential gamma-ray spectrum.
compute_ID_gacont: set it to T to compute the gamma-ray flux with continuum energy spectrum integrated above some threshold energy.
compute_ID_gamonoc: set it to T to compute the monochromatic gamma-ray flux induced by 1-loop annihilation processes into a 2-body final state containing a photon. There are two such final states: the two-photon final state and the final state with a photon and a Z boson.
compute_ID_antiprot: set it to T to compute the differential spectrum of antiprotons.
compute_ID_antideut: set it to T to compute the differential spectrum of antideuterons.
compute_ID_posit: set it to T to compute the rates of positrons.
compute_ID_positfrac: set it to T to compute the positron fraction.
compute_ID_muonsun: set it to T to compute the total rates in neutrino telescopes from the Sun above some threshold energy.
compute_ID_muonearth: set it to T to compute the total rates in neutrino telescopes from the Earth above some threshold energy.
compute_ID_sundiff: set it to T to compute the differential rates in neutrino telescopes from the Sun.
compute_ID_muonearthdiff: set it to T to compute the differential rates in neutrino telescopes from the Earth.
compute_ID_muonhalo: set it to T to compute the total rates in neutrino telescopes from the halo above some threshold energy.
compute_ID_muonhalodiff: set it to T to compute differential rates in neutrino telescopes from the halo.
compute_ID_sigmav: set it to T to compute the annihilation cross section sigma v at p=0 for neutralino-neutralino annihilation.
compute_ID_efluxes: set it to T to compute differential fluxes over a range of energies for each model. It only works in the postprocessing mode.
num_hm: sets the number of halo profiles used.
modelxx: sets the specific halo models you want to use (see DarkSusy).
pbmodel: sets the antiproton and antideuteron propagation model one wants to use (see DarkSusy).
epmodel: sets the positron diffusion model (see DarkSusy).
ntmodel: sets the neutrino telescopes parameters (see DarkSusy).
cospsi0: for gamma-rays and neutrinos with the chosen halo profile, sets the line-of-sight integration factor j in the direction of observation, defined as the direction forming an angle psi0 with the direction of the galactic centre (see the .ini file).
delta_gamma: for gamma-rays, if the angular resolution of the detector is taken into account, delta_gamma gives the corresponding acceptance solid angle (in sr); otherwise it is set to 0 (see the .ini file).
egam: sets the energy (GeV) for the differential gamma-ray flux.
egath: sets the threshold energy (GeV) for the gamma-ray flux with continuum energy spectrum.
BF: sets the boost factor for antimatter fluxes.
epb: sets the kinetic energy (GeV) of the antiprotons for the differential flux of antiprotons.
edb: sets the kinetic energy (GeV) of the antideuterons for the differential flux of antideuterons.
eep: sets the kinetic energy (GeV) of the positrons for the differential flux of positrons.
eth: sets the energy threshold (GeV) for neutrino telescopes.
thmax: sets the maximum half-aperture angle (degrees) for neutrino telescopes.
enu: sets the neutrino energy (GeV) for differential flux of neutrinos.
theta: sets the angle from center of Earth/Sun in degrees for neutrino telescopes.
rtype: sets the type of fluxes (see the .ini file).
delta_nt: as delta_gamma for neutrino fluxes.
efluxes_i: sets the initial energy of the flux spectrum computation when compute_ID_efluxes is on.
efluxes_f: sets the final energy of the flux spectrum computation when compute_ID_efluxes is on.
nbins: sets the number of bins to be scanned in the flux spectrum computation. Notice that the binning is logarithmic.
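As a sketch, an indirect-detection setup computing the differential gamma-ray flux might combine the options above as follows (the numerical values are illustrative only):

```ini
# Hypothetical indirect-detection settings (illustrative values)
compute_Indirect_Detection = T
compute_ID_gadiff = T    # differential gamma-ray spectrum
egam = 10                # gamma-ray energy in GeV
num_hm = 1               # number of halo profiles used
BF = 1                   # boost factor for antimatter fluxes
```

As noted above, it is often more efficient to compute these quantities in post-processing mode (action = 1) rather than during the MCMC run itself.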
Plotting options are set in the GetPlotsSample.ini file as follows:
file_root: name of chains to be analyzed (including chains subdirectory). Numbering of chains set automatically using the chain_num setting.
out_root: name of output files (if empty, the same as file_root)
add_columns: number of new combinations of variables to add to the analysis. Customize this by writing your own AddParams routine in GetPlots.f90
smoothing: set to T to use Gaussian smoothing with a kernel about 3 bins wide. Useful to reduce jaggedness of 2D contours. Set it to F to use top-hat bins (no smoothing).
chain_num: number of chains to process. If 0 it assumes one chain and no filename suffixes.
first_chain: default is 1.
exclude_chain: if you want to exclude one particular chain from the analysis.
num_bins: number of bins per dimension.
skip_bin: number of bins to discard at the edges (use with care).
ignore_rows: number of rows to discard when analysing (burn-in period).
cool: cooling factor, must match the temperature of the chain (default 1).
compare_num: number of other chains to compare this one with by overplotting them on the same graph. You'll need to specify the root filenames of the chains to compare with via compare1, compare2, ....
thin_factor: set it to produce a file_root_thin.txt file containing every thin_factor-th sample.
thin_cool: cooling factor applied in the thin process. It has to match the temperature of the chain (default 1).
adjust_priors: performs rough importance sampling. Write your own AdjustPriors routine.
map_params: set it to T to map chain parameters to any function of the parameters (e.g., transform from linear to log scale for plotting. This will not adjust the prior, though! Use adjust_priors instead). Write your own MapParameters routine.
contour1, contour2: percentage confidence levels for the contours.
force_twotail: set it to T to force two-tailed limits on all variables regardless of the settings for tailsxxx below.
plotparams_num: how many variables to get plots for. If zero, uses all parameters which have labels in .info file plus all added parameters (with labels labAxxx). This can be exceedingly slow, use with care.
plotparams: list of parameter numbers to plot, must match plotparams_num above. For the parameters saved in the chain, look at the .info file to determine which number corresponds to which parameter. For added parameters, use the syntax A1, A2, ..., where a capital A denotes that the number refers to an added parameter (numbering for added parameters goes from 1 up to the maximum determined by the value in add_columns).
plot_2D_param: set it to 0 to produce 2D plots only of a list of parameters combinations (specified below). Set it to the number corresponding to the parameter you want to have 2D plots of (plotted against all other parameters that have labels).
plot_2D_num: number of parameters combinations for which to produce a 2D plot (mean quality of fit and marginal probability density will be both plotted by default and saved in different .ps files). Only relevant if plot_2D_param = 0. You must then specify a list of plot1, plot2, ... giving the couples of parameters that you want to plot against each other. Use A to denote added variables, same syntax as above.
num_3D_plots: number of 3D scatter plots to produce. You must then specify a list of 3D_plot1, 3D_plot2, ... giving the triplets of parameters that you want to plot (x axis, y axis and coloured variable). Use A to denote added variables, same syntax as above.
all_2D_plots: set it to T if you want to produce plotting data files for all possible 2D combinations (although the ones that will be included in the .m file are still controlled by the plot_2D_num variable above). This is useful if you want to plot some other parameter combination subsequently without having to re-run GetPlots to get the corresponding matlab file. Careful, setting this to T can produce several thousand plot files.
do_3D_plots: set it to T to produce a single samples file used by 3D plots. The setting is overridden to T if num_3D_plots > 0.
cov_matrix_dimension: number of parameters to get covariance matrix for. If you are going to use the output as a proposal density make sure you have map_params= F, and the dimension equal to the number of MCMC parameters of the run (9 for the CMSSM).
labA1, labA2: labels for added parameters. Parameters saved in the chain automatically get their labels from the .info file.
limitsxx: lower and upper limit in the 1D SM plot of the corresponding variable (notice: this is different from the original GetDist). Use limitsA1, limitsA2, ... for added variables instead.
tailsxxx: set to 1 for one-tailed probabilities and to 2 for two-tailed (default). For added variables use tailsA1, tailsA2, ... instead. Overridden if force_twotail = T.
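Tying the plotting options together, a GetPlots configuration selecting a few 1D, 2D and 3D plots might look like this (a sketch: the chain names, parameter numbers and the exact plot1/3D_plot1 value syntax are assumptions to be checked against GetPlotsSample.ini):

```ini
# Hypothetical GetPlots setup (illustrative values)
file_root = chains/cmssm_run
chain_num = 4            # process chains _1 ... _4
ignore_rows = 500        # discard burn-in samples
plotparams_num = 2
plotparams = 1 2         # parameter numbers from the .info file
plot_2D_param = 0
plot_2D_num = 1
plot1 = 1 2              # 2D plot of parameter 1 vs parameter 2
num_3D_plots = 1
3D_plot1 = 1 2 3         # x axis, y axis and coloured variable
```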