Darshan-runtime installation and usage
======================================

== Introduction

This document describes darshan-runtime, which is the instrumentation
portion of the Darshan characterization tool. It should be installed on the
system where you intend to collect I/O characterization information.

Darshan instruments applications via either compile time wrappers or
dynamic library preloading. An application that has been instrumented with
Darshan will produce a single log file each time it is executed. This log
summarizes the I/O access patterns used by the application.

The darshan-runtime instrumentation has traditionally only supported MPI
applications (specifically, those that call `MPI_Init()` and
`MPI_Finalize()`), but, as of version 3.2.0, Darshan also supports
instrumentation of non-MPI applications. Regardless of whether MPI is used,
Darshan provides detailed statistics about POSIX level file accesses made
by the application. In the case of MPI applications, Darshan additionally
captures details on MPI-IO and HDF5 level access, as well as limited
information about PnetCDF access. Note that instrumentation of non-MPI
applications is currently only supported in Darshan's shared library, which
applications must `LD_PRELOAD`.

Starting in version 3.0.0, Darshan also exposes an API that can be used to
develop and add new instrumentation modules (for other I/O library
interfaces or to gather system-specific data, for instance), as detailed in
http://www.mcs.anl.gov/research/projects/darshan/docs/darshan-modularization.html[this document].
Newly contributed modules include a module for gathering system-specific
parameters for jobs running on BG/Q systems, a module for gathering Lustre
striping data for files on Lustre file systems, and a module for
instrumenting stdio (i.e., stream I/O functions like `fopen()`, `fread()`,
etc.).

Starting in version 3.1.3, Darshan also allows for full tracing of
application I/O workloads using the newly developed Darshan eXtended
Tracing (DxT) instrumentation module. This module can be selectively
enabled at runtime to provide high-fidelity traces of an application's I/O
workload, as opposed to the coarse-grained I/O summary data that Darshan
has traditionally provided. Currently, DxT only traces at the POSIX and
MPI-IO layers. Initial link:DXT-overhead.pdf[performance results]
demonstrate the low overhead of DxT tracing, offering comparable
performance to Darshan's traditional coarse-grained instrumentation
methods.

This document provides generic installation instructions, but "recipes" for
several common HPC systems are provided at the end of the document as well.

More information about Darshan can be found at the
http://www.mcs.anl.gov/darshan[Darshan web site].

== Requirements

* C compiler (preferably GCC-compatible)
* zlib development headers and library

== Compilation

.Configure and build example (with MPI support)
----
tar -xvzf darshan-<version>.tar.gz
cd darshan-<version>/darshan-runtime
./configure --with-log-path=/darshan-logs --with-jobid-env=PBS_JOBID CC=mpicc
make
make install
----

.Configure and build example (without MPI support)
----
tar -xvzf darshan-<version>.tar.gz
cd darshan-<version>/darshan-runtime
./configure --with-log-path=/darshan-logs --with-jobid-env=PBS_JOBID --without-mpi CC=gcc
make
make install
----

.Explanation of configure arguments:
* `--with-mem-align=<num>`: This value is system-dependent and will be used
by Darshan to determine if the buffer for a read or write operation is
aligned in memory (default is 8).
* `--with-jobid-env=<name>` (mandatory): this specifies the environment
variable that Darshan should check to determine the jobid of a job. Common
values are `PBS_JOBID` or `COBALT_JOBID`. If you are not using a scheduler
(or your scheduler does not advertise the job ID) then you can specify
`NONE` here. Darshan will fall back to using the pid of the rank 0 process
if the specified environment variable is not set.
* `--with-log-path=<path>` (this, or `--with-log-path-by-env`, is
mandatory): This specifies the parent directory for the directory tree
where Darshan logs will be placed.
** NOTE: after installation, any user can display the configured path with
the `darshan-config --log-path` command
* `--with-log-path-by-env=<env var name>`: specifies an environment
variable to use to determine the log path at run time.
* `--with-log-hints=<hints>`: specifies hints to use when writing the
Darshan log file. See `./configure --help` for details.
* `--with-mod-mem=<num>`: specifies the maximum amount of memory (in MiB)
that active Darshan instrumentation modules can collectively consume.
* `--with-zlib=<path>`: specifies an alternate location for the zlib
development header and library.
* `--without-mpi`: disables MPI support when building Darshan - MPI support
is assumed if not specified.
* `--enable-mmap-logs`: enables the use of Darshan's mmap log file
mechanism.
* `--disable-cuserid`: disables use of cuserid() at runtime.
* `--disable-ld-preload`: disables building of the Darshan LD_PRELOAD
library.
* `--enable-group-readable-logs`: sets Darshan log file permissions to
allow group read access.
* `CC=`: specifies the C compiler to use for compilation.

.Configure arguments for controlling which Darshan modules to use:
* `--disable-posix-mod`: disables compilation and use of Darshan's POSIX
module (default=enabled)
* `--disable-mpiio-mod`: disables compilation and use of Darshan's MPI-IO
module (default=enabled)
* `--disable-stdio-mod`: disables compilation and use of Darshan's STDIO
module (default=enabled)
* `--disable-dxt-mod`: disables compilation and use of Darshan's DXT module
(default=enabled)
* `--enable-hdf5-mod`: enables compilation and use of Darshan's HDF5 module
(default=disabled)
** NOTE: This option requires the HDF5 install prefix as an argument (e.g.,
`--enable-hdf5-mod=/path/to/hdf5/`)
** NOTE: HDF5 instrumentation only works on HDF5 library versions >=1.8,
and further requires that the HDF5 library used to build Darshan and the
HDF5 library being linked in either both be version >=1.10 or both be
version <1.10
** NOTE: This option does not work with the profile configuration
instrumentation method described in the "Instrumenting applications"
section (link:darshan-runtime.html#_using_a_profile_configuration[Using a
profile configuration])
* `--disable-pnetcdf-mod`: disables compilation and use of Darshan's
PNetCDF module (default=enabled)
* `--disable-lustre-mod`: disables compilation and use of Darshan's Lustre
module (default=enabled)
* `--enable-mdhim-mod`: enables compilation and use of Darshan's MDHIM
module (default=disabled)

== Environment preparation

Once darshan-runtime has been installed, you must prepare a location in
which to store the Darshan log files and configure an instrumentation
method.

This step can be safely skipped if you configured darshan-runtime using the
`--with-log-path-by-env` option, in which case the log directory is taken
from an environment variable at run time, as sketched below.
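For example, a minimal sketch of the environment-variable approach
(`MY_DARSHAN_LOG_DIR` is a hypothetical variable name; substitute whatever
name you passed to `--with-log-path-by-env` at configure time):

----
# configure time: tell Darshan which variable holds the log directory
./configure --with-log-path-by-env=MY_DARSHAN_LOG_DIR --with-jobid-env=PBS_JOBID CC=mpicc

# run time: point that variable at a writable directory before launching jobs
export MY_DARSHAN_LOG_DIR=/home/user/darshan-logs
----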
A more typical configuration uses a static directory hierarchy for Darshan
log files. The `darshan-mk-log-dirs.pl` utility will configure the path
specified at configure time to include subdirectories organized by year,
month, and day in which log files will be placed. The deepest
subdirectories will have sticky permissions to enable multiple users to
write to the same directory. If the log directory is shared system-wide
across many users then the following script should be run as root.

----
darshan-mk-log-dirs.pl
----

.A note about finding log paths after installation
[NOTE]
====
Regardless of whether a Darshan installation is using the --with-log-path
or --with-log-path-by-env option, end users can display the path (and/or
environment variables) at any time by running `darshan-config --log-path`
on the command line.
====

.A note about log directory permissions
[NOTE]
====
All log files written by Darshan have permissions set to only allow read
access by the owner of the file. You can modify this behavior, however, by
specifying the --enable-group-readable-logs option at configure time. One
notable deployment scenario would be to configure Darshan and the log
directories to allow all logs to be readable by both the end user and a
Darshan administrators group. This can be done with the following steps:

* set the --enable-group-readable-logs option at configure time
* create the log directories with darshan-mk-log-dirs.pl
* recursively set the group ownership of the log directories to the Darshan
administrators group
* recursively set the setgid bit on the log directories
====

== Instrumenting applications

[NOTE]
====
More specific installation "recipes" are provided later in this document
for some platforms. This section of the documentation covers general
techniques.
====

Once Darshan has been installed and a log path has been prepared, the next
step is to actually instrument applications. The preferred method is to
instrument applications at compile time.

=== Option 1: Instrumenting MPI applications at compile time

This method is applicable to C, Fortran, and C++ MPI applications
(regardless of whether they are statically or dynamically linked) and is
the most straightforward method to apply transparently system-wide. It
works by injecting additional libraries and options into the linker command
line to intercept relevant I/O calls.

On Cray platforms you can enable the compile time instrumentation by simply
loading the Darshan module. It can then be enabled for all users by placing
that module in the default environment. As of Darshan 3.2.0 this will
instrument both static and dynamic executables, while in previous versions
of Darshan this was only sufficient for static executables. See the Cray
installation recipe for more details.

For other general MPICH-based MPI implementations, you can generate
Darshan-enabled variants of the standard mpicc/mpicxx/mpif90/mpif77
wrappers using the following commands:

----
darshan-gen-cc.pl `which mpicc` --output mpicc.darshan
darshan-gen-cxx.pl `which mpicxx` --output mpicxx.darshan
darshan-gen-fortran.pl `which mpif77` --output mpif77.darshan
darshan-gen-fortran.pl `which mpif90` --output mpif90.darshan
----

The resulting *.darshan wrappers will transparently inject Darshan
instrumentation into the link step without any explicit user intervention.
They can be renamed and placed in an appropriate PATH to enable automatic
instrumentation. This method also works correctly for both static and
dynamic executables as of Darshan 3.2.0.
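Once a generated wrapper is in place, it is used exactly like the
underlying compiler script; for example (`mpi-io-test.c` here is a
stand-in for your own source file):

----
mpicc.darshan -o mpi-io-test mpi-io-test.c
----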
For other systems you can enable compile-time instrumentation by either
manually adding the appropriate link options to your command line or
modifying your default MPI compiler script. The `darshan-config` command
line tool can be used to display the options that you should use:

----
# Linker options to use for dynamic linking (default on most platforms)
# These arguments should go *before* the MPI libraries in the underlying
# linker command line to ensure that Darshan can be activated. They should
# also ideally go before other libraries that may issue I/O function calls.
darshan-config --dyn-ld-flags

# Linker options to use for static linking
# The first set of arguments should go early in the link command line
# (before MPI), while the second set should go at the end of the link
# command line.
darshan-config --pre-ld-flags
darshan-config --post-ld-flags
----

==== Using a profile configuration

The MPICH MPI implementation supports the specification of a profiling
library configuration that can be used to insert Darshan instrumentation
without modifying the existing MPI compiler script. You can enable a
profiling configuration using environment variables or command line
arguments to the compiler scripts:

Example for MPICH 3.1.1 or newer:
----
export MPICC_PROFILE=$DARSHAN_PREFIX/share/mpi-profile/darshan-cc
export MPICXX_PROFILE=$DARSHAN_PREFIX/share/mpi-profile/darshan-cxx
export MPIFORT_PROFILE=$DARSHAN_PREFIX/share/mpi-profile/darshan-f
----

Examples for command line use:
----
mpicc -profile=$DARSHAN_PREFIX/share/mpi-profile/darshan-c
mpicxx -profile=$DARSHAN_PREFIX/share/mpi-profile/darshan-cxx
mpif77 -profile=$DARSHAN_PREFIX/share/mpi-profile/darshan-f
mpif90 -profile=$DARSHAN_PREFIX/share/mpi-profile/darshan-f
----

Note that unlike the previously described methods in this section, this
method *will not* automatically adapt to static and dynamic linking
options. The example profile configurations shown above only support
dynamic linking. Example profile configurations are also provided with a
"-static" suffix if you need examples for static linking.

=== Option 2: Instrumenting MPI applications at run time

This method is applicable to pre-compiled dynamically linked executables as
well as interpreted languages such as Python. You do not need to change
your compile options in any way. This method works by injecting
instrumentation at run time. It will not work for statically linked
executables.

To use this mechanism, set the `LD_PRELOAD` environment variable to the
full path to the Darshan shared library. The preferred method of inserting
Darshan instrumentation in this case is to set the `LD_PRELOAD` variable
specifically for the application of interest. Typically this is possible
using command line arguments offered by the `mpirun` or `mpiexec` scripts
or by the job scheduler:

----
mpiexec -n 4 -env LD_PRELOAD /home/carns/darshan-install/lib/libdarshan.so mpi-io-test
----

----
srun -n 4 --export=LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so mpi-io-test
----

For sequential invocations of MPI programs, the following will set
LD_PRELOAD for the duration of the process only:

----
env LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so mpi-io-test
----

Other environments may have other specific options for controlling this
behavior. Please check your local site documentation for details.
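The same mechanism extends to interpreted languages. For example, a
dynamically linked Python interpreter running a hypothetical mpi4py script
(`my-mpi-script.py` is a placeholder) could be instrumented as follows:

----
mpiexec -n 4 -env LD_PRELOAD /home/carns/darshan-install/lib/libdarshan.so python my-mpi-script.py
----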
It is also possible to simply export `LD_PRELOAD` as follows, but this is
not recommended, as it can cause Darshan and MPI symbols to be pulled into
unrelated binaries:

----
export LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so
----

[NOTE]
For SGI systems running the MPT environment, it may be necessary to set the
`MPI_SHEPHERD` environment variable equal to `true` to avoid deadlock when
preloading the Darshan shared library.

=== Option 3: Instrumenting non-MPI applications at run time

Similar to the process described in the previous section, Darshan relies on
the `LD_PRELOAD` mechanism for instrumenting dynamically-linked non-MPI
applications. This allows Darshan to instrument dynamically-linked binaries
produced by non-MPI compilers (e.g., gcc or clang), extending Darshan
instrumentation to new contexts (like instrumentation of arbitrary Python
programs or instrumenting serial file transfer utilities like `cp` and
`scp`).

The only additional step required of Darshan non-MPI users is to also set
the DARSHAN_ENABLE_NONMPI environment variable to signal to Darshan that
non-MPI instrumentation is requested:

----
export DARSHAN_ENABLE_NONMPI=1
----

As described in the previous section, it may be desirable for users to
limit the scope of Darshan's instrumentation by only enabling LD_PRELOAD on
the target executable:

----
env LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so io-test
----

[NOTE]
Recall that Darshan instrumentation of non-MPI applications is only
possible with dynamically-linked applications.

=== Using other profiling tools at the same time as Darshan

As of Darshan version 3.2.0, Darshan does not necessarily interfere with
other profiling tools (particularly those using the PMPI profiling
interface). Darshan itself does not use the PMPI interface; instead it uses
dynamic linker symbol interception for dynamic executables and `--wrap`
function interception for static executables. As a rule of thumb, most
profiling tools should appear in the linker command line *before*
`-ldarshan` if possible.

== Using the Darshan eXtended Tracing (DXT) module

DXT support is disabled by default in Darshan, requiring the user to either
explicitly enable tracing for all files or to provide a trace trigger
configuration file describing which files should be traced at runtime.

To enable tracing globally for all files, Darshan users simply need to set
the DXT_ENABLE_IO_TRACE environment variable as follows:

----
export DXT_ENABLE_IO_TRACE=1
----

To enable tracing for particular files, DXT additionally offers a trace
triggering mechanism, with users specifying triggers used to decide whether
or not to trace a particular file at runtime. Files that do not match any
trace trigger will not store trace data in the Darshan log. Currently, DXT
supports the following types of trace triggers:

* file triggers: trace files based on regex matching of file paths
* rank triggers: trace files based on regex matching of ranks
* dynamic triggers: trace files based on runtime analysis of I/O
characteristics (e.g., frequent small or unaligned I/O accesses)

Users simply need to specify one or more of these triggers in a text file
that is passed to DXT at runtime -- when multiple triggers are specified,
DXT will keep any file traces that match at least one trigger (i.e., the
trace decision is a logical OR across the given triggers).
An example configuration file is given below, illustrating the syntax to
use for currently supported triggers:

----
FILE .h5$       # trace all files with a '.h5' extension
FILE ^/tmp      # trace all files with a path prefix of '/tmp'
RANK [1-2]      # trace all files accessed by ranks 1-2
SMALL_IO .5     # trace all files with greater than 50% small (less than 10 KB) accesses
UNALIGNED_IO .5 # trace all files with greater than 50% unaligned accesses
----

FILE and RANK triggers take a single parameter representing the regex that
will be compared to the file name and the rank accessing it, respectively.
Regex support is provided by the POSIX `regex.h` interface -- refer to the
https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/regex.h.html[manpage]
for more details on regex syntax. SMALL_IO and UNALIGNED_IO triggers take a
single parameter representing the lower threshold percentage of accesses of
the given type.

Set the DXT_TRIGGER_CONF_PATH environment variable to notify DXT of the
path of the configuration file:

----
export DXT_TRIGGER_CONF_PATH=/path/to/dxt/config/file
----

== Darshan installation recipes

The following recipes provide examples for prominent HPC systems. These are
intended to be used as a starting point. You will most likely have to
adjust paths and options to reflect the specifics of your system.

=== Cray platforms (XE, XC, or similar)

This section describes how to compile and install Darshan, as well as how
to use a software module to enable and disable Darshan instrumentation on
Cray systems.

==== Building and installing Darshan

Please set your environment to use the GNU programming environment before
configuring or compiling Darshan. Although Darshan can be built with a
variety of compilers, the GNU compiler is recommended because it will
produce a Darshan library that is interoperable with the widest range of
compilers and linkers. On most Cray systems you can enable the GNU
programming environment with a command similar to "module swap PrgEnv-intel
PrgEnv-gnu". Please see your site documentation for information about how
to switch programming environments.

The following example shows how to configure and build Darshan on a Cray
system using the GNU programming environment. Adjust the --with-log-path
and --prefix arguments to point to the desired log file path and
installation path, respectively.

----
module swap PrgEnv-pgi PrgEnv-gnu
./configure \
   --with-log-path=/shared-file-system/darshan-logs \
   --prefix=/soft/darshan-2.2.3 \
   --with-jobid-env=PBS_JOBID --disable-cuserid CC=cc
make install
module swap PrgEnv-gnu PrgEnv-pgi
----

.Rationale
[NOTE]
====
The job ID is set to `PBS_JOBID` for use with a Torque or PBS based
scheduler. The `CC` variable is configured to point to the standard MPI
compiler.

The --disable-cuserid argument is used to prevent Darshan from attempting
to use the cuserid() function to retrieve the user name associated with a
job. Darshan automatically falls back to other methods if this function
fails, but on some Cray environments (notably the Beagle XE6 system as of
March 2012) the cuserid() call triggers a segmentation fault. With this
option set, Darshan will typically use the LOGNAME environment variable to
determine a userid.
====

If instrumentation of the HDF5 library is desired, additionally load an
acceptable HDF5 module (e.g., `module load cray-hdf5-parallel`) prior to
building, and use the `--enable-hdf5-mod=/path/to/hdf5/installation/prefix`
configure argument.
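For example, a sketch of the configure sequence above with HDF5
instrumentation enabled (the HDF5 path is a placeholder; substitute the
install prefix of the HDF5 module loaded on your system):

----
module load cray-hdf5-parallel
./configure \
   --with-log-path=/shared-file-system/darshan-logs \
   --prefix=/soft/darshan-2.2.3 \
   --with-jobid-env=PBS_JOBID --disable-cuserid CC=cc \
   --enable-hdf5-mod=/path/to/hdf5/installation/prefix
make install
----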
We additionally recommend that you modify Darshan's generated Cray software
module to include a dependency on the HDF5 software module used -- this is
necessary to ensure Darshan library dependencies are satisfied at
application link and run time.

----
prereq cray-hdf5-parallel
----

Note that the Darshan-enabled Cray compiler wrappers will always prefer
user-supplied HDF5 libraries over the library used to build Darshan.
However, due to ABI changes in the HDF5 library, the two HDF5 libraries
used must be compatible. Specifically, the HDF5 library versions need to be
either both greater than or equal to 1.10 or both less than 1.10. If an
application uses an HDF5 version that is incompatible with the version
Darshan was built against, either link or runtime errors will occur, and
the user will have to switch HDF5 versions or unload the Darshan module.

As in any Darshan installation, the darshan-mk-log-dirs.pl script can then
be used to create the appropriate directory hierarchy for storing Darshan
log files in the --with-log-path directory.

Note that Darshan is not currently capable of detecting the stripe size
(and therefore the Darshan FILE_ALIGNMENT value) on Lustre file systems. If
a Lustre file system is detected, then Darshan assumes an optimal file
alignment of 1 MiB.

==== Enabling Darshan instrumentation

Darshan will automatically install example software module files in the
following locations (depending on how you specified the --prefix option in
the previous section):

----
/soft/darshan-2.2.3/share/craype-1.x/modulefiles/darshan
/soft/darshan-2.2.3/share/craype-2.x/modulefiles/darshan
----

Select the one that is appropriate for your Cray programming environment
(see the version number of the craype module in `module list`).

If you are using the Cray Programming Environment version 1.x, then you
must modify the corresponding modulefile before using it. Please see the
comments at the end of the file and choose an environment variable method
that is appropriate for your system. If this is not done, then the compiler
may fail to link some applications when the Darshan module is loaded.

If you are using the Cray Programming Environment version 2.x then you can
likely use the modulefile as is. Note that it pulls most of its
configuration from the lib/pkgconfig/darshan-runtime.pc file installed with
Darshan.

The modulefile that you select can be copied to a system location, or the
install location can be added to your local module path with the following
command:

----
module use /soft/darshan-2.2.3/share/craype-<version>/modulefiles
----

From this point, Darshan instrumentation can be enabled for all future
application compilations by running "module load darshan".

=== Linux clusters using MPICH

Most MPICH installations produce dynamic executables by default. To
configure Darshan in this environment you can use the following example. We
recommend using mpicc with GNU compilers to compile Darshan.

----
./configure --with-log-path=/darshan-logs --with-jobid-env=PBS_JOBID CC=mpicc
----

The darshan-gen-* scripts described earlier in this document can be used to
create variants of the standard mpicc/mpicxx/mpif77/mpif90 scripts that are
Darshan enabled. These scripts will work correctly for both dynamic and
statically linked executables.

=== Linux clusters using Intel MPI

Most Intel MPI installations produce dynamic executables by default.
To configure Darshan in this environment you can use the following example:

----
./configure --with-log-path=/darshan-logs --with-jobid-env=PBS_JOBID CC=mpicc
----

.Rationale
[NOTE]
====
There is nothing unusual in this configuration except that you should use
the underlying GNU compilers rather than the Intel ICC compilers to compile
Darshan itself.
====

You can enable Darshan instrumentation at compile time by adding
`darshan-config --dyn-ld-flags` options to your linker command line.
Alternatively you can use the `LD_PRELOAD` runtime instrumentation method
to instrument executables that have already been compiled.

=== Linux clusters using Open MPI

Follow the generic instructions provided at the top of this document for
compilation, and make sure that the `CC` used for compilation is based on a
GNU compiler.

You can enable Darshan instrumentation at compile time by adding
`darshan-config --dyn-ld-flags` options to your linker command line.
Alternatively you can use the `LD_PRELOAD` runtime instrumentation method
to instrument executables that have already been compiled.

== Runtime environment variables

The Darshan library honors the following environment variables to modify
behavior at runtime:

* DARSHAN_DISABLE: disables Darshan instrumentation.
* DARSHAN_INTERNAL_TIMING: enables internal instrumentation that will print
the time required to startup and shutdown Darshan to stderr at run time.
* DARSHAN_LOGHINTS: specifies the MPI-IO hints to use when storing the
Darshan output file. The format is a semicolon-delimited list of key=value
pairs, for example: hint1=value1;hint2=value2
* DARSHAN_MEMALIGN: specifies a value for system memory alignment.
* DARSHAN_JOBID: specifies the name of the environment variable to use for
the job identifier, such as PBS_JOBID.
* DARSHAN_DISABLE_SHARED_REDUCTION: disables the step in Darshan
aggregation in which files that were accessed by all ranks are collapsed
into a single cumulative file record at rank 0. This option retains more
per-process information at the expense of creating larger log files. Note
that it is up to individual instrumentation module implementations whether
this environment variable is actually honored.
* DARSHAN_LOGPATH: specifies the path to write Darshan log files to. Note
that this directory needs to be formatted using the darshan-mk-log-dirs
script.
* DARSHAN_LOGFILE: specifies the path (directory + Darshan log file name)
to write the output Darshan log to. This overrides the default Darshan
behavior of automatically generating a log file name and adding it to a log
file directory formatted using the darshan-mk-log-dirs script.
* DARSHAN_MODMEM: specifies the maximum amount of memory (in MiB) Darshan
instrumentation modules can collectively consume at runtime (if not
specified, Darshan uses a default quota of 2 MiB).
* DARSHAN_MMAP_LOGPATH: if Darshan's mmap log file mechanism is enabled,
this variable specifies what path the mmap log files should be stored in
(if not specified, log files will be stored in `/tmp`).
* DARSHAN_EXCLUDE_DIRS: specifies a list of comma-separated paths that
Darshan will not instrument at runtime (in addition to Darshan's default
blacklist).
* DXT_ENABLE_IO_TRACE: setting this environment variable enables the DXT
(Darshan eXtended Tracing) modules at runtime for all files instrumented by
Darshan. Currently, DXT is hard-coded to use a maximum of 4 MiB of trace
memory per process (in addition to memory used by other modules).
* DXT_DISABLE_IO_TRACE: setting this environment variable disables the DXT
module at runtime for all files instrumented by Darshan.
* DXT_TRIGGER_CONF_PATH: file path to a DXT trace trigger configuration
file, which specifies triggers used by DXT to decide which files to trace
at runtime. Note that the trace triggering mechanism is overridden by the
DXT_ENABLE_IO_TRACE and DXT_DISABLE_IO_TRACE environment variables.
* DARSHAN_ENABLE_NONMPI: setting this environment variable is required to
generate Darshan logs for non-MPI applications.
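As an illustration, the following sketch combines several of these
variables to capture a log for a hypothetical non-MPI program (`./my-app`
and the log file path are placeholders) at an explicit location:

----
export DARSHAN_ENABLE_NONMPI=1
export DARSHAN_LOGFILE=/tmp/my-app.darshan
env LD_PRELOAD=/home/carns/darshan-install/lib/libdarshan.so ./my-app
----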
== Debugging

=== No log file

In cases where Darshan is not generating a log file for an application,
some common things to check are:

* Make sure you are looking in the correct place for logs. Confirm the
location with the `darshan-config --log-path` command.
* Check stderr to ensure Darshan isn't indicating any internal errors
(e.g., invalid log file path).

For statically linked executables:

* Ensure that Darshan symbols are present in the underlying executable by
running `nm` on it:

----
> nm test | grep darshan
0000000000772260 b darshan_core
0000000000404440 t darshan_core_cleanup
00000000004049b0 T darshan_core_initialize
000000000076b660 d darshan_core_mutex
00000000004070a0 T darshan_core_register_module
----

For dynamically linked executables:

* Ensure that the Darshan library is present in the list of shared
libraries to be used by the application, and that it appears before the MPI
library:

----
> ldd mpi-io-test
        linux-vdso.so.1 (0x00007ffd83925000)
        libdarshan.so => /home/carns/working/install/lib/libdarshan.so (0x00007f0f4a7a6000)
        libmpi.so.12 => /home/carns/working/src/spack/opt/spack/linux-ubuntu19.10-skylake/gcc-9.2.1/mpich-3.3.2-h3dybprufq7i5kt4hcyfoyihnrnbaogk/lib/libmpi.so.12 (0x00007f0f4a44f000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0f4a241000)
        ...
----

General:

* Ensure that the linker is correctly linking in Darshan's runtime
libraries:
** A common mistake is to explicitly link in the underlying MPI libraries
(e.g., `-lmpich` or `-lmpichf90`) in the link command, which can interfere
with Darshan's instrumentation
*** These libraries are usually linked in automatically by the compiler
*** MPICH's `mpicc` compiler's `-show` flag can be used to examine the
invoked link command, for instance
** The linker's `-y` option can be used to verify that Darshan is properly
intercepting the MPI_Init function (e.g., by setting
`CFLAGS='-Wl,-yMPI_Init'`), which it uses to initialize its runtime
structures

----
/usr/common/software/darshan/3.0.0-pre3/lib/libdarshan.a(darshan-core-init-finalize.o): definition of MPI_Init
----
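For example, a sketch of how the `-y` check above might be invoked when
rebuilding a test program with one of the generated wrappers
(`mpi-io-test.c` is a stand-in for your own source file; output should
resemble the line shown above):

----
mpicc.darshan -Wl,-yMPI_Init -o mpi-io-test mpi-io-test.c
----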