Discussion:
[gmx-users] About Plumed+Gromacs+MPI
yong zhou
2018-12-02 03:13:31 UTC
Dear all,

I have compiled PLUMED+GROMACS with the following options:

CC=/usr/lib64/openmpi3/bin/mpicc FC=/usr/lib64/openmpi3/bin/mpif90 F77=/usr/lib64/openmpi3/bin/mpif90 CXX=/usr/lib64/openmpi3/bin/mpicxx CMAKE_PREFIX_PATH=//usr/lib64/openmpi3/ cmake3 .. -DGMX_BUILD_OWN_FFTW=on -DGMX_MPI=on -DCMAKE_C_COMPILER=/usr/lib64/openmpi3/bin/mpicc -DCMAKE_CXX_COMPILER=/usr/lib64/openmpi3/bin/mpicxx -DGMX_GPU=on -DNVML_INCLUDE_DIR=/usr/local/cuda-9.0/include -DNVML_LIBRARY=/usr/lib64/libnvidia-ml.so -DBUILD_SHARED_LIBS=OFF -DGMX_PREFER_STATIC_LIBS=ON -DCMAKE_INSTALL_PREFIX=/home/yzhou/xtal/gromacs

But when running the PLUMED tutorial, I got the following errors:

:-) GROMACS - gmx mdrun, 2016.5 (-:

GROMACS is written by:
Emile Apol Rossen Apostolov Herman J.C. Berendsen Par Bjelkmar
Aldert van Buuren Rudi van Drunen Anton Feenstra Gerrit Groenhof
Christoph Junghans Anca Hamuraru Vincent Hindriksen Dimitrios Karkoulis
Peter Kasson Jiri Kraus Carsten Kutzner Per Larsson
Justin A. Lemkul Magnus Lundborg Pieter Meulenhoff Erik Marklund
Teemu Murtola Szilard Pall Sander Pronk Roland Schulz
Alexey Shvetsov Michael Shirts Alfons Sijbers Peter Tieleman
Teemu Virolainen Christian Wennberg Maarten Wolf
and the project leaders:
Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel

Copyright (c) 1991-2000, University of Groningen, The Netherlands.
Copyright (c) 2001-2017, The GROMACS development team at
Uppsala University, Stockholm University and
the Royal Institute of Technology, Sweden.
Check out http://www.gromacs.org for more information.

GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.

GROMACS: gmx mdrun, version 2016.5
Executable: /home/yzhou/xtal/gromacs/bin/gmx_mpi
Data prefix: /home/yzhou/xtal/gromacs
Working dir: /home/yzhou/data/simulation/plumed/belfast-8/second
Command line:
gmx_mpi mdrun -s topol -plumed plumed -multi 2 -replex 20


Back Off! I just backed up md1.log to ./#md1.log.18#

Back Off! I just backed up md0.log to ./#md0.log.18#

Running on 1 node with total 10 cores, 10 logical cores, 1 compatible GPU
Hardware detected on host localhost.localdomain (the node of MPI rank 0):
CPU info:
Vendor: Intel
Brand: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
SIMD instructions most likely to fit this hardware: AVX2_256
SIMD instructions selected at GROMACS compile time: AVX2_256

Hardware topology: Basic
GPU info:
Number of GPUs detected: 1
#0: NVIDIA GeForce GTX 1080 Ti, compute cap.: 6.1, ECC: no, stat: compatible

Reading file topol1.tpr, VERSION 4.6.5 (single precision)
Reading file topol0.tpr, VERSION 4.6.5 (single precision)
Note: file tpx version 83, software tpx version 110
Note: file tpx version 83, software tpx version 110
This is simulation 1 out of 2 running as a composite GROMACS
multi-simulation job. Setup for this simulation:

Using 1 MPI process
Using 5 OpenMP threads

This is simulation 0 out of 2 running as a composite GROMACS
multi-simulation job. Setup for this simulation:

Using 1 MPI process
Using 5 OpenMP threads

1 compatible GPU is present, with ID 0
1 GPU auto-selected for this run.
Mapping of GPU ID to the 2 PP ranks in this node: 0,0

1 compatible GPU is present, with ID 0
1 GPU auto-selected for this run.
Mapping of GPU ID to the 2 PP ranks in this node: 0,0


Non-default thread affinity set probably by the OpenMP library,
disabling internal thread affinity

Non-default thread affinity set probably by the OpenMP library,
disabling internal thread affinity

-------------------------------------------------------
Program: gmx mdrun, version 2016.5
Source file: src/programs/mdrun/runner.cpp (line 677)
Function: double (* gmx::my_integrator(unsigned int))(FILE*, t_commrec*, int, const t_filenm*, const gmx_output_env_t*, gmx_bool, int, gmx_vsite_t*, gmx_constr_t, int, t_inputrec*, gmx_mtop_t*, t_fcdata*, t_state*, t_mdatoms*, t_nrnb*, gmx_wallcycle_t, gmx_edsam_t, t_forcerec*, int, int, int, gmx_membed_t*, real, real, int, long unsigned int, gmx_walltime_accounting_t)
MPI rank: 1 (out of 2)

Feature not implemented:
SD2 integrator has been removed

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

-------------------------------------------------------
Program: gmx mdrun, version 2016.5
Source file: src/programs/mdrun/runner.cpp (line 677)
Function: double (* gmx::my_integrator(unsigned int))(FILE*, t_commrec*, int, const t_filenm*, const gmx_output_env_t*, gmx_bool, int, gmx_vsite_t*, gmx_constr_t, int, t_inputrec*, gmx_mtop_t*, t_fcdata*, t_state*, t_mdatoms*, t_nrnb*, gmx_wallcycle_t, gmx_edsam_t, t_forcerec*, int, int, int, gmx_membed_t*, real, real, int, long unsigned int, gmx_walltime_accounting_t)
MPI rank: 0 (out of 2)

Feature not implemented:
SD2 integrator has been removed

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

Would you please tell me what is wrong with the process and how to solve the problem?
Thanks a lot.

Best regards

Zhou Yong
--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-***@gromacs.org.
Justin Lemkul
2018-12-02 16:36:54 UTC
Post by yong zhou
Would you please tell me what is wrong with the process and how to solve the problem?
The error message is quite clear:

Feature not implemented:
SD2 integrator has been removed

Tutorials do not always keep up with development, so you cannot necessarily run any tutorial you find with any version of GROMACS. Either use the GROMACS version the tutorial suggests, or switch to an integrator that your chosen version still supports (see the online manual for the available options).
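Concretely, the offending setting lives in the .mdp files used to build topol0.tpr and topol1.tpr. A minimal sketch of the fix, assuming those files request the removed "sd2" integrator (the file name md.mdp and the grompp inputs below are illustrative, not from the tutorial): switch to the supported "sd" (Langevin dynamics) integrator and regenerate each .tpr with the grompp from the same 2016.5 installation.

```shell
# Illustrative only: create a stand-in .mdp that requests the removed integrator.
printf 'integrator               = sd2\nnsteps                   = 50000\n' > md.mdp

# Rewrite sd2 -> sd, preserving the key = value layout of the line.
sed -i 's/^\(integrator[[:space:]]*=[[:space:]]*\)sd2/\1sd/' md.mdp

grep '^integrator' md.mdp
# Then regenerate each replica's run input (arguments per your own setup):
#   gmx_mpi grompp -f md.mdp -c conf0.gro -p topol.top -o topol0.tpr
#   gmx_mpi grompp -f md.mdp -c conf1.gro -p topol.top -o topol1.tpr
```

Rebuilding the .tpr files also removes the "file tpx version 83, software tpx version 110" notes, since the inputs were originally made with GROMACS 4.6.5.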

-Justin
--
==================================================

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

***@vt.edu | (540) 231-3129
http://www.thelemkullab.com

==================================================