Discussion: [gmx-users] MPI GPU job failed
Albert
2016-08-10 14:03:33 UTC
Hello:

I am trying to submit a GROMACS job with the command line:

mpirun -np 2 gmx_mpi mdrun -s 61.tpr -v -g 61.log -c 61.gro -x 61.xtc
-ntomp 10 -gpu_id 01

However, it failed with the following messages:



Number of GPUs detected: 2
#0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat:
compatible
#1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat:
compatible

Reading file 61.tpr, VERSION 5.1.3 (single precision)
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 10 OpenMP threads

2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file:
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458

Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------



I used the option "mpirun -np 2", so I don't know why it claimed
"gmx_mpi was started with 1 PP MPI process per node".

Thanks a lot
Albert
2016-08-10 18:21:21 UTC
Does anybody have any idea?
Post by Albert
mpirun -np 2 gmx_mpi mdrun -s 61.tpr -v -g 61.log -c 61.gro -x 61.xtc
-ntomp 10 -gpu_id 01
Number of GPUs detected: 2
compatible
compatible
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 10 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
I used the option "mpirun -np 2", so I don't know why it claimed
"gmx_mpi was started with 1 PP MPI process per node".
Thanks a lot
Nikhil Maroli
2016-08-11 04:50:41 UTC
gmx mdrun -nt X -v -deffnm XXX -gpu_id XYZ

What about this?

Assign a sufficient number of threads.
Albert
2016-08-11 08:33:48 UTC
Hello:

I tried to run the command:


gmx_mpi mdrun -nt 2 -v -s 62.tpr -gpu_id 01

but it failed with the following messages:

-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file:
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp,
line: 625

Fatal error:
Setting the total number of threads is only supported with thread-MPI
and GROMACS was compiled without thread-MPI
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
-------------------------------------------------------
Post by Nikhil Maroli
gmx mdrun -nt X -v -deffnm XXX -gpu_id XYZ
What about this?
Assign sufficient number of threads
j***@mrc-lmb.cam.ac.uk
2016-08-11 11:18:20 UTC
The problem is that you compiled GROMACS with MPI (hence the _mpi suffix in
your command). You therefore need to set the number of MPI processes
rather than threads. The appropriate command would instead be the
following:

mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 01

Alternatively you could compile a different GROMACS version without MPI.
This should have thread-MPI and OpenMP by default if you leave out
-DGMX_MPI=ON from the cmake command.
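
For clarity, here is a minimal sketch of the two launch styles; the file
names and thread counts are just the ones used in this thread, so adjust
them to your own hardware:

# MPI-enabled build (binary suffixed _mpi): the rank count comes from mpirun
mpirun -np 2 gmx_mpi mdrun -ntomp 10 -v -s 62.tpr -gpu_id 01

# thread-MPI build (plain gmx binary): the rank count comes from -ntmpi
gmx mdrun -ntmpi 2 -ntomp 10 -v -s 62.tpr -gpu_id 01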

Best wishes
James
Post by Albert
gmx_mpi mdrun -nt 2 -v -s 62.tpr -gpu_id 01
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/programs/mdrun/resource-division.cpp,
line: 625
Setting the total number of threads is only supported with thread-MPI
and GROMACS was compiled without thread-MPI
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
-------------------------------------------------------
Post by Nikhil Maroli
gmx mdrun -nt X -v -deffnm XXX -gpu_id XYZ
What about this?
Assign sufficient number of threads
Albert
2016-08-11 13:08:13 UTC
Hi, I used your suggested command line, but it failed with the following
messages:


-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file:
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458

Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

Halting program gmx mdrun
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Using 1 MPI process
Using 20 OpenMP threads

2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1


-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file:
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458

Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Post by j***@mrc-lmb.cam.ac.uk
The problem is you compiled gromacs with mpi (hence the default _mpi in
your command). You therefore need to set the number of mpi processes
rather than threads. The appropriate command would instead be the
mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 01
Alternatively you could compile a different gromacs version without mpi.
This should have thread-mpi and openmp by default if you leave out
-DGMX_MPI=ON from the cmake command.
Best wishes
James
j***@mrc-lmb.cam.ac.uk
2016-08-11 13:30:03 UTC
I'd suggest installing another GROMACS version without MPI, then. I imagine
your system doesn't have enough CPU nodes to support it, as you asked
for 2 and got 1. You could try the following first, though:

mpirun -np 2 gmx_mpi mdrun -ntomp 10 -v -s 62.tpr -gpu_id 01

That way, rather than having 1 MPI process with 20 OpenMP threads, you would
have 2 MPI processes with 10 OpenMP threads each, if it works. I'm not sure
whether it will, though.
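
If you do go the non-MPI (thread-MPI) route, a minimal configure sketch could
look like the following; the install prefix is a placeholder, and thread-MPI
plus OpenMP are enabled by default once -DGMX_MPI=ON is omitted:

cmake .. -DGMX_GPU=ON \
         -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \
         -DCMAKE_INSTALL_PREFIX=/soft/gromacs/5.1.3_tmpi   # placeholder prefix
make -j 10 && make install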
Post by Albert
Hi, I used your suggested command line, but it failed with the following
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Halting program gmx mdrun
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Using 1 MPI process
Using 20 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Post by j***@mrc-lmb.cam.ac.uk
The problem is you compiled gromacs with mpi (hence the default _mpi in
your command). You therefore need to set the number of mpi processes
rather than threads. The appropriate command would instead be the
mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 01
Alternatively you could compile a different gromacs version without mpi.
This should have thread-mpi and openmp by default if you leave out
-DGMX_MPI=ON from the cmake command.
Best wishes
James
Justin Lemkul
2016-08-11 13:33:00 UTC
Post by Albert
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes and GPUs
per node.
So you're trying to run on two nodes, each of which has one GPU? I haven't done
such a run, but perhaps mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 0 would
do the trick, by finding the first GPU on each node?

-Justin
Post by Albert
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Halting program gmx mdrun
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Using 1 MPI process
Using 20 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes and GPUs
per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Post by j***@mrc-lmb.cam.ac.uk
The problem is you compiled gromacs with mpi (hence the default _mpi in
your command). You therefore need to set the number of mpi processes
rather than threads. The appropriate command would instead be the
mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 01
Alternatively you could compile a different gromacs version without mpi.
This should have thread-mpi and openmp by default if you leave out
-DGMX_MPI=ON from the cmake command.
Best wishes
James
--
==================================================

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

***@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==================================================
Albert
2016-08-11 13:37:24 UTC
Here is what I got for the command:
mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 0

It seems that it still used 1 GPU instead of 2, and I don't understand why.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Running on 1 node with total 10 cores, 20 logical cores, 2 compatible GPUs
Hardware detected on host cudaB (the node of MPI rank 0):
CPU info:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
SIMD instructions most likely to fit this hardware: AVX_256
SIMD instructions selected at GROMACS compile time: AVX_256
GPU info:
Number of GPUs detected: 2
#0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat:
compatible
#1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat:
compatible

Reading file 62.tpr, VERSION 5.1.3 (single precision)
Reading file 62.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 20 OpenMP threads

1 GPU user-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0

Using 1 MPI process
Using 20 OpenMP threads

1 GPU user-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0
---------------------------------------------------------------------------------------------------------------------------------------------------------------------



Here is what I got for the command:

mpirun -np 2 gmx_mpi mdrun -ntomp 10 -v -s 62.tpr -gpu_id 01


It still failed:

-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
Source code file:
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458

Fatal error:
Incorrect launch configuration: mismatching number of PP MPI processes
and GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

Halting program gmx mdrun
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Using 1 MPI process
Using 10 OpenMP threads

2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1


-------------------------------------------------------
Post by Justin Lemkul
So you're trying to run on two nodes, each of which has one GPU? I
haven't done such a run, but perhaps mpirun -np 2 gmx_mpi mdrun -v -s
62.tpr -gpu_id 0 would do the trick, by finding the first GPU on each
node?
-Justin
Justin Lemkul
2016-08-11 13:39:27 UTC
Post by j***@mrc-lmb.cam.ac.uk
mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 0
It seems that it still used 1 GPU instead of 2. I don't understand why.....
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Running on 1 node with total 10 cores, 20 logical cores, 2 compatible GPUs
Then this is inconsistent with my first question in the last reply. You have
two GPUs on a single physical node. For this, you should not need an external
mpirun.

gmx mdrun -ntmpi 2 -v -s 62.tpr -gpu_id 01

-Justin
Post by j***@mrc-lmb.cam.ac.uk
Vendor: GenuineIntel
SIMD instructions most likely to fit this hardware: AVX_256
SIMD instructions selected at GROMACS compile time: AVX_256
Number of GPUs detected: 2
#0: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
#1: NVIDIA GeForce GTX 780 Ti, compute cap.: 3.5, ECC: no, stat: compatible
Reading file 62.tpr, VERSION 5.1.3 (single precision)
Reading file 62.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 20 OpenMP threads
1 GPU user-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0
Using 1 MPI process
Using 20 OpenMP threads
1 GPU user-selected for this run.
Mapping of GPU ID to the 1 PP rank in this node: 0
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
mpirun -np 2 gmx_mpi mdrun -ntomp 10 -v -s 62.tpr -gpu_id 01
It still failed:
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes and GPUs
per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
Halting program gmx mdrun
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Using 1 MPI process
Using 10 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Post by Justin Lemkul
So you're trying to run on two nodes, each of which has one GPU? I haven't
done such a run, but perhaps mpirun -np 2 gmx_mpi mdrun -v -s 62.tpr -gpu_id 0
would do the trick, by finding the first GPU on each node?
-Justin
--
==================================================

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

***@outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==================================================
Szilárd Páll
2016-08-11 14:13:13 UTC
Using a non-MPI launch command won't be useful in starting an
MPI-enabled build, so that's not correct.

Additionally, please use _reply_ to answer emails to avoid breaking threads.

--
Szilárd
Post by Nikhil Maroli
gmx mdrun -nt X -v -deffnm XXX -gpu_id XYZ
What about this?
Assign sufficient number of threads
Szilárd Páll
2016-08-11 14:05:33 UTC
mpirun -np 2 gmx_mpi mdrun -s 61.tpr -v -g 61.log -c 61.gro -x 61.xtc -ntomp
10 -gpu_id 01
Number of GPUs detected: 2
compatible
compatible
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 10 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes and
GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
I used the option "mpirun -np 2", so I don't know why it claimed
"gmx_mpi was started with 1 PP MPI process per node".
Most likely because your MPI is not configured correctly.
Thanks a lot
Szilárd Páll
2016-08-11 14:06:12 UTC
PS: Or your GROMACS installation uses _mpi suffixes, but it is
actually not building with MPI enabled.
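
One quick way to check this (a sketch; the exact wording of the version output
can differ between GROMACS versions) is to look at the MPI-related line the
binary reports about its own build:

gmx_mpi -version 2>&1 | grep -i "MPI library"
# a genuine MPI build should report "MPI library: MPI";
# "thread_mpi" or "none" means the binary was not built against an external MPI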
--
Szilárd
Post by Szilárd Páll
mpirun -np 2 gmx_mpi mdrun -s 61.tpr -v -g 61.log -c 61.gro -x 61.xtc -ntomp
10 -gpu_id 01
Number of GPUs detected: 2
compatible
compatible
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 10 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes and
GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
I used the option "mpirun -np 2", so I don't know why it claimed
"gmx_mpi was started with 1 PP MPI process per node".
Most likely because your MPI is not configured correctly.
Thanks a lot
Albert
2016-08-11 14:16:17 UTC
I see. I will try to compile everything from scratch to see what's
happening....

thx a lot
Post by Szilárd Páll
PS: Or your GROMACS installation uses _mpi suffixes, but it is
actually not building with MPI enabled.
--
Szilárd
Szilárd Páll
2016-08-11 14:18:23 UTC
PPS: given the double output (e.g. "Reading file 61.tpr, ...") it's
even more likely that you're using a non-MPI build.

BTW, looks like you had the same issue about two years ago:
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-September/092046.html

--
Szilárd
Post by Szilárd Páll
PS: Or your GROMACS installation uses _mpi suffixes, but it is
actually not building with MPI enabled.
--
Szilárd
Post by Szilárd Páll
mpirun -np 2 gmx_mpi mdrun -s 61.tpr -v -g 61.log -c 61.gro -x 61.xtc -ntomp
10 -gpu_id 01
Number of GPUs detected: 2
compatible
compatible
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Reading file 61.tpr, VERSION 5.1.3 (single precision)
Using 1 MPI process
Using 10 OpenMP threads
2 GPUs user-selected for this run.
Mapping of GPU IDs to the 1 PP rank in this node: 0,1
-------------------------------------------------------
Program gmx mdrun, VERSION 5.1.3
/home/albert/Downloads/gromacs/gromacs-5.1.3/src/gromacs/gmxlib/gmx_detect_hardware.cpp,
line: 458
Incorrect launch configuration: mismatching number of PP MPI processes and
GPUs per node.
gmx_mpi was started with 1 PP MPI process per node, but you provided 2 GPUs.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
I used the option "mpirun -np 2", so I don't know why it claimed
"gmx_mpi was started with 1 PP MPI process per node".
Most likely because your MPI is not configured correctly.
Thanks a lot
Albert
2016-08-11 14:22:04 UTC
well, here is the command line I used for compiling:


env CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90
CMAKE_PREFIX_PATH=/soft/gromacs/fftw-3.3.4:/soft/intel/impi/5.1.3.223
cmake .. -DBUILD_SHARED_LIB=OFF -DBUILD_TESTING=OFF
-DCMAKE_INSTALL_PREFIX=/soft/gromacs/5.1.3_intel -DGMX_MPI=ON
-DGMX_GPU=ON -DGMX_PREFER_STATIC_LIBS=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda

here is my cshrc:

source /soft/intel/bin/compilervars.csh intel64
source /soft/intel/impi/5.1.3.223/bin64/mpivars.csh
set path=(/soft/intel/impi/5.1.3.223/intel64/bin $path)
setenv CUDA_HOME /usr/local/cuda
setenv MKL_HOME /soft/intel/mkl/
setenv LD_LIBRARY_PATH
/soft/intel/compilers_and_libraries_2016.3.223/linux/mpi/lib64:/usr/local/cuda/lib64:/soft/intel/lib/intel64:/soft/intel/lib/ia32:/soft/intel/mkl/lib/intel64:/soft/intel/mkl/lib/ia32:{$LD_LIBRARY_PATH}


It should build with MPI support with the above settings. Does it?
Post by Szilárd Páll
PPS: given the double output (e.g. "Reading file 61.tpr, ...") it's
even more likely that you're using a non-MPI build.
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-September/092046.html
Mark Abraham
2016-08-11 15:42:45 UTC
Hi,

Configuration of MPI also happens when mpirun acts. You need to have set
things up so that those two ranks are assigned to hardware the way you
want. Your output looks like there are two processes, but they aren't
being organised by mpirun to talk to each other.
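
A couple of quick sanity checks (a sketch; adapt the paths to your
environment) can show whether the mpirun being used and gmx_mpi actually come
from the same MPI installation:

# does this mpirun start two processes at all?
mpirun -np 2 hostname

# is gmx_mpi linked against the MPI library that belongs to this mpirun?
which mpirun
ldd $(which gmx_mpi) | grep -i mpi

If each mdrun rank still reports "Using 1 MPI process", the two processes are
landing in separate MPI_COMM_WORLDs, which usually points to a mismatch
between the MPI used at build time and the mpirun used at launch time.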

Mark
Post by Albert
env CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90
CMAKE_PREFIX_PATH=/soft/gromacs/fftw-3.3.4:/soft/intel/impi/5.1.3.223
cmake .. -DBUILD_SHARED_LIB=OFF -DBUILD_TESTING=OFF
-DCMAKE_INSTALL_PREFIX=/soft/gromacs/5.1.3_intel -DGMX_MPI=ON
-DGMX_GPU=ON -DGMX_PREFER_STATIC_LIBS=ON
-DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
source /soft/intel/bin/compilervars.csh intel64
source /soft/intel/impi/5.1.3.223/bin64/mpivars.csh
set path=(/soft/intel/impi/5.1.3.223/intel64/bin $path)
setenv CUDA_HOME /usr/local/cuda
setenv MKL_HOME /soft/intel/mkl/
setenv LD_LIBRARY_PATH
/soft/intel/compilers_and_libraries_2016.3.223/linux/mpi/lib64:/usr/local/cuda/lib64:/soft/intel/lib/intel64:/soft/intel/lib/ia32:/soft/intel/mkl/lib/intel64:/soft/intel/mkl/lib/ia32:{$LD_LIBRARY_PATH}
It should build with MPI support with above settings. Does it?
Post by Szilárd Páll
PPS: given the double output (e.g. "Reading file 61.tpr, ...") it's
even more likely that you're using a non-MPI build.
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-September/092046.html
Szilárd Páll
2016-08-11 15:55:44 UTC
Post by Albert
env CC=mpicc CXX=mpicxx F77=mpif90 FC=mpif90 LDF90=mpif90
CMAKE_PREFIX_PATH=/soft/gromacs/fftw-3.3.4:/soft/intel/impi/5.1.3.223 cmake
.. -DBUILD_SHARED_LIB=OFF -DBUILD_TESTING=OFF
-DCMAKE_INSTALL_PREFIX=/soft/gromacs/5.1.3_intel -DGMX_MPI=ON -DGMX_GPU=ON
-DGMX_PREFER_STATIC_LIBS=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
source /soft/intel/bin/compilervars.csh intel64
source /soft/intel/impi/5.1.3.223/bin64/mpivars.csh
set path=(/soft/intel/impi/5.1.3.223/intel64/bin $path)
setenv CUDA_HOME /usr/local/cuda
setenv MKL_HOME /soft/intel/mkl/
setenv LD_LIBRARY_PATH
/soft/intel/compilers_and_libraries_2016.3.223/linux/mpi/lib64:/usr/local/cuda/lib64:/soft/intel/lib/intel64:/soft/intel/lib/ia32:/soft/intel/mkl/lib/intel64:/soft/intel/mkl/lib/ia32:{$LD_LIBRARY_PATH}
It should build with MPI support with above settings. Does it?
It should. You can always verify it in the header of the log file.
It's always useful to post full logs here.
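
For example (a sketch; substitute whatever log name was passed with -g, or
the default md.log), the relevant lines can be pulled straight out of the log:

grep -i -m1 "MPI library" 61.log       # build configuration, printed in the log header
grep "Using .* MPI process" 61.log     # how many ranks mdrun actually saw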
Post by Albert
Post by Szilárd Páll
PPS: given the double output (e.g. "Reading file 61.tpr, ...") it's
even more likely that you're using a non-MPI build.
https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2014-September/092046.html
Albert
2016-08-11 18:38:52 UTC
I just found that I had compiled the PLUMED plugin with a different MPI and
then patched GROMACS with it.

Now I recompiled everything from scratch, and it finally works.
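
For anyone who hits the same problem, a rough sketch of keeping the two builds
consistent (version numbers, prefixes and paths below are placeholders) is to
build PLUMED with the same MPI compiler wrappers as GROMACS and only then
patch and configure:

# build PLUMED with the same MPI wrappers later used for GROMACS
./configure CXX=mpicxx --prefix=/soft/plumed   # placeholder prefix
make -j 10 && make install

# patch the GROMACS source tree, then configure GROMACS with the same wrappers
cd gromacs-5.1.3 && plumed patch -p
mkdir build && cd build
CC=mpicc CXX=mpicxx cmake .. -DGMX_MPI=ON -DGMX_GPU=ON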

thx a lot
Post by Szilárd Páll
It should. You can always verify it in the header of the log file.
It's always useful to post full logs here.