gcc/libgomp/testsuite/libgomp.oacc-c-c++-common/acc_get_property-nvptx.c
Jakub Jelinek 79e3f7d54b libgomp: Add openacc_{cuda,cublas,cudart} effective targets and use them in openacc testsuite
When gcc is configured for nvptx offloading with --without-cuda-driver
and full CUDA isn't installed, many libgomp.oacc-*/* tests fail,
some of them because the cuda.h header can't be found, others because
the tests can't be linked against -lcuda, -lcudart or -lcublas.
I usually only have akmod-nvidia and xorg-x11-drv-nvidia-cuda rpms
installed, so libcuda.so.1 can be dlopened and the offloading works,
but linking against those libraries isn't possible, nor are the headers
available (for the plugin itself there is the fallback
libgomp/plugin/cuda/cuda.h).

The following patch adds three new effective targets and uses them in the
tests that need them.  A hypothetical example of the resulting test pattern
is sketched below.
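
For illustration only, a minimal hypothetical test (not part of the patch)
that both includes cuda.h and links against -lcuda would follow the pattern
the patch applies to the existing tests; the cuInit call is just an arbitrary
example of code that requires the CUDA driver API:

/* Hypothetical example, not one of the patched tests.  */
/* { dg-do run { target openacc_nvidia_accel_selected } } */
/* { dg-require-effective-target openacc_cuda } */
/* { dg-additional-options "-lcuda" } */

#include <cuda.h>

int
main ()
{
  /* Any use of the CUDA driver API will do; cuInit is just an example.  */
  if (cuInit (0) != CUDA_SUCCESS)
    return 1;
  return 0;
}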

2021-05-27  Jakub Jelinek  <jakub@redhat.com>

	* testsuite/lib/libgomp.exp (check_effective_target_openacc_cuda,
	check_effective_target_openacc_cublas,
	check_effective_target_openacc_cudart): New.
	* testsuite/libgomp.oacc-fortran/host_data-4.f90: Require effective
	target openacc_cublas.
	* testsuite/libgomp.oacc-fortran/host_data-2.f90: Likewise.
	* testsuite/libgomp.oacc-fortran/host_data-3.f: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-91.c: Require effective
	target openacc_cuda.
	* testsuite/libgomp.oacc-c-c++-common/lib-70.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-90.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-75.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-69.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-74.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-81.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-72.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-85.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/pr87835.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-82.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-73.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-83.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-78.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-76.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-84.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/lib-79.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/host_data-1.c: Require effective
	targets openacc_cublas and openacc_cudart.
	* testsuite/libgomp.oacc-c-c++-common/context-1.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/context-2.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/context-3.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/context-4.c: Likewise.
	* testsuite/libgomp.oacc-c-c++-common/acc_get_property-nvptx.c:
	Require effective target openacc_cudart.
	* testsuite/libgomp.oacc-c-c++-common/asyncwait-1.c: Add -DUSE_CUDA_H
	for effective target openacc_cuda and add && defined USE_CUDA_H to
	preprocessor conditionals.  Guard -lcuda also on openacc_cuda
	effective target.
2021-05-27 22:44:36 +02:00

/* Test the `acc_get_property' and `acc_get_property_string' library
   functions on Nvidia devices by comparing property values with
   those obtained through the CUDA API.  */
/* { dg-additional-sources acc_get_property-aux.c } */
/* { dg-additional-options "-lcuda -lcudart" } */
/* { dg-do run { target openacc_nvidia_accel_selected } } */
/* { dg-require-effective-target openacc_cudart } */

#include <openacc.h>
#include <cuda.h>
#include <cuda_runtime_api.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h> /* For abort.  */

void expect_device_properties (acc_device_t dev_type, int dev_num,
                               size_t expected_memory,
                               const char* expected_vendor,
                               const char* expected_name,
                               const char* expected_driver);

int
main ()
{
  int dev_count;
  cudaGetDeviceCount (&dev_count);

  for (int dev_num = 0; dev_num < dev_count; ++dev_num)
    {
      if (cudaSetDevice (dev_num) != cudaSuccess)
        {
          fprintf (stderr, "cudaSetDevice failed.\n");
          abort ();
        }

      printf ("Checking device %d\n", dev_num);

      const char *vendor = "Nvidia";

      size_t free_mem;
      size_t total_mem;
      if (cudaMemGetInfo (&free_mem, &total_mem) != cudaSuccess)
        {
          fprintf (stderr, "cudaMemGetInfo failed.\n");
          abort ();
        }

      struct cudaDeviceProp p;
      if (cudaGetDeviceProperties (&p, dev_num) != cudaSuccess)
        {
          fprintf (stderr, "cudaGetDeviceProperties failed.\n");
          abort ();
        }

      int driver_version;
      if (cudaDriverGetVersion (&driver_version) != cudaSuccess)
        {
          fprintf (stderr, "cudaDriverGetVersion failed.\n");
          abort ();
        }

      /* The version string should contain the version of the CUDA Toolkit
         in the same MAJOR.MINOR format that is used by Nvidia.
         The format string below is the same that is used by the deviceQuery
         program, which belongs to Nvidia's CUDA samples, to print the
         version.  */
      char driver[30];
      snprintf (driver, sizeof driver, "CUDA Driver %u.%u",
                driver_version / 1000, driver_version % 1000 / 10);
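      /* For example, a driver_version value of 11020 formats as
         "CUDA Driver 11.2".  */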

      /* Note that this check relies on the fact that the device numbering
         used by the nvptx plugin agrees with the CUDA device ordering.  */
      expect_device_properties (acc_device_nvidia, dev_num,
                                total_mem, vendor, p.name, driver);
    }
}
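
The helper expect_device_properties is defined in the separately compiled
acc_get_property-aux.c, pulled in via dg-additional-sources and not shown
here.  The following is only a minimal sketch of what such a helper could
look like, assuming it compares each CUDA-derived value against the
corresponding acc_get_property / acc_get_property_string result and aborts
on mismatch; the actual auxiliary file in the testsuite may differ.

/* Hypothetical sketch only -- not the actual contents of
   acc_get_property-aux.c.  */
#include <openacc.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void
expect_device_properties (acc_device_t dev_type, int dev_num,
                          size_t expected_memory,
                          const char* expected_vendor,
                          const char* expected_name,
                          const char* expected_driver)
{
  size_t memory = acc_get_property (dev_num, dev_type, acc_property_memory);
  const char *vendor
    = acc_get_property_string (dev_num, dev_type, acc_property_vendor);
  const char *name
    = acc_get_property_string (dev_num, dev_type, acc_property_name);
  const char *driver
    = acc_get_property_string (dev_num, dev_type, acc_property_driver);

  if (memory != expected_memory
      || strcmp (vendor, expected_vendor) != 0
      || strcmp (name, expected_name) != 0
      || strcmp (driver, expected_driver) != 0)
    {
      fprintf (stderr, "Unexpected property value for device %d.\n", dev_num);
      abort ();
    }
}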