howto.html: Update commentary.

* docs/html/ext/howto.html: Update commentary.
	* include/bits/c++config: Update threading configuration comment.
	(__STL_GTHREADS): Remove macro definition.
	(__STL_THREADS): Likewise.
	* include/bits/stl_threads.h: Leave only the configuration
	path which had been guarded by __STL_GTHREADS.  Remove all
	guards related to __STL_GTHREADS, __STL_SGI_THREADS,
	__STL_PTHREADS, __STL_UITHREADS and __STL_WIN32THREADS.
	* include/bits/stl_alloc.h: Leave only the configuration path
	which had been guarded by __STL_THREADS.  Remove configuration
	path and guards for __STL_SGI_THREADS.
	(__NODE_ALLOCATOR_THREADS): Remove macro definition.  Unroll its use.
	(__NODE_ALLOCATOR_LOCK): Likewise.
	(__NODE_ALLOCATOR_UNLOCK): Likewise.
	(_NOTHREADS): Remove guards related to macro.
	* include/ext/stl_rope.h: Remove configuration path and guards
	for __STL_SGI_THREADS.
	* src/stl-inst.cc: Remove use of __NODE_ALLOCATOR_THREADS.

From-SVN: r47557
Commit bd8fd826dd (parent 13f08f0368)
Author: Loren J. Rittle, 2001-12-03 19:11:01 +00:00 (committed by Loren J. Rittle)
7 changed files with 42 additions and 410 deletions

ChangeLog

@@ -1,3 +1,24 @@
+2001-12-03  Loren J. Rittle  <ljrittle@acm.org>
+
+	* docs/html/ext/howto.html: Update commentary.
+	* include/bits/c++config: Update threading configuration comment.
+	(__STL_GTHREADS): Remove macro definition.
+	(__STL_THREADS): Likewise.
+	* include/bits/stl_threads.h: Leave only the configuration
+	path which had been guarded by __STL_GTHREADS.  Remove all
+	guards related to __STL_GTHREADS, __STL_SGI_THREADS,
+	__STL_PTHREADS, __STL_UITHREADS and __STL_WIN32THREADS.
+	* include/bits/stl_alloc.h: Leave only the configuration path
+	which had been guarded by __STL_THREADS.  Remove configuration
+	path and guards for __STL_SGI_THREADS.
+	(__NODE_ALLOCATOR_THREADS): Remove macro definition.  Unroll its use.
+	(__NODE_ALLOCATOR_LOCK): Likewise.
+	(__NODE_ALLOCATOR_UNLOCK): Likewise.
+	(_NOTHREADS): Remove guards related to macro.
+	* include/ext/stl_rope.h: Remove configuration path and guards
+	for __STL_SGI_THREADS.
+	* src/stl-inst.cc: Remove use of __NODE_ALLOCATOR_THREADS.
+
2001-12-02 Phil Edwards <pme@gcc.gnu.org>
* docs/html/ext/howto.html: Update list of implemented DRs.

docs/html/ext/howto.html

@@ -344,7 +344,8 @@
than you would depend on implementation-only names.
</p>
<p>Certain macros like <code>_NOTHREADS</code> and <code>__STL_THREADS</code>
-can affect the 3.0.x allocators. Do not use them.
+can affect the 3.0.x allocators. Do not use them. Those macros have
+been completely removed for 3.1.
</p>
<p>More notes as we remember them...
</p>

View File

@@ -55,16 +55,12 @@
// Use corrected code from the committee library group's issues list.
#define _GLIBCPP_RESOLVE_LIB_DEFECTS 1
-// Map gthr.h abstraction to that required for STL. Do not key off of
-// __GTHREADS at this point since we haven't seen the correct symbol
-// yet, instead setup so that include/bits/stl_threads.h will know to
-// include gthr.h instead of any other type of thread support. Note:
-// that gthr.h may well map to gthr-single.h which is a correct way to
-// express no threads support in gcc. As a user, do not define
-// _NOTHREADS without consideration of the consequences (e.g. it is an
-// internal ABI change).
-#define __STL_GTHREADS
-#define __STL_THREADS
+// In those parts of the standard C++ library that use a mutex instead
+// of a spin-lock, we now unconditionally use GCC's gthr.h mutex
+// abstraction layer. All support to directly map to various
+// threading models has been removed. Note: gthr.h may well map to
+// gthr-single.h which is a correct way to express no threads support
+// in gcc. Support for the undocumented _NOTHREADS has been removed.
// Default to the typically high-speed, pool-based allocator (as
// libstdc++-v2) instead of the malloc-based allocator (libstdc++-v3

include/bits/stl_alloc.h

@@ -73,39 +73,7 @@
#include <bits/std_cstdlib.h>
#include <bits/std_cstring.h>
#include <bits/std_cassert.h>
// To see the effects of this block of macro wrangling, jump to
// "Default node allocator" below.
#ifdef __STL_THREADS
#include <bits/stl_threads.h>
# define __NODE_ALLOCATOR_THREADS true
# ifdef __STL_SGI_THREADS
// We test whether threads are in use before locking.
// Perhaps this should be moved into stl_threads.h, but that
// probably makes it harder to avoid the procedure call when
// it isn't needed.
extern "C" {
extern int __us_rsthread_malloc;
}
// The above is copied from malloc.h. Including <malloc.h>
// would be cleaner but fails with certain levels of standard
// conformance.
# define __NODE_ALLOCATOR_LOCK if (__threads && __us_rsthread_malloc) \
{ _S_node_allocator_lock._M_acquire_lock(); }
# define __NODE_ALLOCATOR_UNLOCK if (__threads && __us_rsthread_malloc) \
{ _S_node_allocator_lock._M_release_lock(); }
# else /* !__STL_SGI_THREADS */
# define __NODE_ALLOCATOR_LOCK \
{ if (__threads) _S_node_allocator_lock._M_acquire_lock(); }
# define __NODE_ALLOCATOR_UNLOCK \
{ if (__threads) _S_node_allocator_lock._M_release_lock(); }
# endif
#else
// Thread-unsafe
# define __NODE_ALLOCATOR_LOCK
# define __NODE_ALLOCATOR_UNLOCK
# define __NODE_ALLOCATOR_THREADS false
#endif
namespace std
{
@@ -364,9 +332,7 @@ private:
static char* _S_end_free;
static size_t _S_heap_size;
#ifdef __STL_THREADS
static _STL_mutex_lock _S_node_allocator_lock;
#endif
// It would be nice to use _STL_auto_lock here. But we
// don't need the NULL check. And we do need a test whether
@@ -375,8 +341,8 @@ private:
friend class _Lock;
class _Lock {
public:
-_Lock() { __NODE_ALLOCATOR_LOCK; }
-~_Lock() { __NODE_ALLOCATOR_UNLOCK; }
+_Lock() { if (__threads) _S_node_allocator_lock._M_acquire_lock(); }
+~_Lock() { if (__threads) _S_node_allocator_lock._M_release_lock(); }
};
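The unrolled _Lock class above is a scope-bound (RAII) guard: the constructor conditionally acquires the allocator lock and the destructor releases it, including during stack unwinding. A minimal sketch of the same pattern, using std::mutex as a stand-in for _STL_mutex_lock (the class and member names here are illustrative, not from the patch):

```cpp
#include <cassert>
#include <mutex>

// Scope-bound guard mirroring the unrolled _Lock class: acquire in the
// constructor (only when threading is on), release in the destructor.
// std::mutex stands in for _STL_mutex_lock; names are illustrative.
class ConditionalLock {
public:
    ConditionalLock(std::mutex& m, bool threads) : m_(m), locked_(threads) {
        if (locked_) m_.lock();    // mirrors 'if (__threads) ..._M_acquire_lock()'
    }
    ~ConditionalLock() {
        if (locked_) m_.unlock();  // runs on normal exit and during unwinding
    }
private:
    std::mutex& m_;
    bool locked_;
};
```

The point of the guard is exactly the comment in the diff: the lock is "released in exit or during stack unwinding" without any explicit unlock at each return path.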
public:
@@ -394,10 +360,7 @@ public:
// Acquire the lock here with a constructor call.
// This ensures that it is released in exit or during stack
// unwinding.
# ifndef _NOTHREADS
/*REFERENCED*/
_Lock __lock_instance;
# endif
_Obj* __restrict__ __result = *__my_free_list;
if (__result == 0)
__ret = _S_refill(_S_round_up(__n));
@@ -423,10 +386,7 @@ public:
_Obj* __q = (_Obj*)__p;
// acquire lock
# ifndef _NOTHREADS
/*REFERENCED*/
_Lock __lock_instance;
# endif /* _NOTHREADS */
__q -> _M_free_list_link = *__my_free_list;
*__my_free_list = __q;
// lock is released here
@@ -582,13 +542,10 @@ __default_alloc_template<threads, inst>::reallocate(void* __p,
return(__result);
}
#ifdef __STL_THREADS
template <bool __threads, int __inst>
_STL_mutex_lock
__default_alloc_template<__threads, __inst>::_S_node_allocator_lock
__STL_MUTEX_INITIALIZER;
#endif
template <bool __threads, int __inst>
char* __default_alloc_template<__threads, __inst>::_S_start_free = 0;
@@ -606,8 +563,7 @@ __default_alloc_template<__threads, __inst> ::_S_free_list[
] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
-// __NODE_ALLOCATOR_THREADS is predicated on __STL_THREADS being defined or not
-typedef __default_alloc_template<__NODE_ALLOCATOR_THREADS, 0> alloc;
+typedef __default_alloc_template<true, 0> alloc;
typedef __default_alloc_template<false, 0> single_client_alloc;
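With __NODE_ALLOCATOR_THREADS gone, thread safety is selected directly by the boolean template argument, so the untaken branch compiles away in the single-client instantiation. A hedged sketch of that dispatch (PoolAlloc and its members are invented for illustration; std::mutex stands in for _STL_mutex_lock):

```cpp
#include <cassert>
#include <mutex>

// Compile-time thread-safety selection in the style of
// __default_alloc_template<bool threads, int inst>: the boolean template
// argument decides whether the shared lock is taken at all.
template <bool Threads>
struct PoolAlloc {
    static std::mutex lock;   // shared state guard, used only when Threads
    static long count;        // stands in for the shared free-list state

    static void note_allocation() {
        if (Threads) {        // constant condition: the dead branch folds away
            std::lock_guard<std::mutex> g(lock);
            ++count;
        } else {
            ++count;          // single-client path, no locking overhead
        }
    }
};
template <bool Threads> std::mutex PoolAlloc<Threads>::lock;
template <bool Threads> long PoolAlloc<Threads>::count = 0;

// Mirrors the 'alloc' / 'single_client_alloc' typedef pair above.
typedef PoolAlloc<true>  shared_alloc_sketch;
typedef PoolAlloc<false> single_client_sketch;
```

Because the two instantiations are distinct types with distinct statics, the single-client variant pays no synchronization cost at all, which is the same trade the `alloc`/`single_client_alloc` pair expresses.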

include/bits/stl_threads.h

@@ -48,34 +48,8 @@
#ifndef __SGI_STL_INTERNAL_THREADS_H
#define __SGI_STL_INTERNAL_THREADS_H
// Supported threading models are native SGI, pthreads, uithreads
// (similar to pthreads, but based on an earlier draft of the Posix
// threads standard), and Win32 threads. Uithread support by Jochen
// Schlick, 1999.
// GCC extension begin
// In order to present a stable threading configuration, in all cases,
// gcc looks for its own abstraction layer before all others. All
// modifications to this file are marked to allow easier importation of
// STL upgrades.
#if defined(__STL_GTHREADS)
// The only supported threading model is GCC's own gthr.h abstraction layer.
#include "bits/gthr.h"
#else
// GCC extension end
#if defined(__STL_SGI_THREADS)
#include <mutex.h>
#include <time.h>
#elif defined(__STL_PTHREADS)
#include <pthread.h>
#elif defined(__STL_UITHREADS)
#include <thread.h>
#include <synch.h>
#elif defined(__STL_WIN32THREADS)
#include <windows.h>
#endif
// GCC extension begin
#endif
// GCC extension end
namespace std
{
@@ -84,29 +58,15 @@ namespace std
// _M_ref_count, and member functions _M_incr and _M_decr, which perform
// atomic preincrement/predecrement. The constructor initializes
// _M_ref_count.
// Hack for SGI o32 compilers.
#if defined(__STL_SGI_THREADS) && !defined(__add_and_fetch) && \
(__mips < 3 || !(defined (_ABIN32) || defined(_ABI64)))
# define __add_and_fetch(__l,__v) add_then_test((unsigned long*)__l,__v)
# define __test_and_set(__l,__v) test_and_set(__l,__v)
#endif /* o32 */
struct _Refcount_Base
{
// The type _RC_t
# ifdef __STL_WIN32THREADS
typedef long _RC_t;
# else
typedef size_t _RC_t;
#endif
// The data member _M_ref_count
volatile _RC_t _M_ref_count;
// Constructor
// GCC extension begin
#ifdef __STL_GTHREADS
__gthread_mutex_t _M_ref_count_lock;
_Refcount_Base(_RC_t __n) : _M_ref_count(__n)
{
@@ -119,25 +79,7 @@ struct _Refcount_Base
#error __GTHREAD_MUTEX_INIT or __GTHREAD_MUTEX_INIT_FUNCTION should be defined by gthr.h abstraction layer, report problem to libstdc++@gcc.gnu.org.
#endif
}
#else
// GCC extension end
# ifdef __STL_PTHREADS
pthread_mutex_t _M_ref_count_lock;
_Refcount_Base(_RC_t __n) : _M_ref_count(__n)
{ pthread_mutex_init(&_M_ref_count_lock, 0); }
# elif defined(__STL_UITHREADS)
mutex_t _M_ref_count_lock;
_Refcount_Base(_RC_t __n) : _M_ref_count(__n)
{ mutex_init(&_M_ref_count_lock, USYNC_THREAD, 0); }
# else
_Refcount_Base(_RC_t __n) : _M_ref_count(__n) {}
# endif
// GCC extension begin
#endif
// GCC extension end
// GCC extension begin
#ifdef __STL_GTHREADS
void _M_incr() {
__gthread_mutex_lock(&_M_ref_count_lock);
++_M_ref_count;
@@ -149,59 +91,14 @@ struct _Refcount_Base
__gthread_mutex_unlock(&_M_ref_count_lock);
return __tmp;
}
#else
// GCC extension end
// _M_incr and _M_decr
# ifdef __STL_SGI_THREADS
void _M_incr() { __add_and_fetch(&_M_ref_count, 1); }
_RC_t _M_decr() { return __add_and_fetch(&_M_ref_count, (size_t) -1); }
# elif defined (__STL_WIN32THREADS)
void _M_incr() { InterlockedIncrement((_RC_t*)&_M_ref_count); }
_RC_t _M_decr() { return InterlockedDecrement((_RC_t*)&_M_ref_count); }
# elif defined(__STL_PTHREADS)
void _M_incr() {
pthread_mutex_lock(&_M_ref_count_lock);
++_M_ref_count;
pthread_mutex_unlock(&_M_ref_count_lock);
}
_RC_t _M_decr() {
pthread_mutex_lock(&_M_ref_count_lock);
volatile _RC_t __tmp = --_M_ref_count;
pthread_mutex_unlock(&_M_ref_count_lock);
return __tmp;
}
# elif defined(__STL_UITHREADS)
void _M_incr() {
mutex_lock(&_M_ref_count_lock);
++_M_ref_count;
mutex_unlock(&_M_ref_count_lock);
}
_RC_t _M_decr() {
mutex_lock(&_M_ref_count_lock);
/*volatile*/ _RC_t __tmp = --_M_ref_count;
mutex_unlock(&_M_ref_count_lock);
return __tmp;
}
# else /* No threads */
void _M_incr() { ++_M_ref_count; }
_RC_t _M_decr() { return --_M_ref_count; }
# endif
// GCC extension begin
#endif
// GCC extension end
};
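_Refcount_Base in the retained gthr.h path pairs the counter with a mutex and has _M_decr return the post-decrement value while still holding the lock, so the caller can detect the final release without a racy second read. A rough equivalent with std::mutex in place of __gthread_mutex_t (names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

// Mutex-guarded reference count in the shape of _Refcount_Base's
// gthr.h path: incr/decr lock around the update, and decr returns the
// new value so a caller can detect the drop to zero.
struct RefcountSketch {
    typedef std::size_t RC_t;
    RC_t ref_count;              // the original declared this volatile
    std::mutex ref_count_lock;

    explicit RefcountSketch(RC_t n) : ref_count(n) {}

    void incr() {
        std::lock_guard<std::mutex> g(ref_count_lock);
        ++ref_count;
    }
    RC_t decr() {
        std::lock_guard<std::mutex> g(ref_count_lock);
        return --ref_count;      // new value, read while holding the lock
    }
};
```

Returning the decremented value from inside the critical section is the essential detail: exactly one caller observes zero, so exactly one caller runs the destruction path.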
// Atomic swap on unsigned long
// This is guaranteed to behave as though it were atomic only if all
// possibly concurrent updates use _Atomic_swap.
// In some cases the operation is emulated with a lock.
// GCC extension begin
-#if defined (__STL_GTHREADS) && defined (__GTHREAD_MUTEX_INIT)
-// We don't provide an _Atomic_swap in this configuration. This only
-// affects the use of ext/rope with threads. Someone could add this
-// later, if required. You can start by cloning the __STL_PTHREADS
-// path while making the obvious changes. Later it could be optimized
-// to use the atomicity.h abstraction layer from libstdc++-v3.
+#if defined (__GTHREAD_MUTEX_INIT)
+// This could be optimized to use the atomicity.h abstraction layer.
// vyzo: simple _Atomic_swap implementation following the guidelines above
// We use a template here only to get a unique initialized instance.
template<int __dummy>
@@ -223,100 +120,7 @@ struct _Refcount_Base
__gthread_mutex_unlock(&_Swap_lock_struct<0>::_S_swap_lock);
return __result;
}
#else
// GCC extension end
# ifdef __STL_SGI_THREADS
inline unsigned long _Atomic_swap(unsigned long * __p, unsigned long __q) {
# if __mips < 3 || !(defined (_ABIN32) || defined(_ABI64))
return test_and_set(__p, __q);
# else
return __test_and_set(__p, (unsigned long)__q);
#endif
}
# elif defined(__STL_WIN32THREADS)
inline unsigned long _Atomic_swap(unsigned long * __p, unsigned long __q) {
return (unsigned long) InterlockedExchange((LPLONG)__p, (LONG)__q);
}
# elif defined(__STL_PTHREADS)
// We use a template here only to get a unique initialized instance.
template<int __dummy>
struct _Swap_lock_struct {
static pthread_mutex_t _S_swap_lock;
};
template<int __dummy>
pthread_mutex_t
_Swap_lock_struct<__dummy>::_S_swap_lock = PTHREAD_MUTEX_INITIALIZER;
// This should be portable, but performance is expected
// to be quite awful. This really needs platform specific
// code.
inline unsigned long _Atomic_swap(unsigned long * __p, unsigned long __q) {
pthread_mutex_lock(&_Swap_lock_struct<0>::_S_swap_lock);
unsigned long __result = *__p;
*__p = __q;
pthread_mutex_unlock(&_Swap_lock_struct<0>::_S_swap_lock);
return __result;
}
# elif defined(__STL_UITHREADS)
// We use a template here only to get a unique initialized instance.
template<int __dummy>
struct _Swap_lock_struct {
static mutex_t _S_swap_lock;
};
template<int __dummy>
mutex_t
_Swap_lock_struct<__dummy>::_S_swap_lock = DEFAULTMUTEX;
// This should be portable, but performance is expected
// to be quite awful. This really needs platform specific
// code.
inline unsigned long _Atomic_swap(unsigned long * __p, unsigned long __q) {
mutex_lock(&_Swap_lock_struct<0>::_S_swap_lock);
unsigned long __result = *__p;
*__p = __q;
mutex_unlock(&_Swap_lock_struct<0>::_S_swap_lock);
return __result;
}
# elif defined (__STL_SOLARIS_THREADS)
// any better solutions ?
// We use a template here only to get a unique initialized instance.
template<int __dummy>
struct _Swap_lock_struct {
static mutex_t _S_swap_lock;
};
# if ( __STL_STATIC_TEMPLATE_DATA > 0 )
template<int __dummy>
mutex_t
_Swap_lock_struct<__dummy>::_S_swap_lock = DEFAULTMUTEX;
# else
__DECLARE_INSTANCE(mutex_t, _Swap_lock_struct<__dummy>::_S_swap_lock,
=DEFAULTMUTEX);
# endif /* ( __STL_STATIC_TEMPLATE_DATA > 0 ) */
// This should be portable, but performance is expected
// to be quite awful. This really needs platform specific
// code.
inline unsigned long _Atomic_swap(unsigned long * __p, unsigned long __q) {
mutex_lock(&_Swap_lock_struct<0>::_S_swap_lock);
unsigned long __result = *__p;
*__p = __q;
mutex_unlock(&_Swap_lock_struct<0>::_S_swap_lock);
return __result;
}
# else
static inline unsigned long _Atomic_swap(unsigned long * __p, unsigned long __q) {
unsigned long __result = *__p;
*__p = __q;
return __result;
}
# endif
// GCC extension begin
#endif
// GCC extension end
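All of the _Atomic_swap variants above share one shape: serialize a read-modify-write of an unsigned long and return the old value. A compact sketch of the lock-emulated form, with std::mutex playing the _Swap_lock_struct role (on current C++ the whole function collapses to std::atomic<unsigned long>::exchange):

```cpp
#include <cassert>
#include <mutex>

// Lock-emulated atomic swap in the shape of the _Atomic_swap variants:
// one shared mutex serializes the read-modify-write, and the old value
// is handed back to the caller.
inline unsigned long atomic_swap_sketch(unsigned long* p, unsigned long q) {
    static std::mutex swap_lock;               // shared across all callers
    std::lock_guard<std::mutex> g(swap_lock);
    unsigned long result = *p;                 // old value to return
    *p = q;                                    // install the new value
    return result;
}
```

As the original comments warn, this is only atomic with respect to other updates that go through the same swap function, and performance is expected to be poor next to a real hardware exchange.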
// Locking class. Note that this class *does not have a constructor*.
// It must be initialized either statically, with __STL_MUTEX_INITIALIZER,
@@ -330,25 +134,6 @@ struct _Refcount_Base
// constructors, no base classes, no virtual functions, and no private or
// protected members.
// Helper struct. This is a workaround for various compilers that don't
// handle static variables in inline functions properly.
template <int __inst>
struct _STL_mutex_spin {
enum { __low_max = 30, __high_max = 1000 };
// Low if we suspect uniprocessor, high for multiprocessor.
static unsigned __max;
static unsigned __last;
};
template <int __inst>
unsigned _STL_mutex_spin<__inst>::__max = _STL_mutex_spin<__inst>::__low_max;
template <int __inst>
unsigned _STL_mutex_spin<__inst>::__last = 0;
// GCC extension begin
#if defined(__STL_GTHREADS)
#if !defined(__GTHREAD_MUTEX_INIT) && defined(__GTHREAD_MUTEX_INIT_FUNCTION)
extern __gthread_mutex_t _GLIBCPP_mutex;
extern __gthread_mutex_t *_GLIBCPP_mutex_address;
@@ -356,13 +141,9 @@ extern __gthread_once_t _GLIBCPP_once;
extern void _GLIBCPP_mutex_init (void);
extern void _GLIBCPP_mutex_address_init (void);
#endif
#endif
// GCC extension end
struct _STL_mutex_lock
{
// GCC extension begin
#if defined(__STL_GTHREADS)
// The class must be statically initialized with __STL_MUTEX_INITIALIZER.
#if !defined(__GTHREAD_MUTEX_INIT) && defined(__GTHREAD_MUTEX_INIT_FUNCTION)
volatile int _M_init_flag;
@@ -403,108 +184,8 @@ struct _STL_mutex_lock
#endif
__gthread_mutex_unlock(&_M_lock);
}
#else
// GCC extension end
#if defined(__STL_SGI_THREADS) || defined(__STL_WIN32THREADS)
// It should be relatively easy to get this to work on any modern Unix.
volatile unsigned long _M_lock;
void _M_initialize() { _M_lock = 0; }
static void _S_nsec_sleep(int __log_nsec) {
# ifdef __STL_SGI_THREADS
struct timespec __ts;
/* Max sleep is 2**27nsec ~ 60msec */
__ts.tv_sec = 0;
__ts.tv_nsec = 1L << __log_nsec;
nanosleep(&__ts, 0);
# elif defined(__STL_WIN32THREADS)
if (__log_nsec <= 20) {
Sleep(0);
} else {
Sleep(1 << (__log_nsec - 20));
}
# else
# error unimplemented
# endif
}
void _M_acquire_lock() {
volatile unsigned long* __lock = &this->_M_lock;
if (!_Atomic_swap((unsigned long*)__lock, 1)) {
return;
}
unsigned __my_spin_max = _STL_mutex_spin<0>::__max;
unsigned __my_last_spins = _STL_mutex_spin<0>::__last;
volatile unsigned __junk = 17; // Value doesn't matter.
unsigned __i;
for (__i = 0; __i < __my_spin_max; __i++) {
if (__i < __my_last_spins/2 || *__lock) {
__junk *= __junk; __junk *= __junk;
__junk *= __junk; __junk *= __junk;
continue;
}
if (!_Atomic_swap((unsigned long*)__lock, 1)) {
// got it!
// Spinning worked. Thus we're probably not being scheduled
// against the other process with which we were contending.
// Thus it makes sense to spin longer the next time.
_STL_mutex_spin<0>::__last = __i;
_STL_mutex_spin<0>::__max = _STL_mutex_spin<0>::__high_max;
return;
}
}
// We are probably being scheduled against the other process. Sleep.
_STL_mutex_spin<0>::__max = _STL_mutex_spin<0>::__low_max;
for (__i = 0 ;; ++__i) {
int __log_nsec = __i + 6;
if (__log_nsec > 27) __log_nsec = 27;
if (!_Atomic_swap((unsigned long *)__lock, 1)) {
return;
}
_S_nsec_sleep(__log_nsec);
}
}
void _M_release_lock() {
volatile unsigned long* __lock = &_M_lock;
# if defined(__STL_SGI_THREADS) && defined(__GNUC__) && __mips >= 3
asm("sync");
*__lock = 0;
# elif defined(__STL_SGI_THREADS) && __mips >= 3 \
&& (defined (_ABIN32) || defined(_ABI64))
__lock_release(__lock);
# else
*__lock = 0;
// This is not sufficient on many multiprocessors, since
// writes to protected variables and the lock may be reordered.
# endif
}
// We no longer use win32 critical sections.
// They appear to be slower in the contention-free case,
// and they appear difficult to initialize without introducing a race.
#elif defined(__STL_PTHREADS)
pthread_mutex_t _M_lock;
void _M_initialize() { pthread_mutex_init(&_M_lock, NULL); }
void _M_acquire_lock() { pthread_mutex_lock(&_M_lock); }
void _M_release_lock() { pthread_mutex_unlock(&_M_lock); }
#elif defined(__STL_UITHREADS)
mutex_t _M_lock;
void _M_initialize() { mutex_init(&_M_lock, USYNC_THREAD, 0); }
void _M_acquire_lock() { mutex_lock(&_M_lock); }
void _M_release_lock() { mutex_unlock(&_M_lock); }
#else /* No threads */
void _M_initialize() {}
void _M_acquire_lock() {}
void _M_release_lock() {}
#endif
// GCC extension begin
#endif
// GCC extension end
};
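The removed SGI/Win32 path implemented _M_acquire_lock as spin-then-sleep: try a bounded busy-wait on a test-and-set, and if that fails, assume the holder has been scheduled out and sleep with an exponentially growing interval capped near 2^27 ns (as in _S_nsec_sleep). A simplified sketch with std::atomic_flag in place of _Atomic_swap; the adaptive __max/__last spin tuning is left out:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Spin-then-sleep lock in the shape of the removed _M_acquire_lock:
// busy-wait briefly on a test-and-set, then back off by sleeping with
// an exponentially growing interval.
class SpinSleepLock {
public:
    void acquire() {
        // Bounded spin: cheap when the lock is free or released quickly.
        for (int i = 0; i < 128; ++i)
            if (!flag_.test_and_set(std::memory_order_acquire))
                return;
        // Likely scheduled against the holder: sleep between retries.
        long ns = 64;
        for (;;) {
            if (!flag_.test_and_set(std::memory_order_acquire))
                return;
            std::this_thread::sleep_for(std::chrono::nanoseconds(ns));
            if (ns < (1L << 27)) ns *= 2;   // cap, as in _S_nsec_sleep
        }
    }
    void release() { flag_.clear(std::memory_order_release); }
private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};
```

The acquire/release memory orders stand in for the original's explicit `asm("sync")` concern: without a barrier on release, writes to the protected data could be reordered past the unlock, which is exactly the caveat the deleted comment raises.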
// GCC extension begin
#if defined(__STL_GTHREADS)
#ifdef __GTHREAD_MUTEX_INIT
#define __STL_MUTEX_INITIALIZER = { __GTHREAD_MUTEX_INIT }
#elif defined(__GTHREAD_MUTEX_INIT_FUNCTION)
@@ -515,25 +196,6 @@ struct _STL_mutex_lock
#define __STL_MUTEX_INITIALIZER = { 0, __GTHREAD_ONCE_INIT }
#endif
#endif
#else
// GCC extension end
#ifdef __STL_PTHREADS
// Pthreads locks must be statically initialized to something other than
// the default value of zero.
# define __STL_MUTEX_INITIALIZER = { PTHREAD_MUTEX_INITIALIZER }
#elif defined(__STL_UITHREADS)
// UIthreads locks must be statically initialized to something other than
// the default value of zero.
# define __STL_MUTEX_INITIALIZER = { DEFAULTMUTEX }
#elif defined(__STL_SGI_THREADS) || defined(__STL_WIN32THREADS)
# define __STL_MUTEX_INITIALIZER = { 0 }
#else
# define __STL_MUTEX_INITIALIZER
#endif
// GCC extension begin
#endif
// GCC extension end
// A locking class that uses _STL_mutex_lock. The constructor takes a
// reference to an _STL_mutex_lock, and acquires a lock. The
@@ -560,4 +222,3 @@ private:
// Local Variables:
// mode:C++
// End:

include/ext/stl_rope.h

@@ -60,9 +60,6 @@
# include <bits/stl_threads.h>
# define __GC_CONST // constant except for deallocation
# endif
# ifdef __STL_SGI_THREADS
# include <mutex.h>
# endif
namespace std
{

src/stl-inst.cc

@@ -42,7 +42,7 @@ namespace std
template class __malloc_alloc_template<0>;
#ifndef __USE_MALLOC
-template class __default_alloc_template<__NODE_ALLOCATOR_THREADS, 0>;
+template class __default_alloc_template<true, 0>;
#endif
template