/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (minimum)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit. This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with a lock. By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck.  It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions.  If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (See http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces. Normally, this requires use of locks.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
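
For example (an illustrative sketch; MY_MMAP_THRESHOLD is a
hypothetical name, not one defined in this file):
    #define MY_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
is safer than a bare literal such as 256*1024.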

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator.  You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc().  If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT       default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern  __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually,
  as described below.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
  MS compilers.  Otherwise, posix locks or win32 critical sections are
  used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK            default: not defined
  If defined nonzero, performs pthread_atfork upon initialization
  to initialize child lock while holding parent lock. The implementation
  assumes that pthread locks (not custom locks) are being used. In other
  cases, you may need to customize the implementation.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
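
  For example, with USE_DL_PREFIX defined, the public entry points
  become dlmalloc, dlfree, etc., and can coexist with the system
  versions (an illustrative sketch; each pointer must be released by
  the allocator that produced it):
    void* p = dlmalloc(100);   // from this file
    void* q = malloc(100);     // from the system library
    dlfree(p);
    free(q);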

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
  perform traversal of all heap space.  Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to
  return memory because none is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
  If true, take advantage of fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves the time
  and possibly wasted space it would otherwise take to discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL       default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of segments that are contiguous, and selectively
  releasing them to the OS if unused, but bounds execution times.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS            default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H  default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater), try to release unused
  segments to the OS when freeing chunks that result in
  consolidation. The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory. To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

#include <ddk.h>
#include <mutex.h>
#include <syscall.h>

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
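
/*
  Because realloc does not free p on failure, a common safe calling
  pattern is the following sketch (not specific to this allocator):

    void* q = realloc(p, n);
    if (q == 0) {
      // p is still valid and still owned by the caller
    }
    else {
      p = q;
    }
*/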
/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

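  As an illustration, memalign(64, 200) returns 200 usable bytes at an
  address that is a multiple of 64, and memalign(50, 200) behaves as
  if the alignment were 64, the nearest greater power of two.
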
  Overreliance on memalign is a sure way to fragment space.
*/


#define DEBUG   1


#define assert(x)

#define MAX_SIZE_T           (~(size_t)0)

#define CALL_DIRECT_MMAP(s) MMAP_DEFAULT(s)

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE             (sizeof(size_t))
#define SIZE_T_BITSIZE          (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO             ((size_t)0)
#define SIZE_T_ONE              ((size_t)1)
#define SIZE_T_TWO              ((size_t)2)
#define SIZE_T_FOUR             ((size_t)4)
#define TWO_SIZE_T_SIZES        (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES       (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES        (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T         (MAX_SIZE_T / 2U)

#define USE_LOCK_BIT            (2U)
#define USE_MMAP_BIT            (SIZE_T_ONE)
#define USE_NONCONTIGUOUS_BIT   (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT              (8U)

#define HAVE_MMAP               1
#define HAVE_MORECORE           0
#define MORECORE_CANNOT_TRIM    1
#define CALL_MMAP(s)            MMAP_DEFAULT(s)
#define CALL_MUNMAP(a, s)       MUNMAP_DEFAULT((a), (s))
#define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
#define MAX_RELEASE_CHECK_RATE  4095
#define NO_SEGMENT_TRAVERSAL    1
#define MALLOC_ALIGNMENT        ((size_t)8U)
#define CHUNK_OVERHEAD          (SIZE_T_SIZE)
#define DEFAULT_GRANULARITY     ((size_t)128U * (size_t)1024U)
#define DEFAULT_MMAP_THRESHOLD  ((size_t)512U * (size_t)1024U)
#define DEFAULT_TRIM_THRESHOLD  ((size_t)1024U * (size_t)1024U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
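
/* Example: with MALLOC_ALIGNMENT == 8, an address whose low bits are
   0x3 gets align_offset == (8 - 3) & 7 == 5, moving it up to the next
   8-byte boundary; an already-aligned address gets offset 0. */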

/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so compiler optimizer can delete code rather than
   using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#define should_trim(M,s)  (0)



/* --------------------------- Lock preliminaries ------------------------ */

/*
  When locks are defined, there is one global lock, plus
  one per-mspace lock.

  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once. It also protects
  sequences of calls to MORECORE.  In many cases sys_alloc requires
  two calls, that should not be interleaved with calls by other
  threads.  This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope the best we can with interference.

  Per-mspace locks surround calls to malloc, free, etc.
  By default, locks are simple non-reentrant mutexes.

  Because lock-protected regions generally have bounded times, it is
  OK to use the supplied simple spinlocks. Spinlocks are likely to
  improve performance for lightly contended applications, but worsen
  performance under heavy contention.

  If USE_LOCKS is > 1, the definitions of lock routines here are
  bypassed, in which case you will need to define the type MLOCK_T,
  and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
  and TRY_LOCK.  You must also declare a
    static MLOCK_T malloc_global_mutex = { initialization values };
*/
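
/*
  A minimal sketch of that USE_LOCKS > 1 contract (illustrative only;
  the my_lock_* names and MY_LOCK_INITIALIZER are hypothetical
  placeholders for a platform's own primitives):

    #define MLOCK_T              struct my_lock
    #define INITIAL_LOCK(lk)     my_lock_init(lk)
    #define DESTROY_LOCK(lk)     my_lock_destroy(lk)
    #define ACQUIRE_LOCK(lk)     my_lock_acquire(lk)
    #define RELEASE_LOCK(lk)     my_lock_release(lk)
    #define TRY_LOCK(lk)         my_lock_try(lk)
    static MLOCK_T malloc_global_mutex = MY_LOCK_INITIALIZER;
*/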

static DEFINE_MUTEX(malloc_global_mutex);

#define ACQUIRE_MALLOC_GLOBAL_LOCK()  MutexLock(&malloc_global_mutex);
#define RELEASE_MALLOC_GLOBAL_LOCK()  MutexUnlock(&malloc_global_mutex);


/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 0)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse (unless the chunk is mmapped). This redundancy enables usage
  checks within free and realloc, and reduces indirection when freeing
  and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap have both cinuse and pinuse bits
        cleared in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
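
/* Example: user pointers sit TWO_SIZE_T_SIZES (8 bytes on 32-bit
   builds) above the chunk base, so the head word lies immediately
   below the pointer returned to the user; mem2chunk() inverts
   chunk2mem() exactly. */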

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
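
/* Example: on a 32-bit build without FOOTERS (SIZE_T_SIZE == 4,
   CHUNK_OVERHEAD == 4, MIN_CHUNK_SIZE == 16), request2size(1) yields
   the minimum chunk size 16, and request2size(20) pads to
   (20 + 4 + 7) & ~7 == 24. */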


/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use, unless mmapped, in which case both bits are cleared.

  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define FLAG4_BIT           (SIZE_T_FOUR)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define flag4inuse(p)       ((p)->head & FLAG4_BIT)
#define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
#define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)

#define chunksize(p)        ((p)->head & ~(FLAG_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define set_flag4(p)        ((p)->head |= FLAG4_BIT)
#define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)
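
/* Example: an in-use 24-byte chunk whose predecessor is also in use
   has head == 24|CINUSE_BIT|PINUSE_BIT == 27; chunksize() masks off
   FLAG_BITS to recover 24, and is_inuse() and pinuse() test the
   status bits. */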

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
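
/* Example: set_size_and_pinuse_of_free_chunk(p, 24) stores
   24|PINUSE_BIT into p->head and writes 24 into the prev_foot field
   of the chunk 24 bytes above p, which serves as p's boundary-tag
   foot. */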
  963.  
  964. /* Get the internal overhead associated with chunk p */
  965. #define overhead_for(p)\
  966.  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
  967.  
  968. /* Return true if malloced space is not necessarily cleared */
  969. #if MMAP_CLEARS
  970. #define calloc_must_clear(p) (!is_mmapped(p))
  971. #else /* MMAP_CLEARS */
  972. #define calloc_must_clear(p) (1)
  973. #endif /* MMAP_CLEARS */
  974.  
  975. /* ---------------------- Overlaid data structures ----------------------- */
  976.  
  977. /*
  978.   When chunks are not in use, they are treated as nodes of either
  979.   lists or trees.
  980.  
  981.   "Small"  chunks are stored in circular doubly-linked lists, and look
  982.   like this:
  983.  
  984.     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  985.             |             Size of previous chunk                            |
  986.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  987.     `head:' |             Size of chunk, in bytes                         |P|
  988.       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  989.             |             Forward pointer to next chunk in list             |
  990.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  991.             |             Back pointer to previous chunk in list            |
  992.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  993.             |             Unused space (may be 0 bytes long)                .
  994.             .                                                               .
  995.             .                                                               |
  996. nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  997.     `foot:' |             Size of chunk, in bytes                           |
  998.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  999.  
  1000.   Larger chunks are kept in a form of bitwise digital trees (aka
  1001.   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  1002.   free chunks greater than 256 bytes, their size doesn't impose any
  1003.   constraints on user chunk sizes.  Each node looks like:
  1004.  
  1005.     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1006.             |             Size of previous chunk                            |
  1007.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1008.     `head:' |             Size of chunk, in bytes                         |P|
  1009.       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1010.             |             Forward pointer to next chunk of same size        |
  1011.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1012.             |             Back pointer to previous chunk of same size       |
  1013.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1014.             |             Pointer to left child (child[0])                  |
  1015.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1016.             |             Pointer to right child (child[1])                 |
  1017.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1018.             |             Pointer to parent                                 |
  1019.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1020.             |             bin index of this chunk                           |
  1021.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1022.             |             Unused space                                      .
  1023.             .                                                               |
  1024. nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1025.     `foot:' |             Size of chunk, in bytes                           |
  1026.             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  1027.  
  1028.   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  1029.   of the same size are arranged in a circularly-linked list, with only
  1030.   the oldest chunk (the next to be used, in our FIFO ordering)
  1031.   actually in the tree.  (Tree members are distinguished by a non-null
  1032.   parent pointer.)  If a chunk with the same size as an existing node
  1033.   is inserted, it is linked off the existing node using pointers that
  1034.   work in the same way as fd/bk pointers of small chunks.
  1035.  
  1036.   Each tree contains a power of 2 sized range of chunk sizes (the
  1037.   smallest is 0x100 <= x < 0x180), which is divided in half at each
  1038.   tree level, with the chunks in the smaller half of the range (0x100
  1039.   <= x < 0x140 for the top node) in the left subtree and the larger
  1040.   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  1041.   done by inspecting individual bits.
  1042.  
  1043.   Using these rules, each node's left subtree contains all smaller
  1044.   sizes than its right subtree.  However, the node at the root of each
  1045.   subtree has no particular ordering relationship to either.  (The
  1046.   dividing line between the subtree sizes is based on trie relation.)
  1047.   If we remove the last chunk of a given size from the interior of the
  1048.   tree, we need to replace it with a leaf node.  The tree ordering
  1049.   rules permit a node to be replaced by any leaf below it.
  1050.  
  1051.   The smallest chunk in a tree (a common operation in a best-fit
  1052.   allocator) can be found by walking a path to the leftmost leaf in
  1053.   the tree.  Unlike a usual binary tree, where we follow left child
  1054.   pointers until we reach a null, here we follow the right child
  1055.   pointer any time the left one is null, until we reach a leaf with
  1056.   both child pointers null. The smallest chunk in the tree will be
  1057.   somewhere along that path.
  1058.  
  1059.   The worst case number of steps to add, find, or remove a node is
  1060.   bounded by the number of bits differentiating chunks within
  1061.   bins. Under current bin calculations, this ranges from 6 up to 21
  1062.   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  1063.   is of course much better.
  1064. */
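/*
  Illustrative sketch (not compiled here): the boundary-tag navigation
  that the layouts above enable.  For a free chunk p, the size stored
  at `foot' appears as prev_foot of the following chunk, so neighbors
  can be reached in both directions.  The function name is made up.
*/
#if 0
static void example_boundary_tags(mchunkptr p) { /* p: a free chunk */
  mchunkptr n = next_chunk(p);           /* (char*)p + chunksize(p) */
  assert(n->prev_foot == chunksize(p));  /* foot mirrors the size   */
  assert(prev_chunk(n) == p);            /* neighbor can walk back  */
}
#endif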
  1065.  
  1066. struct malloc_tree_chunk {
  1067.   /* The first four fields must be compatible with malloc_chunk */
  1068.   size_t                    prev_foot;
  1069.   size_t                    head;
  1070.   struct malloc_tree_chunk* fd;
  1071.   struct malloc_tree_chunk* bk;
  1072.  
  1073.   struct malloc_tree_chunk* child[2];
  1074.   struct malloc_tree_chunk* parent;
  1075.   bindex_t                  index;
  1076. };
  1077.  
  1078. typedef struct malloc_tree_chunk  tchunk;
  1079. typedef struct malloc_tree_chunk* tchunkptr;
  1080. typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
  1081.  
  1082. /* A little helper macro for trees */
  1083. #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
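/*
  Illustrative sketch (not compiled here) of the leftmost-path walk
  described above: follow child[0], falling back to child[1] whenever
  the left pointer is null, keeping the smallest chunk seen along the
  path.  The function name is made up.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr v = t;                  /* best (smallest) seen so far */
  tchunkptr u = leftmost_child(t);
  while (u != 0) {
    if (chunksize(u) < chunksize(v))
      v = u;
    u = leftmost_child(u);
  }
  return v;
}
#endif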
  1084.  
  1085. /* ----------------------------- Segments -------------------------------- */
  1086.  
  1087. /*
  1088.   Each malloc space may include non-contiguous segments, held in a
  1089.   list headed by an embedded malloc_segment record representing the
  1090.   top-most space. Segments also include flags holding properties of
  1091.   the space. Large chunks that are directly allocated by mmap are not
  1092.   included in this list. They are instead independently created and
  1093.   destroyed without otherwise keeping track of them.
  1094.  
  1095.   Segment management mainly comes into play for spaces allocated by
  1096.   MMAP.  Any call to MMAP might or might not return memory that is
  1097.   adjacent to an existing segment.  MORECORE normally contiguously
  1098.   extends the current space, so this space is almost always adjacent,
  1099.   which is simpler and faster to deal with. (This is why MORECORE is
  1100.   used preferentially to MMAP when both are available -- see
  1101.   sys_alloc.)  When allocating using MMAP, we don't use any of the
  1102.   hinting mechanisms (inconsistently) supported in various
  1103.   implementations of unix mmap, or distinguish reserving from
  1104.   committing memory. Instead, we just ask for space, and exploit
  1105.   contiguity when we get it.  It is probably possible to do
  1106.   better than this on some systems, but no general scheme seems
  1107.   to be significantly better.
  1108.  
  1109.   Management entails a simpler variant of the consolidation scheme
  1110.   used for chunks to reduce fragmentation -- new adjacent memory is
  1111.   normally prepended or appended to an existing segment. However,
  1112.   there are limitations compared to chunk consolidation that mostly
  1113.   reflect the fact that segment processing is relatively infrequent
  1114.   (occurring only when getting memory from system) and that we
  1115.   don't expect to have huge numbers of segments:
  1116.  
  1117.   * Segments are not indexed, so traversal requires linear scans.  (It
  1118.     would be possible to index these, but is not worth the extra
  1119.     overhead and complexity for most programs on most platforms.)
  1120.   * New segments are only appended to old ones when holding top-most
  1121.     memory; if they cannot be prepended to others, they are held in
  1122.     different segments.
  1123.  
  1124.   Except for the top-most segment of an mstate, each segment record
  1125.   is kept at the tail of its segment. Segments are added by pushing
  1126.   segment records onto the list headed by &mstate.seg for the
  1127.   containing mstate.
  1128.  
  1129.   Segment flags control allocation/merge/deallocation policies:
  1130.   * If EXTERN_BIT set, then we did not allocate this segment,
  1131.     and so should not try to deallocate or merge with others.
  1132.     (This currently holds only for the initial segment passed
  1133.     into create_mspace_with_base.)
  1134.   * If USE_MMAP_BIT set, the segment may be merged with
  1135.     other surrounding mmapped segments and trimmed/de-allocated
  1136.     using munmap.
  1137.   * If neither bit is set, then the segment was obtained using
  1138.     MORECORE so can be merged with surrounding MORECORE'd segments
  1139.     and deallocated/trimmed using MORECORE with negative arguments.
  1140. */
  1141.  
  1142. struct malloc_segment {
  1143.   char*        base;             /* base address */
  1144.   size_t       size;             /* allocated size */
  1145.   struct malloc_segment* next;   /* ptr to next segment */
  1146.   flag_t       sflags;           /* mmap and extern flag */
  1147. };
  1148.  
  1149. #define is_mmapped_segment(S)  ((S)->sflags & USE_MMAP_BIT)
  1150. #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
  1151.  
  1152. typedef struct malloc_segment  msegment;
  1153. typedef struct malloc_segment* msegmentptr;
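/*
  Illustrative sketch (not compiled here): because segments are not
  indexed, everything is done by linear scan over the list, as
  segment_holding and has_segment_link do below.  For example, the
  total size of all segments is just (function name made up):
*/
#if 0
static size_t example_total_segment_bytes(msegmentptr sp) {
  size_t n = 0;
  for (; sp != 0; sp = sp->next)
    n += sp->size;
  return n;
}
#endif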
  1154.  
  1155. /* ---------------------------- malloc_state ----------------------------- */
  1156.  
  1157. /*
  1158.    A malloc_state holds all of the bookkeeping for a space.
  1159.    The main fields are:
  1160.  
  1161.   Top
  1162.     The topmost chunk of the currently active segment. Its size is
  1163.     cached in topsize.  The actual size of topmost space is
  1164.     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
  1165.     fenceposts and segment records if necessary when getting more
  1166.     space from the system.  The size at which to autotrim top is
  1167.     cached from mparams in trim_check, except that it is disabled if
  1168.     an autotrim fails.
  1169.  
  1170.   Designated victim (dv)
  1171.     This is the preferred chunk for servicing small requests that
  1172.     don't have exact fits.  It is normally the chunk split off most
  1173.     recently to service another small request.  Its size is cached in
  1174.     dvsize. The link fields of this chunk are not maintained since it
  1175.     is not kept in a bin.
  1176.  
  1177.   SmallBins
  1178.     An array of bin headers for free chunks.  These bins hold chunks
  1179.     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
  1180.     chunks of all the same size, spaced 8 bytes apart.  To simplify
  1181.     use in double-linked lists, each bin header acts as a malloc_chunk
  1182.     pointing to the real first node, if it exists (else pointing to
  1183.     itself).  This avoids special-casing for headers.  But to avoid
  1184.     waste, we allocate only the fd/bk pointers of bins, and then use
  1185.     repositioning tricks to treat these as the fields of a chunk.
  1186.  
  1187.   TreeBins
  1188.     Treebins are pointers to the roots of trees holding a range of
  1189.     sizes. There are 2 equally spaced treebins for each power of two
  1190.     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
  1191.     larger.
  1192.  
  1193.   Bin maps
  1194.     There is one bit map for small bins ("smallmap") and one for
  1195.     treebins ("treemap").  Each bin sets its bit when non-empty, and
  1196.     clears the bit when empty.  Bit operations are then used to avoid
  1197.     bin-by-bin searching -- nearly all "search" is done without ever
  1198.     looking at bins that won't be selected.  The bit maps
  1199.     conservatively use 32 bits per map word, even on 64-bit systems.
  1200.     For a good description of some of the bit-based techniques used
  1201.     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
  1202.     supplement at http://hackersdelight.org/). Many of these are
  1203.     intended to reduce the branchiness of paths through malloc etc, as
  1204.     well as to reduce the number of memory locations read or written.
  1205.  
  1206.   Segments
  1207.     A list of segments headed by an embedded malloc_segment record
  1208.     representing the initial space.
  1209.  
  1210.   Address check support
  1211.     The least_addr field is the least address ever obtained from
  1212.     MORECORE or MMAP. Attempted frees and reallocs of any address less
  1213.     than this are trapped (unless INSECURE is defined).
  1214.  
  1215.   Magic tag
  1216.     A cross-check field that should always hold the same value as mparams.magic.
  1217.  
  1218.   Max allowed footprint
  1219.     The maximum allowed bytes to allocate from system (zero means no limit)
  1220.  
  1221.   Flags
  1222.     Bits recording whether to use MMAP, locks, or contiguous MORECORE
  1223.  
  1224.   Statistics
  1225.     Each space keeps track of current and maximum system memory
  1226.     obtained via MORECORE or MMAP.
  1227.  
  1228.   Trim support
  1229.     Fields holding the amount of unused topmost memory that should trigger
  1230.     trimming, and a counter to force periodic scanning to release unused
  1231.     non-topmost segments.
  1232.  
  1233.   Locking
  1234.     If USE_LOCKS is defined, the "mutex" lock is acquired and released
  1235.     around every public call using this mspace.
  1236.  
  1237.   Extension support
  1238.     A void* pointer and a size_t field that can be used to help implement
  1239.     extensions to this malloc.
  1240. */
  1241.  
  1242. /* Bin types, widths and sizes */
  1243. #define NSMALLBINS        (32U)
  1244. #define NTREEBINS         (32U)
  1245. #define SMALLBIN_SHIFT    (3U)
  1246. #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
  1247. #define TREEBIN_SHIFT     (8U)
  1248. #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
  1249. #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
  1250. #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
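/*
  Concretely: with SMALLBIN_SHIFT == 3 and TREEBIN_SHIFT == 8,
  smallbin i holds free chunks of exactly i*8 bytes (bins below
  MIN_CHUNK_SIZE are simply never used), MAX_SMALL_SIZE is 255, and
  any free chunk of MIN_LARGE_SIZE == 256 bytes or more is kept in a
  treebin instead.
*/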
  1251.  
  1252. struct malloc_state {
  1253.   binmap_t   smallmap;
  1254.   binmap_t   treemap;
  1255.   size_t     dvsize;
  1256.   size_t     topsize;
  1257.   char*      least_addr;
  1258.   mchunkptr  dv;
  1259.   mchunkptr  top;
  1260.   size_t     trim_check;
  1261.   size_t     release_checks;
  1262.   size_t     magic;
  1263.   mchunkptr  smallbins[(NSMALLBINS+1)*2];
  1264.   tbinptr    treebins[NTREEBINS];
  1265.   size_t     footprint;
  1266.   size_t     max_footprint;
  1267.   size_t     footprint_limit; /* zero means no limit */
  1268.   flag_t     mflags;
  1269.   struct mutex  lock;     /* locate lock among fields that rarely change */
  1270.   msegment   seg;
  1271.   void*      extp;      /* Unused but available for extensions */
  1272.   size_t     exts;
  1273. };
  1274.  
  1275. typedef struct malloc_state*    mstate;
  1276.  
  1277. /* ------------- Global malloc_state and malloc_params ------------------- */
  1278.  
  1279. /*
  1280.   malloc_params holds global properties, including those that can be
  1281.   dynamically set using mallopt. There is a single instance, mparams,
  1282.   initialized in init_mparams. Note that the non-zeroness of "magic"
  1283.   also serves as an initialization flag.
  1284. */
  1285.  
  1286. struct malloc_params {
  1287.   size_t magic;
  1288.   size_t page_size;
  1289.   size_t granularity;
  1290.   size_t mmap_threshold;
  1291.   size_t trim_threshold;
  1292.   flag_t default_mflags;
  1293. };
  1294.  
  1295. static struct malloc_params mparams;
  1296.  
  1297. /* Ensure mparams initialized */
  1298. #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
  1299.  
  1300. static struct malloc_state _gm_;
  1301. #define gm                 (&_gm_)
  1302. #define is_global(M)       ((M) == &_gm_)
  1303.  
  1304. #define is_initialized(M)  ((M)->top != 0)
  1305.  
  1306. /* -------------------------- system alloc setup ------------------------- */
  1307.  
  1308. /* Operations on mflags */
  1309.  
  1310. #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
  1311. #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
  1312. #if USE_LOCKS
  1313. #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
  1314. #else
  1315. #define disable_lock(M)
  1316. #endif
  1317.  
  1318. #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
  1319. #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
  1320. #if HAVE_MMAP
  1321. #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
  1322. #else
  1323. #define disable_mmap(M)
  1324. #endif
  1325.  
  1326. #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
  1327. #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
  1328.  
  1329. #define set_lock(M,L)\
  1330.  ((M)->mflags = (L)?\
  1331.   ((M)->mflags | USE_LOCK_BIT) :\
  1332.   ((M)->mflags & ~USE_LOCK_BIT))
  1333.  
  1334. /* page-align a size */
  1335. #define page_align(S)\
  1336.  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
  1337.  
  1338. /* granularity-align a size */
  1339. #define granularity_align(S)\
  1340.   (((S) + (mparams.granularity - SIZE_T_ONE))\
  1341.    & ~(mparams.granularity - SIZE_T_ONE))
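/*
  For example, with the 4096-byte page size set up in init_mparams:
  page_align(5000) == 8192 and page_align(4096) == 4096.
  granularity_align behaves the same way with mparams.granularity as
  the rounding unit.
*/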
  1342.  
  1343.  
  1344. /* For mmap, use granularity alignment on windows, else page-align */
  1345. #ifdef WIN32
  1346. #define mmap_align(S) granularity_align(S)
  1347. #else
  1348. #define mmap_align(S) page_align(S)
  1349. #endif
  1350.  
  1351. /* For sys_alloc, enough padding to ensure the request can be serviced on success */
  1352. #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
  1353.  
  1354. #define is_page_aligned(S)\
  1355.    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
  1356. #define is_granularity_aligned(S)\
  1357.    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
  1358.  
  1359. /*  True if segment S holds address A */
  1360. #define segment_holds(S, A)\
  1361.   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
  1362.  
  1363. /* Return segment holding given address */
  1364. static msegmentptr segment_holding(mstate m, char* addr) {
  1365.   msegmentptr sp = &m->seg;
  1366.   for (;;) {
  1367.     if (addr >= sp->base && addr < sp->base + sp->size)
  1368.       return sp;
  1369.     if ((sp = sp->next) == 0)
  1370.       return 0;
  1371.   }
  1372. }
  1373.  
  1374. /* Return true if segment contains a segment link */
  1375. static int has_segment_link(mstate m, msegmentptr ss) {
  1376.   msegmentptr sp = &m->seg;
  1377.   for (;;) {
  1378.     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
  1379.       return 1;
  1380.     if ((sp = sp->next) == 0)
  1381.       return 0;
  1382.   }
  1383. }
  1384.  
  1385.  
  1386. /*
  1387.   TOP_FOOT_SIZE is padding at the end of a segment, including space
  1388.   that may be needed to place segment records and fenceposts when new
  1389.   noncontiguous segments are added.
  1390. */
  1391. #define TOP_FOOT_SIZE\
  1392.   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
  1393.  
  1394.  
  1395. /* -------------------------------  Hooks -------------------------------- */
  1396.  
  1397. /*
  1398.   PREACTION should be defined to return 0 on success, and nonzero on
  1399.   failure. If you are not using locking, you can redefine these to do
  1400.   anything you like.
  1401. */
  1402.  
  1403. #define PREACTION(M)  ( MutexLock(&(M)->lock), 0)
  1404. #define POSTACTION(M) { MutexUnlock(&(M)->lock); }
  1405.  
  1406. /* -------------------------- Debugging setup ---------------------------- */
  1407.  
  1408. #if ! DEBUG
  1409.  
  1410. #define check_free_chunk(M,P)
  1411. #define check_inuse_chunk(M,P)
  1412. #define check_malloced_chunk(M,P,N)
  1413. #define check_mmapped_chunk(M,P)
  1414. #define check_malloc_state(M)
  1415. #define check_top_chunk(M,P)
  1416.  
  1417. #else /* DEBUG */
  1418. #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
  1419. #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
  1420. #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
  1421. #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
  1422. #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
  1423. #define check_malloc_state(M)       do_check_malloc_state(M)
  1424.  
  1425. static void   do_check_any_chunk(mstate m, mchunkptr p);
  1426. static void   do_check_top_chunk(mstate m, mchunkptr p);
  1427. static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
  1428. static void   do_check_inuse_chunk(mstate m, mchunkptr p);
  1429. static void   do_check_free_chunk(mstate m, mchunkptr p);
  1430. static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
  1431. static void   do_check_tree(mstate m, tchunkptr t);
  1432. static void   do_check_treebin(mstate m, bindex_t i);
  1433. static void   do_check_smallbin(mstate m, bindex_t i);
  1434. static void   do_check_malloc_state(mstate m);
  1435. static int    bin_find(mstate m, mchunkptr x);
  1436. static size_t traverse_and_check(mstate m);
  1437. #endif /* DEBUG */
  1438.  
  1439. /* ---------------------------- Indexing Bins ---------------------------- */
  1440.  
  1441. #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
  1442. #define small_index(s)      (bindex_t)((s)  >> SMALLBIN_SHIFT)
  1443. #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
  1444. #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
  1445.  
  1446. /* addressing by index. See above about smallbin repositioning */
  1447. #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
  1448. #define treebin_at(M,i)     (&((M)->treebins[i]))
  1449.  
  1450. #define compute_tree_index(S, I)\
  1451. {\
  1452.   unsigned int X = S >> TREEBIN_SHIFT;\
  1453.   if (X == 0)\
  1454.     I = 0;\
  1455.   else if (X > 0xFFFF)\
  1456.     I = NTREEBINS-1;\
  1457.   else {\
  1458.     unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
  1459.     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  1460.   }\
  1461. }
  1462.  
  1463.  
  1464.  
  1465. /* Bit representing maximum resolved size in a treebin at i */
  1466. #define bit_for_tree_index(i) \
  1467.    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
  1468.  
  1469. /* Shift placing maximum resolved bit in a treebin at i as sign bit */
  1470. #define leftshift_for_tree_index(i) \
  1471.    ((i == NTREEBINS-1)? 0 : \
  1472.     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
  1473.  
  1474. /* The size of the smallest chunk held in bin with index i */
  1475. #define minsize_for_tree_index(i) \
  1476.    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
  1477.    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
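/*
  Worked example (32-bit sizes): for S = 768 = 0x300, X = S >>
  TREEBIN_SHIFT = 3, so K = 31 - __builtin_clz(3) = 1 and
  compute_tree_index yields I = (1 << 1) + ((768 >> 8) & 1) = 3.
  Consistently, minsize_for_tree_index(3) = (1 << 9) | (1 << 8) = 768,
  so 768 is the smallest size held in treebin 3.
*/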
  1478.  
  1479.  
  1480. /* ------------------------ Operations on bin maps ----------------------- */
  1481.  
  1482. /* bit corresponding to given index */
  1483. #define idx2bit(i)              ((binmap_t)(1) << (i))
  1484.  
  1485. /* Mark/Clear bits with given index */
  1486. #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
  1487. #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
  1488. #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
  1489.  
  1490. #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
  1491. #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
  1492. #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
  1493.  
  1494. /* isolate the least set bit of a bitmap */
  1495. #define least_bit(x)         ((x) & -(x))
  1496.  
  1497. /* mask with all bits to left of least bit of x on */
  1498. #define left_bits(x)         ((x<<1) | -(x<<1))
  1499.  
  1500. /* mask with all bits to left of or equal to least bit of x on */
  1501. #define same_or_left_bits(x) ((x) | -(x))
  1502.  
  1503. #define compute_bit2idx(X, I)\
  1504. {\
  1505.   unsigned int J;\
  1506.   J = __builtin_ctz(X); \
  1507.   I = (bindex_t)J;\
  1508. }
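/*
  Illustrative sketch (not compiled here): these masks combine to find
  the first non-empty bin at or above a given index without scanning
  bin-by-bin, in the style of the malloc fast paths.  The function
  name is made up.
*/
#if 0
static bindex_t example_first_nonempty_smallbin(mstate m, bindex_t i) {
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(i));
  bindex_t j = NSMALLBINS;              /* sentinel: nothing found */
  if (candidates != 0)
    compute_bit2idx(least_bit(candidates), j);
  return j;
}
#endif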
  1509.  
  1510. /* ----------------------- Runtime Check Support ------------------------- */
  1511.  
  1512. /*
  1513.   For security, the main invariant is that malloc/free/etc never
  1514.   writes to a static address other than malloc_state, unless static
  1515.   malloc_state itself has been corrupted, which cannot occur via
  1516.   malloc (because of these checks). In essence this means that we
  1517.   believe all pointers, sizes, maps etc held in malloc_state, but
  1518.   check all of those linked or offsetted from other embedded data
  1519.   structures.  These checks are interspersed with main code in a way
  1520.   that tends to minimize their run-time cost.
  1521.  
  1522.   When FOOTERS is defined, in addition to range checking, we also
  1523.   verify footer fields of inuse chunks, which can be used to guarantee
  1524.   that the mstate controlling malloc/free is intact.  This is a
  1525.   streamlined version of the approach described by William Robertson
  1526.   et al in "Run-time Detection of Heap-based Overflows" LISA'03
  1527.   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  1528.   of an inuse chunk holds the xor of its mstate and a random seed,
  1529.   that is checked upon calls to free() and realloc().  This is
  1530.   (probabilistically) unguessable from outside the program, but can be
  1531.   computed by any code successfully malloc'ing any chunk, so does not
  1532.   itself provide protection against code that has already broken
  1533.   security through some other means.  Unlike Robertson et al, we
  1534.   always dynamically check addresses of all offset chunks (previous,
  1535.   next, etc). This turns out to be cheaper than relying on hashes.
  1536. */
  1537.  
  1538. #if !INSECURE
  1539. /* Check if address a is at least as high as any from MORECORE or MMAP */
  1540. #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
  1541. /* Check if address of next chunk n is higher than base chunk p */
  1542. #define ok_next(p, n)    ((char*)(p) < (char*)(n))
  1543. /* Check if p has inuse status */
  1544. #define ok_inuse(p)     is_inuse(p)
  1545. /* Check if p has its pinuse bit on */
  1546. #define ok_pinuse(p)     pinuse(p)
  1547.  
  1548. #else /* !INSECURE */
  1549. #define ok_address(M, a) (1)
  1550. #define ok_next(b, n)    (1)
  1551. #define ok_inuse(p)      (1)
  1552. #define ok_pinuse(p)     (1)
  1553. #endif /* !INSECURE */
  1554.  
  1555. #if (FOOTERS && !INSECURE)
  1556. /* Check if (alleged) mstate m has expected magic field */
  1557. #define ok_magic(M)      ((M)->magic == mparams.magic)
  1558. #else  /* (FOOTERS && !INSECURE) */
  1559. #define ok_magic(M)      (1)
  1560. #endif /* (FOOTERS && !INSECURE) */
  1561.  
  1562. /* In gcc, use __builtin_expect to minimize impact of checks */
  1563. #if !INSECURE
  1564. #if defined(__GNUC__) && __GNUC__ >= 3
  1565. #define RTCHECK(e)  __builtin_expect(e, 1)
  1566. #else /* GNUC */
  1567. #define RTCHECK(e)  (e)
  1568. #endif /* GNUC */
  1569. #else /* !INSECURE */
  1570. #define RTCHECK(e)  (1)
  1571. #endif /* !INSECURE */
  1572.  
  1573. /* macros to set up inuse chunks with or without footers */
  1574.  
  1575. #if !FOOTERS
  1576.  
  1577. #define mark_inuse_foot(M,p,s)
  1578.  
  1579. /* Macros for setting head/foot of non-mmapped chunks */
  1580.  
  1581. /* Set cinuse bit and pinuse bit of next chunk */
  1582. #define set_inuse(M,p,s)\
  1583.   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  1584.   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
  1585.  
  1586. /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
  1587. #define set_inuse_and_pinuse(M,p,s)\
  1588.   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  1589.   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
  1590.  
  1591. /* Set size, cinuse and pinuse bit of this chunk */
  1592. #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  1593.   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
  1594.  
  1595. #else /* FOOTERS */
  1596.  
  1597. /* Set foot of inuse chunk to be xor of mstate and seed */
  1598. #define mark_inuse_foot(M,p,s)\
  1599.   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
  1600.  
  1601. #define get_mstate_for(p)\
  1602.   ((mstate)(((mchunkptr)((char*)(p) +\
  1603.     (chunksize(p))))->prev_foot ^ mparams.magic))
  1604.  
  1605. #define set_inuse(M,p,s)\
  1606.   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  1607.   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  1608.   mark_inuse_foot(M,p,s))
  1609.  
  1610. #define set_inuse_and_pinuse(M,p,s)\
  1611.   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  1612.   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  1613.  mark_inuse_foot(M,p,s))
  1614.  
  1615. #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  1616.   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  1617.   mark_inuse_foot(M, p, s))
  1618.  
  1619. #endif /* !FOOTERS */
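/*
  Illustrative sketch (not compiled here): with FOOTERS enabled, free
  and realloc recover the owning mstate from the footer and validate
  it before trusting anything else in the chunk, along these lines
  (function name made up):
*/
#if 0
static void example_checked_free(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    mstate fm = get_mstate_for(p);   /* undo the xor with mparams.magic */
    if (!ok_magic(fm)) {             /* fails if the footer was clobbered */
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    /* ... proceed with the normal free logic using fm ... */
  }
}
#endif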
  1620.  
  1621. /* ---------------------------- setting mparams -------------------------- */
  1622.  
  1623. #if LOCK_AT_FORK
  1624. static void pre_fork(void)         { MutexLock(&(gm)->lock); }
  1625. static void post_fork_parent(void) { MutexUnlock(&(gm)->lock); }
  1626. static void post_fork_child(void)  { MutexInit(&(gm)->lock); }
  1627. #endif /* LOCK_AT_FORK */
  1628.  
  1629. /* Initialize mparams */
  1630. static int init_mparams(void) {
  1631.  
  1632.   ACQUIRE_MALLOC_GLOBAL_LOCK();
  1633.   if (mparams.magic == 0) {
  1634.     size_t magic;
  1635.     size_t psize;
  1636.     size_t gsize;
  1637.  
  1638.     psize = 4096;
  1639.     gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
  1640.  
  1641.     mparams.granularity = gsize;
  1642.     mparams.page_size = psize;
  1643.     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
  1644.     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
  1645.     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
  1646.  
  1647.     /* Set up lock for main malloc area */
  1648.     gm->mflags = mparams.default_mflags;
  1649.     MutexInit(&gm->lock);
  1650.  
  1651.     {
  1652.       magic = (size_t)&magic ^ (size_t)0x55555555U;
  1653.       magic |= (size_t)8U;    /* ensure nonzero */
  1654.       magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
  1655.       /* Until memory modes are commonly available, use volatile-write */
  1656.       (*(volatile size_t *)(&(mparams.magic))) = magic;
  1657.     }
  1658.   }
  1659.  
  1660.   RELEASE_MALLOC_GLOBAL_LOCK();
  1661.   return 1;
  1662. }
  1663.  
  1664.  
  1665. #if DEBUG
  1666. /* ------------------------- Debugging Support --------------------------- */
  1667.  
  1668. /* Check properties of any chunk, whether free, inuse, mmapped etc  */
  1669. static void do_check_any_chunk(mstate m, mchunkptr p) {
  1670.   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  1671.   assert(ok_address(m, p));
  1672. }
  1673.  
  1674. /* Check properties of top chunk */
  1675. static void do_check_top_chunk(mstate m, mchunkptr p) {
  1676.   msegmentptr sp = segment_holding(m, (char*)p);
  1677.   size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
  1678.   assert(sp != 0);
  1679.   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  1680.   assert(ok_address(m, p));
  1681.   assert(sz == m->topsize);
  1682.   assert(sz > 0);
  1683.   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  1684.   assert(pinuse(p));
  1685.   assert(!pinuse(chunk_plus_offset(p, sz)));
  1686. }
  1687.  
  1688. /* Check properties of (inuse) mmapped chunks */
  1689. static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  1690.   size_t  sz = chunksize(p);
  1691.   size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
  1692.   assert(is_mmapped(p));
  1693.   assert(use_mmap(m));
  1694.   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  1695.   assert(ok_address(m, p));
  1696.   assert(!is_small(sz));
  1697.   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  1698.   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  1699.   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
  1700. }
  1701.  
  1702. /* Check properties of inuse chunks */
  1703. static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  1704.   do_check_any_chunk(m, p);
  1705.   assert(is_inuse(p));
  1706.   assert(next_pinuse(p));
  1707.   /* If not pinuse and not mmapped, previous chunk has OK offset */
  1708.   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  1709.   if (is_mmapped(p))
  1710.     do_check_mmapped_chunk(m, p);
  1711. }
  1712.  
  1713. /* Check properties of free chunks */
  1714. static void do_check_free_chunk(mstate m, mchunkptr p) {
  1715.   size_t sz = chunksize(p);
  1716.   mchunkptr next = chunk_plus_offset(p, sz);
  1717.   do_check_any_chunk(m, p);
  1718.   assert(!is_inuse(p));
  1719.   assert(!next_pinuse(p));
  1720.   assert (!is_mmapped(p));
  1721.   if (p != m->dv && p != m->top) {
  1722.     if (sz >= MIN_CHUNK_SIZE) {
  1723.       assert((sz & CHUNK_ALIGN_MASK) == 0);
  1724.       assert(is_aligned(chunk2mem(p)));
  1725.       assert(next->prev_foot == sz);
  1726.       assert(pinuse(p));
  1727.       assert (next == m->top || is_inuse(next));
  1728.       assert(p->fd->bk == p);
  1729.       assert(p->bk->fd == p);
  1730.     }
  1731.     else  /* markers are always of size SIZE_T_SIZE */
  1732.       assert(sz == SIZE_T_SIZE);
  1733.   }
  1734. }
  1735.  
  1736. /* Check properties of malloced chunks at the point they are malloced */
  1737. static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  1738.   if (mem != 0) {
  1739.     mchunkptr p = mem2chunk(mem);
  1740.     size_t sz = p->head & ~INUSE_BITS;
  1741.     do_check_inuse_chunk(m, p);
  1742.     assert((sz & CHUNK_ALIGN_MASK) == 0);
  1743.     assert(sz >= MIN_CHUNK_SIZE);
  1744.     assert(sz >= s);
  1745.     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
  1746.     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  1747.   }
  1748. }
  1749.  
  1750. /* Check a tree and its subtrees.  */
  1751. static void do_check_tree(mstate m, tchunkptr t) {
  1752.   tchunkptr head = 0;
  1753.   tchunkptr u = t;
  1754.   bindex_t tindex = t->index;
  1755.   size_t tsize = chunksize(t);
  1756.   bindex_t idx;
  1757.   compute_tree_index(tsize, idx);
  1758.   assert(tindex == idx);
  1759.   assert(tsize >= MIN_LARGE_SIZE);
  1760.   assert(tsize >= minsize_for_tree_index(idx));
  1761.   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
  1762.  
  1763.   do { /* traverse through chain of same-sized nodes */
  1764.     do_check_any_chunk(m, ((mchunkptr)u));
  1765.     assert(u->index == tindex);
  1766.     assert(chunksize(u) == tsize);
  1767.     assert(!is_inuse(u));
  1768.     assert(!next_pinuse(u));
  1769.     assert(u->fd->bk == u);
  1770.     assert(u->bk->fd == u);
  1771.     if (u->parent == 0) {
  1772.       assert(u->child[0] == 0);
  1773.       assert(u->child[1] == 0);
  1774.     }
  1775.     else {
  1776.       assert(head == 0); /* only one node on chain has parent */
  1777.       head = u;
  1778.       assert(u->parent != u);
  1779.       assert (u->parent->child[0] == u ||
  1780.               u->parent->child[1] == u ||
  1781.               *((tbinptr*)(u->parent)) == u);
  1782.       if (u->child[0] != 0) {
  1783.         assert(u->child[0]->parent == u);
  1784.         assert(u->child[0] != u);
  1785.         do_check_tree(m, u->child[0]);
  1786.       }
  1787.       if (u->child[1] != 0) {
  1788.         assert(u->child[1]->parent == u);
  1789.         assert(u->child[1] != u);
  1790.         do_check_tree(m, u->child[1]);
  1791.       }
  1792.       if (u->child[0] != 0 && u->child[1] != 0) {
  1793.         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
  1794.       }
  1795.     }
  1796.     u = u->fd;
  1797.   } while (u != t);
  1798.   assert(head != 0);
  1799. }
  1800.  
  1801. /*  Check all the chunks in a treebin.  */
  1802. static void do_check_treebin(mstate m, bindex_t i) {
  1803.   tbinptr* tb = treebin_at(m, i);
  1804.   tchunkptr t = *tb;
  1805.   int empty = (m->treemap & (1U << i)) == 0;
  1806.   if (t == 0)
  1807.     assert(empty);
  1808.   if (!empty)
  1809.     do_check_tree(m, t);
  1810. }
  1811.  
  1812. /*  Check all the chunks in a smallbin.  */
  1813. static void do_check_smallbin(mstate m, bindex_t i) {
  1814.   sbinptr b = smallbin_at(m, i);
  1815.   mchunkptr p = b->bk;
  1816.   unsigned int empty = (m->smallmap & (1U << i)) == 0;
  1817.   if (p == b)
  1818.     assert(empty);
  1819.   if (!empty) {
  1820.     for (; p != b; p = p->bk) {
  1821.       size_t size = chunksize(p);
  1822.       mchunkptr q;
  1823.       /* each chunk claims to be free */
  1824.       do_check_free_chunk(m, p);
  1825.       /* chunk belongs in bin */
  1826.       assert(small_index(size) == i);
  1827.       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
  1828.       /* chunk is followed by an inuse chunk */
  1829.       q = next_chunk(p);
  1830.       if (q->head != FENCEPOST_HEAD)
  1831.         do_check_inuse_chunk(m, q);
  1832.     }
  1833.   }
  1834. }
  1835.  
  1836. /* Find x in a bin. Used in other check functions. */
  1837. static int bin_find(mstate m, mchunkptr x) {
  1838.   size_t size = chunksize(x);
  1839.   if (is_small(size)) {
  1840.     bindex_t sidx = small_index(size);
  1841.     sbinptr b = smallbin_at(m, sidx);
  1842.     if (smallmap_is_marked(m, sidx)) {
  1843.       mchunkptr p = b;
  1844.       do {
  1845.         if (p == x)
  1846.           return 1;
  1847.       } while ((p = p->fd) != b);
  1848.     }
  1849.   }
  1850.   else {
  1851.     bindex_t tidx;
  1852.     compute_tree_index(size, tidx);
  1853.     if (treemap_is_marked(m, tidx)) {
  1854.       tchunkptr t = *treebin_at(m, tidx);
  1855.       size_t sizebits = size << leftshift_for_tree_index(tidx);
  1856.       while (t != 0 && chunksize(t) != size) {
  1857.         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
  1858.         sizebits <<= 1;
  1859.       }
  1860.       if (t != 0) {
  1861.         tchunkptr u = t;
  1862.         do {
  1863.           if (u == (tchunkptr)x)
  1864.             return 1;
  1865.         } while ((u = u->fd) != t);
  1866.       }
  1867.     }
  1868.   }
  1869.   return 0;
  1870. }
  1871.  
  1872. /* Traverse each chunk and check it; return total */
  1873. static size_t traverse_and_check(mstate m) {
  1874.   size_t sum = 0;
  1875.   if (is_initialized(m)) {
  1876.     msegmentptr s = &m->seg;
  1877.     sum += m->topsize + TOP_FOOT_SIZE;
  1878.     while (s != 0) {
  1879.       mchunkptr q = align_as_chunk(s->base);
  1880.       mchunkptr lastq = 0;
  1881.       assert(pinuse(q));
  1882.       while (segment_holds(s, q) &&
  1883.              q != m->top && q->head != FENCEPOST_HEAD) {
  1884.         sum += chunksize(q);
  1885.         if (is_inuse(q)) {
  1886.           assert(!bin_find(m, q));
  1887.           do_check_inuse_chunk(m, q);
  1888.         }
  1889.         else {
  1890.           assert(q == m->dv || bin_find(m, q));
  1891.           assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
  1892.           do_check_free_chunk(m, q);
  1893.         }
  1894.         lastq = q;
  1895.         q = next_chunk(q);
  1896.       }
  1897.       s = s->next;
  1898.     }
  1899.   }
  1900.   return sum;
  1901. }
  1902.  
  1903.  
  1904. /* Check all properties of malloc_state. */
  1905. static void do_check_malloc_state(mstate m) {
  1906.   bindex_t i;
  1907.   size_t total;
  1908.   /* check bins */
  1909.   for (i = 0; i < NSMALLBINS; ++i)
  1910.     do_check_smallbin(m, i);
  1911.   for (i = 0; i < NTREEBINS; ++i)
  1912.     do_check_treebin(m, i);
  1913.  
  1914.   if (m->dvsize != 0) { /* check dv chunk */
  1915.     do_check_any_chunk(m, m->dv);
  1916.     assert(m->dvsize == chunksize(m->dv));
  1917.     assert(m->dvsize >= MIN_CHUNK_SIZE);
  1918.     assert(bin_find(m, m->dv) == 0);
  1919.   }
  1920.  
  1921.   if (m->top != 0) {   /* check top chunk */
  1922.     do_check_top_chunk(m, m->top);
  1923.     /*assert(m->topsize == chunksize(m->top)); redundant */
  1924.     assert(m->topsize > 0);
  1925.     assert(bin_find(m, m->top) == 0);
  1926.   }
  1927.  
  1928.   total = traverse_and_check(m);
  1929.   assert(total <= m->footprint);
  1930.   assert(m->footprint <= m->max_footprint);
  1931. }
  1932. #endif /* DEBUG */
  1933.  
  1934. #define CORRUPTION_ERROR_ACTION(m)                             \
  1935.         do {                                                   \
  1936.             printf("%s malloc heap corrupted\n",__FUNCTION__); \
  1937.             while(1)                                           \
  1938.             {                                                  \
  1939.                 delay(100);                                    \
  1940.             }                                                  \
  1941.         }while(0)
  1942.  
  1943.  
  1944. #define USAGE_ERROR_ACTION(m, p)                               \
  1945.         do {                                                   \
  1946.             printf("%s malloc usage error %p\n",__FUNCTION__,(void*)(p)); \
  1947.             while(1)                                           \
  1948.             {                                                  \
  1949.                 delay(100);                                    \
  1950.             }                                                  \
  1951.         }while(0)
  1952.  
  1953. /* ----------------------- Operations on smallbins ----------------------- */
  1954.  
  1955. /*
  1956.   Various forms of linking and unlinking are defined as macros.  Even
  1957.   the ones for trees, which are very long but have very short typical
  1958.   paths.  This is ugly but reduces reliance on inlining support of
  1959.   compilers.
  1960. */
  1961.  
  1962. /* Link a free chunk into a smallbin  */
  1963. #define insert_small_chunk(M, P, S) {\
  1964.   bindex_t I  = small_index(S);\
  1965.   mchunkptr B = smallbin_at(M, I);\
  1966.   mchunkptr F = B;\
  1967.   assert(S >= MIN_CHUNK_SIZE);\
  1968.   if (!smallmap_is_marked(M, I))\
  1969.     mark_smallmap(M, I);\
  1970.   else if (RTCHECK(ok_address(M, B->fd)))\
  1971.     F = B->fd;\
  1972.   else {\
  1973.     CORRUPTION_ERROR_ACTION(M);\
  1974.   }\
  1975.   B->fd = P;\
  1976.   F->bk = P;\
  1977.   P->fd = F;\
  1978.   P->bk = B;\
  1979. }
  2007.  
  2008. /* Unlink a chunk from a smallbin  */
  2009. #define unlink_small_chunk(M, P, S) {\
  2010.   mchunkptr F = P->fd;\
  2011.   mchunkptr B = P->bk;\
  2012.   bindex_t I = small_index(S);\
  2013.   assert(P != B);\
  2014.   assert(P != F);\
  2015.   assert(chunksize(P) == small_index2size(I));\
  2016.   if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) {\
  2017.     if (B == F) {\
  2018.       clear_smallmap(M, I);\
  2019.     }\
  2020.     else if (RTCHECK(B == smallbin_at(M,I) ||\
  2021.                      (ok_address(M, B) && B->fd == P))) {\
  2022.       F->bk = B;\
  2023.       B->fd = F;\
  2024.     }\
  2025.     else {\
  2026.       CORRUPTION_ERROR_ACTION(M);\
  2027.     }\
  2028.   }\
  2029.   else {\
  2030.     CORRUPTION_ERROR_ACTION(M);\
  2031.   }\
  2032. }
  2033.  
  2034. /* Unlink the first chunk from a smallbin */
  2035. #define unlink_first_small_chunk(M, B, P, I) {\
  2036.   mchunkptr F = P->fd;\
  2037.   assert(P != B);\
  2038.   assert(P != F);\
  2039.   assert(chunksize(P) == small_index2size(I));\
  2040.   if (B == F) {\
  2041.     clear_smallmap(M, I);\
  2042.   }\
  2043.   else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
  2044.     F->bk = B;\
  2045.     B->fd = F;\
  2046.   }\
  2047.   else {\
  2048.     CORRUPTION_ERROR_ACTION(M);\
  2049.   }\
  2050. }
  2051.  
  2052. /* Replace dv node, binning the old one */
  2053. /* Used only when dvsize known to be small */
  2054. #define replace_dv(M, P, S) {\
  2055.   size_t DVS = M->dvsize;\
  2056.   assert(is_small(DVS));\
  2057.   if (DVS != 0) {\
  2058.     mchunkptr DV = M->dv;\
  2059.     insert_small_chunk(M, DV, DVS);\
  2060.   }\
  2061.   M->dvsize = S;\
  2062.   M->dv = P;\
  2063. }
  2064.  
  2065. /* ------------------------- Operations on trees ------------------------- */
  2066.  
  2067. /* Insert chunk into tree */
  2068. #define insert_large_chunk(M, X, S) {\
  2069.   tbinptr* H;\
  2070.   bindex_t I;\
  2071.   compute_tree_index(S, I);\
  2072.   H = treebin_at(M, I);\
  2073.   X->index = I;\
  2074.   X->child[0] = X->child[1] = 0;\
  2075.   if (!treemap_is_marked(M, I)) {\
  2076.     mark_treemap(M, I);\
  2077.     *H = X;\
  2078.     X->parent = (tchunkptr)H;\
  2079.     X->fd = X->bk = X;\
  2080.   }\
  2081.   else {\
  2082.     tchunkptr T = *H;\
  2083.     size_t K = S << leftshift_for_tree_index(I);\
  2084.     for (;;) {\
  2085.       if (chunksize(T) != S) {\
  2086.         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
  2087.         K <<= 1;\
  2088.         if (*C != 0)\
  2089.           T = *C;\
  2090.         else if (RTCHECK(ok_address(M, C))) {\
  2091.           *C = X;\
  2092.           X->parent = T;\
  2093.           X->fd = X->bk = X;\
  2094.           break;\
  2095.         }\
  2096.         else {\
  2097.           CORRUPTION_ERROR_ACTION(M);\
  2098.           break;\
  2099.         }\
  2100.       }\
  2101.       else {\
  2102.         tchunkptr F = T->fd;\
  2103.         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
  2104.           T->fd = F->bk = X;\
  2105.           X->fd = F;\
  2106.           X->bk = T;\
  2107.           X->parent = 0;\
  2108.           break;\
  2109.         }\
  2110.         else {\
  2111.           CORRUPTION_ERROR_ACTION(M);\
  2112.           break;\
  2113.         }\
  2114.       }\
  2115.     }\
  2116.   }\
  2117. }
  2118.  
  2119. /*
  2120.   Unlink steps:
  2121.  
  2122.   1. If x is a chained node, unlink it from its same-sized fd/bk links
  2123.      and choose its bk node as its replacement.
  2124.   2. If x was the last node of its size, but not a leaf node, it must
  2125.      be replaced with a leaf node (not merely one with an open left or
  2126.      right), to make sure that lefts and rights of descendants
  2127.      correspond properly to bit masks.  We use the rightmost descendent
  2128.      of x.  We could use any other leaf, but this is easy to locate and
  2129.      tends to counteract removal of leftmosts elsewhere, and so keeps
  2130.      paths shorter than minimally guaranteed.  This doesn't loop much
  2131.      because on average a node in a tree is near the bottom.
  2132.   3. If x is the base of a chain (i.e., has parent links) relink
  2133.      x's parent and children to x's replacement (or null if none).
  2134. */
  2135.  
  2136. #define unlink_large_chunk(M, X) {\
  2137.   tchunkptr XP = X->parent;\
  2138.   tchunkptr R;\
  2139.   if (X->bk != X) {\
  2140.     tchunkptr F = X->fd;\
  2141.     R = X->bk;\
  2142.     if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
  2143.       F->bk = R;\
  2144.       R->fd = F;\
  2145.     }\
  2146.     else {\
  2147.       CORRUPTION_ERROR_ACTION(M);\
  2148.     }\
  2149.   }\
  2150.   else {\
  2151.     tchunkptr* RP;\
  2152.     if (((R = *(RP = &(X->child[1]))) != 0) ||\
  2153.         ((R = *(RP = &(X->child[0]))) != 0)) {\
  2154.       tchunkptr* CP;\
  2155.       while ((*(CP = &(R->child[1])) != 0) ||\
  2156.              (*(CP = &(R->child[0])) != 0)) {\
  2157.         R = *(RP = CP);\
  2158.       }\
  2159.       if (RTCHECK(ok_address(M, RP)))\
  2160.         *RP = 0;\
  2161.       else {\
  2162.         CORRUPTION_ERROR_ACTION(M);\
  2163.       }\
  2164.     }\
  2165.   }\
  2166.   if (XP != 0) {\
  2167.     tbinptr* H = treebin_at(M, X->index);\
  2168.     if (X == *H) {\
  2169.       if ((*H = R) == 0) \
  2170.         clear_treemap(M, X->index);\
  2171.     }\
  2172.     else if (RTCHECK(ok_address(M, XP))) {\
  2173.       if (XP->child[0] == X) \
  2174.         XP->child[0] = R;\
  2175.       else \
  2176.         XP->child[1] = R;\
  2177.     }\
  2178.     else\
  2179.       CORRUPTION_ERROR_ACTION(M);\
  2180.     if (R != 0) {\
  2181.       if (RTCHECK(ok_address(M, R))) {\
  2182.         tchunkptr C0, C1;\
  2183.         R->parent = XP;\
  2184.         if ((C0 = X->child[0]) != 0) {\
  2185.           if (RTCHECK(ok_address(M, C0))) {\
  2186.             R->child[0] = C0;\
  2187.             C0->parent = R;\
  2188.           }\
  2189.           else\
  2190.             CORRUPTION_ERROR_ACTION(M);\
  2191.         }\
  2192.         if ((C1 = X->child[1]) != 0) {\
  2193.           if (RTCHECK(ok_address(M, C1))) {\
  2194.             R->child[1] = C1;\
  2195.             C1->parent = R;\
  2196.           }\
  2197.           else\
  2198.             CORRUPTION_ERROR_ACTION(M);\
  2199.         }\
  2200.       }\
  2201.       else\
  2202.         CORRUPTION_ERROR_ACTION(M);\
  2203.     }\
  2204.   }\
  2205. }
  2206.  
  2207. /* Relays to large vs small bin operations */
  2208.  
  2209. #define insert_chunk(M, P, S)\
  2210.   if (is_small(S)) insert_small_chunk(M, P, S)\
  2211.   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
  2212.  
  2213. #define unlink_chunk(M, P, S)\
  2214.   if (is_small(S)) unlink_small_chunk(M, P, S)\
  2215.   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
  2216.  
  2217.  
  2218.  
  2219.  
  2220.  
  2221. static inline void* os_mmap(size_t size)
  2222. {
  2223.   void* ptr = KernelAlloc(size);
  2224.   printf("%s %p %u bytes\n",__FUNCTION__, ptr, (unsigned)size);
  2225.   return (ptr != 0)? ptr: MFAIL;
  2226. }
  2227.  
  2228. static inline int os_munmap(void* ptr, size_t size)
  2229. {
  2230.   return (KernelFree(ptr) != 0) ? 0 : -1;
  2231. }
  2232.  
  2233.  
  2234.  
  2235. #define MMAP_DEFAULT(s)        os_mmap(s)
  2236. #define MUNMAP_DEFAULT(a, s)   os_munmap((a), (s))
  2237. #define DIRECT_MMAP_DEFAULT(s) os_mmap(s)
  2238.  
  2239.  
  2240. /* -----------------------  Direct-mmapping chunks ----------------------- */
  2241.  
  2242. /*
  2243.   Directly mmapped chunks are set up with an offset to the start of
  2244.   the mmapped region stored in the prev_foot field of the chunk. This
  2245.   allows reconstruction of the required argument to MUNMAP when freed,
  2246.   and also allows adjustment of the returned chunk to meet alignment
  2247.   requirements (especially in memalign).
  2248. */
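/*
  Illustrative sketch (not compiled here): when such a chunk is freed,
  the original mapping is reconstructed from prev_foot, along the
  lines of the mmapped-chunk release path in free():
*/
#if 0
  size_t prevsize = p->prev_foot;            /* offset into the mapping  */
  char*  base     = (char*)p - prevsize;     /* start of the mmap region */
  size_t len      = chunksize(p) + prevsize + MMAP_FOOT_PAD;
  if (CALL_MUNMAP(base, len) == 0)
    m->footprint -= len;
#endif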
  2249.  
  2250. /* Malloc using mmap */
  2251. static void* mmap_alloc(mstate m, size_t nb) {
  2252.   size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  2253.   if (m->footprint_limit != 0) {
  2254.     size_t fp = m->footprint + mmsize;
  2255.     if (fp <= m->footprint || fp > m->footprint_limit)
  2256.       return 0;
  2257.   }
  2258.   if (mmsize > nb) {     /* Check for wrap around 0 */
  2259.     char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
  2260.     if (mm != CMFAIL) {
  2261.       size_t offset = align_offset(chunk2mem(mm));
  2262.       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
  2263.       mchunkptr p = (mchunkptr)(mm + offset);
  2264.       p->prev_foot = offset;
  2265.       p->head = psize;
  2266.       mark_inuse_foot(m, p, psize);
  2267.       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
  2268.       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
  2269.
  2270.       if (m->least_addr == 0 || mm < m->least_addr)
  2271.         m->least_addr = mm;
  2272.       if ((m->footprint += mmsize) > m->max_footprint)
  2273.         m->max_footprint = m->footprint;
  2274.       assert(is_aligned(chunk2mem(p)));
  2275.       check_mmapped_chunk(m, p);
  2276.       return chunk2mem(p);
  2277.     }
  2278.   }
  2279.   return 0;
  2280. }
  2281.  
  2282. /* Realloc using mmap */
  2283. static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
  2284.   size_t oldsize = chunksize(oldp);
  2285.   (void)flags; /* placate people compiling -Wunused */
  2286.   if (is_small(nb)) /* Can't shrink mmap regions below small size */
  2287.     return 0;
  2288.   /* Keep old chunk if big enough but not too big */
  2289.   if (oldsize >= nb + SIZE_T_SIZE &&
  2290.       (oldsize - nb) <= (mparams.granularity << 1))
  2291.     return oldp;
  2292.   else {
  2293.     size_t offset = oldp->prev_foot;
  2294.     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
  2295.     size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  2296.     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
  2297.                                   oldmmsize, newmmsize, flags);
  2298.     if (cp != CMFAIL) {
  2299.       mchunkptr newp = (mchunkptr)(cp + offset);
  2300.       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
  2301.       newp->head = psize;
  2302.       mark_inuse_foot(m, newp, psize);
  2303.       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
  2304.       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
  2305.
  2306.       if (cp < m->least_addr)
  2307.         m->least_addr = cp;
  2308.       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
  2309.         m->max_footprint = m->footprint;
  2310.       check_mmapped_chunk(m, newp);
  2311.       return newp;
  2312.     }
  2313.   }
  2314.   return 0;
  2315. }
  2316.  
  2317.  
  2318. /* -------------------------- mspace management -------------------------- */
  2319.  
  2320. /* Initialize top chunk and its size */
  2321. static void init_top(mstate m, mchunkptr p, size_t psize) {
  2322.   /* Ensure alignment */
  2323.   size_t offset = align_offset(chunk2mem(p));
  2324.   p = (mchunkptr)((char*)p + offset);
  2325.   psize -= offset;
  2326.
  2327.   m->top = p;
  2328.   m->topsize = psize;
  2329.   p->head = psize | PINUSE_BIT;
  2330.   /* set size of fake trailing chunk holding overhead space only once */
  2331.   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  2332.   m->trim_check = mparams.trim_threshold; /* reset on each update */
  2333. }
  2334.  
  2335. /* Initialize bins for a new mstate that is otherwise zeroed out */
  2336. static void init_bins(mstate m) {
  2337.   /* Establish circular links for smallbins */
  2338.   bindex_t i;
  2339.   for (i = 0; i < NSMALLBINS; ++i) {
  2340.     sbinptr bin = smallbin_at(m,i);
  2341.     bin->fd = bin->bk = bin;
  2342.   }
  2343. }

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  mchunkptr q = chunk_plus_offset(p, nb);
  size_t qsize = psize - nb;
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);

  assert((char*)oldfirst > (char*)q);
  assert(pinuse(oldfirst));
  assert(qsize >= MIN_CHUNK_SIZE);

  /* consolidate remainder with first chunk of old base */
  if (oldfirst == m->top) {
    size_t tsize = m->topsize += qsize;
    m->top = q;
    q->head = tsize | PINUSE_BIT;
    check_top_chunk(m, q);
  }
  else if (oldfirst == m->dv) {
    size_t dsize = m->dvsize += qsize;
    m->dv = q;
    set_size_and_pinuse_of_free_chunk(q, dsize);
  }
  else {
    if (!is_inuse(oldfirst)) {
      size_t nsize = chunksize(oldfirst);
      unlink_chunk(m, oldfirst, nsize);
      oldfirst = chunk_plus_offset(oldfirst, nsize);
      qsize += nsize;
    }
    set_free_with_pinuse(q, qsize, oldfirst);
    insert_chunk(m, q, qsize);
    check_free_chunk(m, q);
  }

  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}
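
/*
  Illustrative memory picture for prepend_alloc (sizes not to scale):

    newbase                                      oldbase
    |--- p: returned chunk (nb) ---|-- q: remainder (qsize) --| oldfirst ...

  The remainder q is absorbed into top or dv when oldfirst is one of them;
  otherwise q is consolidated with oldfirst if that chunk is free, and the
  result is binned as an ordinary free chunk.
*/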

/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  size_t ssize = pad_request(sizeof(struct malloc_segment));
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
  size_t offset = align_offset(chunk2mem(rawsp));
  char* asp = rawsp + offset;
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
  mchunkptr sp = (mchunkptr)csp;
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
  mchunkptr p = tnext;
  int nfences = 0;

  /* reset top to new space */
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);

  /* Set up segment record */
  assert(is_aligned(ss));
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
  *ss = m->seg; /* Push current record */
  m->seg.base = tbase;
  m->seg.size = tsize;
  m->seg.sflags = mmapped;
  m->seg.next = ss;

  /* Insert trailing fenceposts */
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    p->head = FENCEPOST_HEAD;
    ++nfences;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  assert(nfences >= 2);

  /* Insert the rest of old top into a bin as an ordinary free chunk */
  if (csp != old_top) {
    mchunkptr q = (mchunkptr)old_top;
    size_t psize = csp - old_top;
    mchunkptr tn = chunk_plus_offset(q, psize);
    set_free_with_pinuse(q, psize, tn);
    insert_chunk(m, q, psize);
  }

  check_top_chunk(m, m->top);
}
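
/* Illustrative, hypothetical helper (not part of the allocator): a
   read-only mirror of the fencepost loop above.  It walks the same
   size-words and counts FENCEPOST_HEAD markers; add_segment guarantees at
   least two, so any scan of the old segment tail terminates. */
#if 0
static int count_fenceposts(mchunkptr first, char* old_end) {
  int n = 0;
  mchunkptr p = first;
  for (;;) {
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
    if (p->head != FENCEPOST_HEAD)
      break;
    ++n;
    if ((char*)(&(nextp->head)) < old_end)
      p = nextp;
    else
      break;
  }
  return n;
}
#endif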

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;
  size_t asize; /* allocation size */

  ensure_initialization();

  printf("%s %u bytes\n", __FUNCTION__, (unsigned)nb);

  /* Directly map large chunks, but only if already initialized */
  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  asize = granularity_align(nb + SYS_ALLOC_PADDING);
  if (asize <= nb)
    return 0; /* wraparound */
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + asize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }

  /*
    Try getting memory in any of three ways (in most-preferred to
    least-preferred order):
    1. A call to MORECORE that can normally contiguously extend memory.
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
       main space is mmapped or a previous contiguous call failed)
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
       Note that under the default settings, if MORECORE is unable to
       fulfill a request, and HAVE_MMAP is true, then mmap is
       used as a noncontiguous system allocator. This is a useful backup
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)

   In all cases, we need to request enough bytes from system to ensure
   we can malloc nb bytes upon success, so pad with enough space for
   top_foot, plus alignment-pad to make sure we don't lose bytes if
   not on boundary, and round this up to a granularity unit.
  */

#if 0
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    size_t ssize = asize; /* sbrk call size */
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    ACQUIRE_MALLOC_GLOBAL_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        size_t fp;
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          ssize += (page_align((size_t)base) - (size_t)base);
        fp = m->footprint + ssize; /* recheck limits */
        if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
            (m->footprint_limit == 0 ||
             (fp > m->footprint && fp <= m->footprint_limit)) &&
            (br = (char*)(CALL_MORECORE(ssize))) == base) {
          tbase = base;
          tsize = ssize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
      /* Use mem here only if it did continuously extend old space */
      if (ssize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
        tbase = br;
        tsize = ssize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
        if (ssize < HALF_MAX_SIZE_T &&
            ssize < nb + SYS_ALLOC_PADDING) {
          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              ssize += esize;
            else {            /* Can't use; try to release */
              (void) CALL_MORECORE(-ssize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {    /* Use the space we did get */
        tbase = br;
        tsize = ssize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MALLOC_GLOBAL_LOCK();
  }
#endif

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    char* mp = (char*)(CALL_MMAP(asize));
    if (mp != CMFAIL) {
      tbase = mp;
      tsize = asize;
      mmap_flag = USE_MMAP_BIT;
    }
  }

#if 0
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MALLOC_GLOBAL_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MALLOC_GLOBAL_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }
#endif

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      if (m->least_addr == 0 || tbase < m->least_addr)
        m->least_addr = tbase;
      m->seg.base = tbase;
      m->seg.size = tsize;
      m->seg.sflags = mmap_flag;
      m->magic = mparams.magic;
      m->release_checks = MAX_RELEASE_CHECK_RATE;
      init_bins(m);

      if (is_global(m))
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
      else {
        /* Offset top by embedded malloc_state */
        mchunkptr mn = next_chunk(mem2chunk(m));
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
      }
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      /* Only consider most recent segment if traversal suppressed */
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
      check_top_chunk(m, m->top);
      check_malloced_chunk(m, chunk2mem(p), nb);
      return chunk2mem(p);
    }
  }

//  MALLOC_FAILURE_ACTION;
  return 0;
}
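
/* A minimal sketch (not part of the allocator) of the padding arithmetic
   at the top of sys_alloc, with assumed values: SYS_ALLOC_PADDING of 40
   bytes and a 64K granularity.  A request of nb = 100000 is padded to
   100040 and rounded up to asize = 131072 (two granularity units). */
#if 0
static size_t asize_sketch(size_t nb) {
  size_t granularity = 65536;  /* assumed mparams.granularity */
  size_t padding = 40;         /* assumed SYS_ALLOC_PADDING */
  return (nb + padding + (granularity - 1)) & ~(granularity - 1);
}
#endif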

/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  int nsegs = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    ++nsegs;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* unlink obsoleted record */
          sp = pred;
          sp->next = next;
        }
        else { /* back out if cannot unmap */
          insert_large_chunk(m, tp, psize);
        }
      }
    }
    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
      break;
    pred = sp;
    sp = next;
  }
  /* Reset check counter */
  m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
                       (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
  return released;
}
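
/* Note (illustrative): a segment is releasable only when it came from mmap
   (not a donated/extern segment) and its first aligned chunk is a single
   free chunk spanning everything up to the segment's trailing overhead --
   i.e. no user-visible chunk pins it. */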

static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  ensure_initialization();
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            (void)newsize; /* placate people compiling -Wunused-variable */
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MALLOC_GLOBAL_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MALLOC_GLOBAL_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        m->footprint -= released;
        init_top(m, m->top, m->topsize - released);
        check_top_chunk(m, m->top);
      }
    }

    /* Unmap any unused mmapped segments */
    if (HAVE_MMAP)
      released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
    if (released == 0 && m->topsize > m->trim_check)
      m->trim_check = MAX_SIZE_T;
  }

  return (released != 0)? 1 : 0;
}
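
/* A minimal sketch (not part of the allocator) of the 'extra' computation
   in sys_trim, with an assumed 64K granularity: for topsize - pad equal to
   three units (196608 bytes), ((196608 + 65535) / 65536 - 1) * 65536 =
   131072, so two units are released and one full unit stays above pad. */
#if 0
static size_t trim_extra_sketch(size_t topsize, size_t pad) {
  size_t unit = 65536; /* assumed mparams.granularity */
  return ((topsize - pad + (unit - 1)) / unit - 1) * unit;
}
#endif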

/* Consolidate and bin a chunk. Differs from exported versions
   of free mainly in that the chunk need not be marked as inuse.
*/
static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
  mchunkptr next = chunk_plus_offset(p, psize);
  if (!pinuse(p)) {
    mchunkptr prev;
    size_t prevsize = p->prev_foot;
    if (is_mmapped(p)) {
      psize += prevsize + MMAP_FOOT_PAD;
      if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
        m->footprint -= psize;
      return;
    }
    prev = chunk_minus_offset(p, prevsize);
    psize += prevsize;
    p = prev;
    if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
      if (p != m->dv) {
        unlink_chunk(m, p, prevsize);
      }
      else if ((next->head & INUSE_BITS) == INUSE_BITS) {
        m->dvsize = psize;
        set_free_with_pinuse(p, psize, next);
        return;
      }
    }
    else {
      CORRUPTION_ERROR_ACTION(m);
      return;
    }
  }
  if (RTCHECK(ok_address(m, next))) {
    if (!cinuse(next)) {  /* consolidate forward */
      if (next == m->top) {
        size_t tsize = m->topsize += psize;
        m->top = p;
        p->head = tsize | PINUSE_BIT;
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        return;
      }
      else if (next == m->dv) {
        size_t dsize = m->dvsize += psize;
        m->dv = p;
        set_size_and_pinuse_of_free_chunk(p, dsize);
        return;
      }
      else {
        size_t nsize = chunksize(next);
        psize += nsize;
        unlink_chunk(m, next, nsize);
        set_size_and_pinuse_of_free_chunk(p, psize);
        if (p == m->dv) {
          m->dvsize = psize;
          return;
        }
      }
    }
    else {
      set_free_with_pinuse(p, psize, next);
    }
    insert_chunk(m, p, psize);
  }
  else {
    CORRUPTION_ERROR_ACTION(m);
  }
}
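
/* Note (illustrative): dispose_chunk mirrors the consolidation logic of
   free() below but never trims or releases segments; it is meant for
   internal callers that hand back a chunk which may not currently be
   marked inuse. */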

/* ---------------------------- malloc --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);
  if ((t = *treebin_at(m, idx)) != 0) {
    /* Traverse tree for this bin looking for node with size == nb */
    size_t sizebits = nb << leftshift_for_tree_index(idx);
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
    for (;;) {
      tchunkptr rt;
      size_t trem = chunksize(t) - nb;
      if (trem < rsize) {
        v = t;
        if ((rsize = trem) == 0)
          break;
      }
      rt = t->child[1];
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
      if (rt != 0 && rt != t)
        rst = rt;
      if (t == 0) {
        t = rst; /* set t to least subtree holding sizes > nb */
        break;
      }
      sizebits <<= 1;
    }
  }
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
      compute_bit2idx(leastbit, i);
      t = *treebin_at(m, i);
    }
  }

  while (t != 0) { /* find smallest of tree or subtree */
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
    t = leftmost_child(t);
  }

  /*  If dv is a better fit, return 0 so malloc will use it */
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
    if (RTCHECK(ok_address(m, v))) { /* split */
      mchunkptr r = chunk_plus_offset(v, nb);
      assert(chunksize(v) == rsize + nb);
      if (RTCHECK(ok_next(v, r))) {
        unlink_large_chunk(m, v);
        if (rsize < MIN_CHUNK_SIZE)
          set_inuse_and_pinuse(m, v, (rsize + nb));
        else {
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
          set_size_and_pinuse_of_free_chunk(r, rsize);
          insert_chunk(m, r, rsize);
        }
        return chunk2mem(v);
      }
    }
    CORRUPTION_ERROR_ACTION(m);
  }
  return 0;
}
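
/* Note (illustrative): the descent above treats each treebin as a bitwise
   trie keyed on chunk size.  'sizebits' holds nb pre-shifted so that the
   size bit for the current level sits at the top of the word; the
   expression (sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1 extracts that
   bit to choose child[0] or child[1], one size bit per level. */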

/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  compute_bit2idx(leastbit, i);
  v = t = *treebin_at(m, i);
  rsize = chunksize(t) - nb;

  while ((t = leftmost_child(t)) != 0) {
    size_t trem = chunksize(t) - nb;
    if (trem < rsize) {
      rsize = trem;
      v = t;
    }
  }

  if (RTCHECK(ok_address(m, v))) {
    mchunkptr r = chunk_plus_offset(v, nb);
    assert(chunksize(v) == rsize + nb);
    if (RTCHECK(ok_next(v, r))) {
      unlink_large_chunk(m, v);
      if (rsize < MIN_CHUNK_SIZE)
        set_inuse_and_pinuse(m, v, (rsize + nb));
      else {
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
        set_size_and_pinuse_of_free_chunk(r, rsize);
        replace_dv(m, r, rsize);
      }
      return chunk2mem(v);
    }
  }

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}
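
/* Note (illustrative): tmalloc_small is only reached when no smallbin or
   dv chunk fits, so it takes the smallest chunk of the least nonempty
   treebin and, when a remainder survives the split, installs it as the
   new dv via replace_dv rather than binning it. */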


void* malloc(size_t bytes)
{
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it.
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it.

     The ugly gotos here ensure that postaction occurs along all paths.
  */

  ensure_initialization(); /* initialize in sys_alloc if not using locks */

  PREACTION(gm);
  {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
//    printf("%s %p %d\n", __FUNCTION__, mem, bytes);
    return mem;
  }
//  FAIL();
  return 0;
}
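
/* A minimal sketch (not part of the allocator) of the small-request size
   arithmetic above, with assumed 32-bit defaults (SIZE_T_SIZE == 4, no
   FOOTERS, so MIN_REQUEST == 11, MIN_CHUNK_SIZE == 16, and 8-byte smallbin
   spacing): pad_request adds the chunk-overhead word and rounds up to the
   alignment, and small_index is then just a shift. */
#if 0
static unsigned small_request_to_bin(size_t bytes) {
  size_t nb = (bytes < 11) ? 16
            : ((bytes + 4 + 7) & ~(size_t)7);  /* pad_request(bytes) */
  return (unsigned)(nb >> 3);                  /* small_index(nb) */
}
#endif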

/* ---------------------------- free --------------------------- */

void free(void* mem)
{
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

//  dbgprintf("%s %p\n", __FUNCTION__, mem);

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    PREACTION(fm);
    {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
  erroraction:
      USAGE_ERROR_ACTION(fm, p);
  postaction:
      POSTACTION(fm);
    }
  }

//  LEAVE();

#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}
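
/* Illustrative usage of the exported entry points above (hypothetical
   demo, not part of the allocator); the sizes are arbitrary. */
#if 0
static void malloc_free_demo(void) {
  void* a = malloc(24);      /* small request: smallbin, dv, or top */
  void* b = malloc(300000);  /* large request: treebin, top, or mmap */
  free(a);                   /* consolidates with free neighbors, then bins */
  free(b);
  free(malloc(0));           /* malloc(0) returns a minimum-size chunk */
}
#endif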