/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
12
 
13
* Quickstart
14
 
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
20
 
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
34
 
35
* Vital statistics:
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
43
 
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128bytes), at the expense of using more space.
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
52
       and status information, and additional cross-check word
53
       if FOOTERS is defined.
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., number of extra bytes
61
       allocated than were requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
90
 
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
100
 
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
  else. And if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
129
 
130
* Overview of algorithms
131
 
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
157
 
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
162
 
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
167
 
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
201
 
202
 -------------------------  Compile-time options ---------------------------
203
 
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly cast.
207
 
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
218
 
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
222
 
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
225
 
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
230
 
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
235
 
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
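
  As an illustrative sketch (not part of the original text), a program
  built with -DUSE_DL_PREFIX can mix both allocators:

    void* p = dlmalloc(128);   // served by this allocator
    void* q = malloc(128);     // served by the system allocator
    dlfree(p);
    free(q);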
243
 
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to be bypassed
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
265
 
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282
  which will usually make debugging easier.
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc is unable to
286
  return memory because there is none available.
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
290
 
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
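
  As a rough sketch only (the names and the fixed arena below are
  assumptions, not part of this file), an sbrk-style MORECORE over a
  static arena might look like:

    static char my_arena[1024*1024];
    static char* my_brk = my_arena;
    void* my_morecore(intptr_t increment) {
      char* old = my_brk;
      if (increment < 0 ||
          increment > (my_arena + sizeof(my_arena)) - my_brk)
        return (void*)(~(size_t)0);   // i.e. MFAIL
      my_brk += increment;
      return old;                     // my_morecore(0) returns current top
    }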
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
309
  Setting it false when definitely non-contiguous saves time
310
  and possibly wasted space it would take to discover this though.
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
317
 
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
       able to unmap memory that may have been allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true, realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
331
 
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where page size is determined using GetSystemInfo during
349
  initialization.
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
354
 
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
358
  those in this file.
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions from being called so
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
438
 
439
*/
440
 
441
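/*
  KolibriOS kernel heap imports: the MMAP emulation further below
  (win32mmap, win32direct_mmap, win32munmap) obtains and releases whole
  regions through KernelAlloc/KernelFree rather than VirtualAlloc.
*/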
#define STDCALL __attribute__ ((stdcall)) __attribute__ ((dllimport))
442
void*  STDCALL KernelAlloc(unsigned size)__asm__("KernelAlloc");
443
int KernelFree(void*);
444
 
445
#define MALLOC_ALIGNMENT ((size_t)8U)
446
#define DEFAULT_MMAP_THRESHOLD  ((size_t)32U * (size_t)1024U)
447
#define NO_MALLINFO  1
448
#define HAVE_MMAP 1
449
#define MORECORE_CANNOT_TRIM
450
#define FOOTERS 0
451
#define ABORT
452
 
453
#ifndef WIN32
454
#ifdef _WIN32
455
#define WIN32 1
456
#endif  /* _WIN32 */
457
#endif  /* WIN32 */
458
#ifdef WIN32
459
#define WIN32_LEAN_AND_MEAN
460
#include <windows.h>
461
#define HAVE_MMAP 1
462
#define HAVE_MORECORE 0
463
#define LACKS_UNISTD_H
464
#define LACKS_SYS_PARAM_H
465
#define LACKS_SYS_MMAN_H
466
#define LACKS_STRING_H
467
#define LACKS_STRINGS_H
468
#define LACKS_SYS_TYPES_H
469
#define LACKS_ERRNO_H
470
#define MALLOC_FAILURE_ACTION
471
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
472
#endif  /* WIN32 */
473
 
474
#if defined(DARWIN) || defined(_DARWIN)
475
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
476
#ifndef HAVE_MORECORE
477
#define HAVE_MORECORE 0
478
#define HAVE_MMAP 1
479
#endif  /* HAVE_MORECORE */
480
#endif  /* DARWIN */
481
 
482
#ifndef LACKS_SYS_TYPES_H
483
#include <sys/types.h>  /* For size_t */
484
#endif  /* LACKS_SYS_TYPES_H */
485
 
486
/* The maximum possible size_t value has all bits set */
487
#define MAX_SIZE_T           (~(size_t)0)
488
 
489
#ifndef ONLY_MSPACES
490
#define ONLY_MSPACES 0
491
#endif  /* ONLY_MSPACES */
492
#ifndef MSPACES
493
#if ONLY_MSPACES
494
#define MSPACES 1
495
#else   /* ONLY_MSPACES */
496
#define MSPACES 0
497
#endif  /* ONLY_MSPACES */
498
#endif  /* MSPACES */
499
#ifndef MALLOC_ALIGNMENT
500
#define MALLOC_ALIGNMENT ((size_t)8U)
501
#endif  /* MALLOC_ALIGNMENT */
502
#ifndef FOOTERS
503
#define FOOTERS 0
504
#endif  /* FOOTERS */
505
#ifndef ABORT
506
#define ABORT  abort()
507
#endif  /* ABORT */
508
#ifndef ABORT_ON_ASSERT_FAILURE
509
#define ABORT_ON_ASSERT_FAILURE 1
510
#endif  /* ABORT_ON_ASSERT_FAILURE */
511
#ifndef PROCEED_ON_ERROR
512
#define PROCEED_ON_ERROR 0
513
#endif  /* PROCEED_ON_ERROR */
514
#ifndef USE_LOCKS
515
#define USE_LOCKS 0
516
#endif  /* USE_LOCKS */
517
#ifndef INSECURE
518
#define INSECURE 0
519
#endif  /* INSECURE */
520
#ifndef HAVE_MMAP
521
#define HAVE_MMAP 1
522
#endif  /* HAVE_MMAP */
523
#ifndef MMAP_CLEARS
524
#define MMAP_CLEARS 1
525
#endif  /* MMAP_CLEARS */
526
#ifndef HAVE_MREMAP
527
#ifdef linux
528
#define HAVE_MREMAP 1
529
#else   /* linux */
530
#define HAVE_MREMAP 0
531
#endif  /* linux */
532
#endif  /* HAVE_MREMAP */
533
#ifndef MALLOC_FAILURE_ACTION
534
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
535
#endif  /* MALLOC_FAILURE_ACTION */
536
#ifndef HAVE_MORECORE
537
#if ONLY_MSPACES
538
#define HAVE_MORECORE 0
539
#else   /* ONLY_MSPACES */
540
#define HAVE_MORECORE 1
541
#endif  /* ONLY_MSPACES */
542
#endif  /* HAVE_MORECORE */
543
#if !HAVE_MORECORE
544
#define MORECORE_CONTIGUOUS 0
545
#else   /* !HAVE_MORECORE */
546
#ifndef MORECORE
547
#define MORECORE sbrk
548
#endif  /* MORECORE */
549
#ifndef MORECORE_CONTIGUOUS
550
#define MORECORE_CONTIGUOUS 1
551
#endif  /* MORECORE_CONTIGUOUS */
552
#endif  /* HAVE_MORECORE */
553
#ifndef DEFAULT_GRANULARITY
554
#if MORECORE_CONTIGUOUS
555
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
556
#else   /* MORECORE_CONTIGUOUS */
557
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
558
#endif  /* MORECORE_CONTIGUOUS */
559
#endif  /* DEFAULT_GRANULARITY */
560
#ifndef DEFAULT_TRIM_THRESHOLD
561
#ifndef MORECORE_CANNOT_TRIM
562
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
563
#else   /* MORECORE_CANNOT_TRIM */
564
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
565
#endif  /* MORECORE_CANNOT_TRIM */
566
#endif  /* DEFAULT_TRIM_THRESHOLD */
567
#ifndef DEFAULT_MMAP_THRESHOLD
568
#if HAVE_MMAP
569
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
570
#else   /* HAVE_MMAP */
571
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
572
#endif  /* HAVE_MMAP */
573
#endif  /* DEFAULT_MMAP_THRESHOLD */
574
#ifndef USE_BUILTIN_FFS
575
#define USE_BUILTIN_FFS 0
576
#endif  /* USE_BUILTIN_FFS */
577
#ifndef USE_DEV_RANDOM
578
#define USE_DEV_RANDOM 0
579
#endif  /* USE_DEV_RANDOM */
580
#ifndef NO_MALLINFO
581
#define NO_MALLINFO 0
582
#endif  /* NO_MALLINFO */
583
#ifndef MALLINFO_FIELD_TYPE
584
#define MALLINFO_FIELD_TYPE size_t
585
#endif  /* MALLINFO_FIELD_TYPE */
586
 
587
 
588
/*
589
  mallopt tuning options.  SVID/XPG defines four standard parameter
590
  numbers for mallopt, normally defined in malloc.h.  None of these
591
  are used in this malloc, so setting them has no effect. But this
592
  malloc does support the following options.
593
*/
594
 
595
#define M_TRIM_THRESHOLD     (-1)
596
#define M_GRANULARITY        (-2)
597
#define M_MMAP_THRESHOLD     (-3)
598
 
599
/* ------------------------ Mallinfo declarations ------------------------ */
600
 
601
#if !NO_MALLINFO
602
#endif /* NO_MALLINFO */
603
 
604
#ifdef __cplusplus
605
extern "C" {
606
#endif /* __cplusplus */
607
 
608
#if !ONLY_MSPACES
609
 
610
/* ------------------- Declarations of public routines ------------------- */
611
 
612
#ifndef USE_DL_PREFIX
613
#define dlcalloc               calloc
614
#define dlfree                 free
615
#define dlmalloc               malloc
616
#define dlmemalign             memalign
617
#define dlrealloc              realloc
618
#define dlvalloc               valloc
619
#define dlpvalloc              pvalloc
620
#define dlmallinfo             mallinfo
621
#define dlmallopt              mallopt
622
#define dlmalloc_trim          malloc_trim
623
#define dlmalloc_stats         malloc_stats
624
#define dlmalloc_usable_size   malloc_usable_size
625
#define dlmalloc_footprint     malloc_footprint
626
#define dlmalloc_max_footprint malloc_max_footprint
627
#define dlindependent_calloc   independent_calloc
628
#define dlindependent_comalloc independent_comalloc
629
#endif /* USE_DL_PREFIX */
630
 
631
 
632
/*
633
  malloc(size_t n)
634
  Returns a pointer to a newly allocated chunk of at least n bytes, or
635
  null if no space is available, in which case errno is set to ENOMEM
636
  on ANSI C systems.
637
 
638
  If n is zero, malloc returns a minimum-sized chunk. (The minimum
639
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
640
  systems.)  Note that size_t is an unsigned type, so calls with
641
  arguments that would be negative if signed are interpreted as
642
  requests for huge amounts of space, which will often fail. The
643
  maximum supported value of n differs across systems, but is in all
644
  cases less than the maximum representable value of a size_t.
645
*/
646
void* dlmalloc(size_t);
647
 
648
/*
649
  free(void* p)
650
  Releases the chunk of memory pointed to by p, that had been previously
651
  allocated using malloc or a related routine such as realloc.
652
  It has no effect if p is null. If p was not malloced or already
653
  freed, free(p) will by default cause the current program to abort.
654
*/
655
void  dlfree(void*);
656
 
657
/*
658
  calloc(size_t n_elements, size_t element_size);
659
  Returns a pointer to n_elements * element_size bytes, with all locations
660
  set to zero.
661
*/
662
void* dlcalloc(size_t, size_t);
663
 
664
/*
665
  realloc(void* p, size_t n)
666
  Returns a pointer to a chunk of size n that contains the same data
667
  as does chunk p up to the minimum of (n, p's size) bytes, or null
668
  if no space is available.
669
 
670
  The returned pointer may or may not be the same as p. The algorithm
671
  prefers extending p in most cases when possible, otherwise it
672
  employs the equivalent of a malloc-copy-free sequence.
673
 
674
  If p is null, realloc is equivalent to malloc.
675
 
676
  If space is not available, realloc returns null, errno is set (if on
677
  ANSI) and p is NOT freed.
678
 
679
  if n is for fewer bytes than already held by p, the newly unused
680
  space is lopped off and freed if possible.  realloc with a size
681
  argument of zero (re)allocates a minimum-sized chunk.
682
 
683
  The old unix realloc convention of allowing the last-free'd chunk
684
  to be used as an argument to realloc is not supported.
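
  For illustration (handle_oom is a hypothetical error handler, not part
  of this file), the usual pattern that avoids leaking p on failure is:

    void* bigger = realloc(p, n);
    if (bigger != 0)
      p = bigger;        // success; the old value of p must not be reused
    else
      handle_oom(p);     // failure; p is still valid and still allocated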
685
*/
686
 
687
void* dlrealloc(void*, size_t);
688
 
689
/*
690
  memalign(size_t alignment, size_t n);
691
  Returns a pointer to a newly allocated chunk of n bytes, aligned
692
  in accord with the alignment argument.
693
 
694
  The alignment argument should be a power of two. If the argument is
695
  not a power of two, the nearest greater power is used.
696
  8-byte alignment is guaranteed by normal malloc calls, so don't
697
  bother calling memalign with an argument of 8 or less.
698
 
699
  Overreliance on memalign is a sure way to fragment space.
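
  For example (illustrative only):

    void* p = memalign(64, 200);       // p is aligned to a 64-byte boundary
    assert(((size_t)p & 63) == 0);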
700
*/
701
void* dlmemalign(size_t, size_t);
702
 
703
/*
704
  valloc(size_t n);
705
  Equivalent to memalign(pagesize, n), where pagesize is the page
706
  size of the system. If the pagesize is unknown, 4096 is used.
707
*/
708
void* dlvalloc(size_t);
709
 
710
/*
711
  mallopt(int parameter_number, int parameter_value)
712
  Sets tunable parameters. The format is to provide a
713
  (parameter-number, parameter-value) pair.  mallopt then sets the
714
  corresponding parameter to the argument value if it can (i.e., so
715
  long as the value is meaningful), and returns 1 if successful else
716
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
717
  normally defined in malloc.h.  None of these are used in this malloc,
718
  so setting them has no effect. But this malloc also supports other
719
  options in mallopt. See below for details.  Briefly, supported
720
  parameters are as follows (listed defaults are for "typical"
721
  configurations).
722
 
723
  Symbol            param #  default    allowed param values
724
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
725
  M_GRANULARITY        -2     page size   any power of 2 >= page size
726
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
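
  For illustration only (the values below are arbitrary examples):

    mallopt(M_MMAP_THRESHOLD, 64*1024);    // returns 1 on success, else 0
    mallopt(M_TRIM_THRESHOLD, 1024*1024);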
727
*/
728
int dlmallopt(int, int);
729
 
730
/*
731
  malloc_footprint();
732
  Returns the number of bytes obtained from the system.  The total
733
  number of bytes allocated by malloc, realloc etc., is less than this
734
  value. Unlike mallinfo, this function returns only a precomputed
735
  result, so can be called frequently to monitor memory consumption.
736
  Even if locks are otherwise defined, this function does not use them,
737
  so results might not be up to date.
738
*/
739
size_t dlmalloc_footprint(void);
740
 
741
/*
742
  malloc_max_footprint();
743
  Returns the maximum number of bytes obtained from the system. This
744
  value will be greater than current footprint if deallocated space
745
  has been reclaimed by the system. The peak number of bytes allocated
746
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
747
  this function returns only a precomputed result, so can be called
748
  frequently to monitor memory consumption.  Even if locks are
749
  otherwise defined, this function does not use them, so results might
750
  not be up to date.
751
*/
752
size_t dlmalloc_max_footprint(void);
753
 
754
#if !NO_MALLINFO
755
#endif /* NO_MALLINFO */
756
 
757
/*
758
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
759
 
760
  independent_calloc is similar to calloc, but instead of returning a
761
  single cleared space, it returns an array of pointers to n_elements
762
  independent elements that can hold contents of size elem_size, each
763
  of which starts out cleared, and can be independently freed,
764
  realloc'ed etc. The elements are guaranteed to be adjacently
765
  allocated (this is not guaranteed to occur with multiple callocs or
766
  mallocs), which may also improve cache locality in some
767
  applications.
768
 
769
  The "chunks" argument is optional (i.e., may be null, which is
770
  probably the most typical usage). If it is null, the returned array
771
  is itself dynamically allocated and should also be freed when it is
772
  no longer needed. Otherwise, the chunks array must be of at least
773
  n_elements in length. It is filled in with the pointers to the
774
  chunks.
775
 
776
  In either case, independent_calloc returns this pointer array, or
777
  null if the allocation failed.  If n_elements is zero and "chunks"
778
  is null, it returns a chunk representing an array with zero elements
779
  (which should be freed if not wanted).
780
 
781
  Each element must be individually freed when it is no longer
782
  needed. If you'd like to instead be able to free all at once, you
783
  should instead use regular calloc and assign pointers into this
784
  space to represent elements.  (In this case though, you cannot
785
  independently free elements.)
786
 
787
  independent_calloc simplifies and speeds up implementations of many
788
  kinds of pools.  It may also be useful when constructing large data
789
  structures that initially have a fixed number of fixed-sized nodes,
790
  but the number is not known at compile time, and some of the nodes
791
  may later need to be freed. For example:
792
 
793
  struct Node { int item; struct Node* next; };
794
 
795
  struct Node* build_list() {
796
    struct Node** pool;
797
    int n = read_number_of_nodes_needed();
798
    if (n <= 0) return 0;
799
    pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
800
    if (pool == 0) die();
801
    // organize into a linked list...
802
    struct Node* first = pool[0];
803
    for (int i = 0; i < n-1; ++i)
804
      pool[i]->next = pool[i+1];
805
    free(pool);     // Can now free the array (or not, if it is needed later)
806
    return first;
807
  }
808
*/
809
void** dlindependent_calloc(size_t, size_t, void**);
810
 
811
/*
812
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
813
 
814
  independent_comalloc allocates, all at once, a set of n_elements
815
  chunks with sizes indicated in the "sizes" array.    It returns
816
  an array of pointers to these elements, each of which can be
817
  independently freed, realloc'ed etc. The elements are guaranteed to
818
  be adjacently allocated (this is not guaranteed to occur with
819
  multiple callocs or mallocs), which may also improve cache locality
820
  in some applications.
821
 
822
  The "chunks" argument is optional (i.e., may be null). If it is null
823
  the returned array is itself dynamically allocated and should also
824
  be freed when it is no longer needed. Otherwise, the chunks array
825
  must be of at least n_elements in length. It is filled in with the
826
  pointers to the chunks.
827
 
828
  In either case, independent_comalloc returns this pointer array, or
829
  null if the allocation failed.  If n_elements is zero and chunks is
830
  null, it returns a chunk representing an array with zero elements
831
  (which should be freed if not wanted).
832
 
833
  Each element must be individually freed when it is no longer
834
  needed. If you'd like to instead be able to free all at once, you
835
  should instead use a single regular malloc, and assign pointers at
836
  particular offsets in the aggregate space. (In this case though, you
837
  cannot independently free elements.)
838
 
839
  independent_comalloc differs from independent_calloc in that each
840
  element may have a different size, and also that it does not
841
  automatically clear elements.
842
 
843
  independent_comalloc can be used to speed up allocation in cases
844
  where several structs or objects must always be allocated at the
845
  same time.  For example:
846
 
847
  struct Head { ... }
848
  struct Foot { ... }
849
 
850
  void send_message(char* msg) {
851
    int msglen = strlen(msg);
852
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
853
    void* chunks[3];
854
    if (independent_comalloc(3, sizes, chunks) == 0)
855
      die();
856
    struct Head* head = (struct Head*)(chunks[0]);
857
    char*        body = (char*)(chunks[1]);
858
    struct Foot* foot = (struct Foot*)(chunks[2]);
859
    // ...
860
  }
861
 
862
  In general though, independent_comalloc is worth using only for
863
  larger values of n_elements. For small values, you probably won't
864
  detect enough difference from series of malloc calls to bother.
865
 
866
  Overuse of independent_comalloc can increase overall memory usage,
867
  since it cannot reuse existing noncontiguous small chunks that
868
  might be available for some of the elements.
869
*/
870
void** dlindependent_comalloc(size_t, size_t*, void**);
871
 
872
 
873
/*
874
  pvalloc(size_t n);
875
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
876
  round up n to nearest pagesize.
877
 */
878
void*  dlpvalloc(size_t);
879
 
880
/*
881
  malloc_trim(size_t pad);
882
 
883
  If possible, gives memory back to the system (via negative arguments
884
  to sbrk) if there is unused memory at the `high' end of the malloc
885
  pool or in unused MMAP segments. You can call this after freeing
886
  large blocks of memory to potentially reduce the system-level memory
887
  requirements of a program. However, it cannot guarantee to reduce
888
  memory. Under some allocation patterns, some large free blocks of
889
  memory will be locked between two used chunks, so they cannot be
890
  given back to the system.
891
 
892
  The `pad' argument to malloc_trim represents the amount of free
893
  trailing space to leave untrimmed. If this argument is zero, only
894
  the minimum amount of memory to maintain internal data structures
895
  will be left. Non-zero arguments can be supplied to maintain enough
896
  trailing space to service future expected allocations without having
897
  to re-obtain memory from the system.
898
 
899
  Malloc_trim returns 1 if it actually released any memory, else 0.
900
*/
901
int  dlmalloc_trim(size_t);
902
 
903
/*
904
  malloc_usable_size(void* p);
905
 
906
  Returns the number of bytes you can actually use in
907
  an allocated chunk, which may be more than you requested (although
908
  often not) due to alignment and minimum size constraints.
909
  You can use this many bytes without worrying about
910
  overwriting other allocated objects. This is not a particularly great
911
  programming practice. malloc_usable_size can be more useful in
912
  debugging and assertions, for example:
913
 
914
  p = malloc(n);
915
  assert(malloc_usable_size(p) >= 256);
916
*/
917
size_t dlmalloc_usable_size(void*);
918
 
919
/*
920
  malloc_stats();
921
  Prints on stderr the amount of space obtained from the system (both
922
  via sbrk and mmap), the maximum amount (which may be more than
923
  current if malloc_trim and/or munmap got called), and the current
924
  number of bytes allocated via malloc (or realloc, etc) but not yet
925
  freed. Note that this is the number of bytes allocated, not the
926
  number requested. It will be larger than the number requested
927
  because of alignment and bookkeeping overhead. Because it includes
928
  alignment wastage as being in use, this figure may be greater than
929
  zero even when no user-level chunks are allocated.
930
 
931
  The reported current and maximum system memory can be inaccurate if
932
  a program makes other calls to system memory allocation functions
933
  (normally sbrk) outside of malloc.
934
 
935
  malloc_stats prints only the most commonly interesting statistics.
936
  More information can be obtained by calling mallinfo.
937
*/
938
void  dlmalloc_stats(void);
939
 
940
#endif /* ONLY_MSPACES */
941
 
942
#if MSPACES
943
#endif /* MSPACES */
944
 
945
#ifdef __cplusplus
946
};  /* end of extern "C" */
947
#endif /* __cplusplus */
948
 
949
/*
950
  ========================================================================
951
  To make a fully customizable malloc.h header file, cut everything
952
  above this line, put into file malloc.h, edit to suit, and #include it
953
  on the next line, as well as in programs that use this malloc.
954
  ========================================================================
955
*/
956
 
957
/* #include "malloc.h" */
958
 
959
/*------------------------------ internal #includes ---------------------- */
960
 
961
#ifdef WIN32
962
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
963
#endif /* WIN32 */
964
 
965
#include <stdio.h>       /* for printing in malloc_stats */
966
 
967
#ifndef LACKS_ERRNO_H
968
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
969
#endif /* LACKS_ERRNO_H */
970
#if FOOTERS
971
#include <time.h>        /* for magic initialization */
972
#endif /* FOOTERS */
973
#ifndef LACKS_STDLIB_H
974
#include <stdlib.h>      /* for abort() */
975
#endif /* LACKS_STDLIB_H */
976
#ifdef DEBUG
977
#if ABORT_ON_ASSERT_FAILURE
978
#define assert(x) if(!(x)) ABORT
979
#else /* ABORT_ON_ASSERT_FAILURE */
980
#include <assert.h>
981
#endif /* ABORT_ON_ASSERT_FAILURE */
982
#else  /* DEBUG */
983
#define assert(x)
984
#endif /* DEBUG */
985
#ifndef LACKS_STRING_H
986
#include <string.h>      /* for memset etc */
987
#endif  /* LACKS_STRING_H */
988
#if USE_BUILTIN_FFS
989
#ifndef LACKS_STRINGS_H
990
#include <strings.h>     /* for ffs */
991
#endif /* LACKS_STRINGS_H */
992
#endif /* USE_BUILTIN_FFS */
993
#if HAVE_MMAP
994
#ifndef LACKS_SYS_MMAN_H
995
#include <sys/mman.h>    /* for mmap */
996
#endif /* LACKS_SYS_MMAN_H */
997
#ifndef LACKS_FCNTL_H
998
#include <fcntl.h>
999
#endif /* LACKS_FCNTL_H */
1000
#endif /* HAVE_MMAP */
1001
#if HAVE_MORECORE
1002
#endif /* HAVE_MORECORE */
1003
 
1004
#ifndef WIN32
1005
#endif
1006
 
1007
/* ------------------- size_t and alignment properties -------------------- */
1008
 
1009
/* The byte and bit size of a size_t */
1010
#define SIZE_T_SIZE         (sizeof(size_t))
1011
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1012
 
1013
/* Some constants coerced to size_t */
1014
/* Annoying but necessary to avoid errors on some platforms */
1015
#define SIZE_T_ZERO         ((size_t)0)
1016
#define SIZE_T_ONE          ((size_t)1)
1017
#define SIZE_T_TWO          ((size_t)2)
1018
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1019
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1020
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1021
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1022
 
1023
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1024
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1025
 
1026
/* True if address a has acceptable alignment */
1027
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1028
 
1029
/* the number of bytes to offset an address to align it */
1030
#define align_offset(A)\
1031
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1032
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
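
/*
   Worked example (assuming the default MALLOC_ALIGNMENT of 8): for an
   address ending in 0x0D, (A & CHUNK_ALIGN_MASK) is 5, so align_offset
   yields (8 - 5) & 7 == 3; adding 3 bytes advances the address to the
   next 8-byte boundary (0x10).
*/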
1033
 
1034
/* -------------------------- MMAP preliminaries ------------------------- */
1035
 
1036
/*
1037
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1038
   checks to fail so compiler optimizer can delete code rather than
1039
   using so many "#if"s.
1040
*/
1041
 
1042
 
1043
/* MORECORE and MMAP must return MFAIL on failure */
1044
#define MFAIL                ((void*)(MAX_SIZE_T))
1045
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1046
 
1047
#if !HAVE_MMAP
1048
#else /* HAVE_MMAP */
1049
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
1050
#define USE_MMAP_BIT         (SIZE_T_ONE)
1051
 
1052
#ifndef WIN32
1053
 
1054
#else /* WIN32 */
1055
 
1056
/* MMAP emulation: this port uses the KolibriOS KernelAlloc instead of VirtualAlloc */
1057
static void* win32mmap(size_t size) {
1058
  void* ptr = KernelAlloc(size);
1059
  return (ptr != 0)? ptr: MFAIL;
1060
}
1061
 
1062
/* Direct MMAP for large requests; in this port it also goes through KernelAlloc */
1063
static void* win32direct_mmap(size_t size) {
1064
  void* ptr = KernelAlloc(size);
1065
  return (ptr != 0)? ptr: MFAIL;
1066
}
1067
 
1068
/* This function supports releasing coalesced segments */
1069
static int win32munmap(void* ptr, size_t size) {
1070
  KernelFree(ptr);
1071
  return 0;
1072
}
1073
 
1074
#define CALL_MMAP(s)         win32mmap(s)
1075
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1076
#define DIRECT_MMAP(s)       win32direct_mmap(s)
1077
#endif /* WIN32 */
1078
#endif /* HAVE_MMAP */
1079
 
1080
#if HAVE_MMAP && HAVE_MREMAP
1081
#else  /* HAVE_MMAP && HAVE_MREMAP */
1082
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1083
#endif /* HAVE_MMAP && HAVE_MREMAP */
1084
 
1085
#if HAVE_MORECORE
1086
#else  /* HAVE_MORECORE */
1087
#define CALL_MORECORE(S)     MFAIL
1088
#endif /* HAVE_MORECORE */
1089
 
1090
/* mstate bit set if contiguous morecore disabled or failed */
1091
#define USE_NONCONTIGUOUS_BIT (4U)
1092
 
1093
/* segment bit set in create_mspace_with_base */
1094
#define EXTERN_BIT            (8U)
1095
 
1096
 
1097
/* --------------------------- Lock preliminaries ------------------------ */
1098
 
1099
#if USE_LOCKS
1100
#else  /* USE_LOCKS */
1101
#define USE_LOCK_BIT               (0U)
1102
#define INITIAL_LOCK(l)
1103
#endif /* USE_LOCKS */
1104
 
1105
#if USE_LOCKS && HAVE_MORECORE
1106
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1107
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1108
#else /* USE_LOCKS && HAVE_MORECORE */
1109
#define ACQUIRE_MORECORE_LOCK()
1110
#define RELEASE_MORECORE_LOCK()
1111
#endif /* USE_LOCKS && HAVE_MORECORE */
1112
 
1113
#if USE_LOCKS
1114
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1115
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1116
#else  /* USE_LOCKS */
1117
#define ACQUIRE_MAGIC_INIT_LOCK()
1118
#define RELEASE_MAGIC_INIT_LOCK()
1119
#endif /* USE_LOCKS */
1120
 
1121
 
1122
/* -----------------------  Chunk representations ------------------------ */
1123
 
1124
/*
1125
  (The following includes lightly edited explanations by Colin Plumb.)
1126
 
1127
  The malloc_chunk declaration below is misleading (but accurate and
1128
  necessary).  It declares a "view" into memory allowing access to
1129
  necessary fields at known offsets from a given base.
1130
 
1131
  Chunks of memory are maintained using a `boundary tag' method as
1132
  originally described by Knuth.  (See the paper by Paul Wilson
1133
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1134
  techniques.)  Sizes of free chunks are stored both in the front of
1135
  each chunk and at the end.  This makes consolidating fragmented
1136
  chunks into bigger chunks fast.  The head fields also hold bits
1137
  representing whether chunks are free or in use.
1138
 
1139
  Here are some pictures to make it clearer.  They are "exploded" to
1140
  show that the state of a chunk can be thought of as extending from
1141
  the high 31 bits of the head field of its header through the
1142
  prev_foot and PINUSE_BIT bit of the following chunk header.
1143
 
1144
  A chunk that's in use looks like:
1145
 
1146
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1147
           | Size of previous chunk (if P = 1)                             |
1148
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1149
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1150
         | Size of this chunk                                         1| +-+
1151
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1152
         |                                                               |
1153
         +-                                                             -+
1154
         |                                                               |
1155
         +-                                                             -+
1156
         |                                                               :
1157
         +-      size - sizeof(size_t) available payload bytes          -+
1158
         :                                                               |
1159
 chunk-> +-                                                             -+
1160
         |                                                               |
1161
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1162
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1163
       | Size of next chunk (may or may not be in use)               | +-+
1164
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1165
 
1166
    And if it's free, it looks like this:
1167
 
1168
   chunk-> +-                                                             -+
1169
           | User payload (must be in use, or we would have merged!)       |
1170
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1171
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1172
         | Size of this chunk                                         0| +-+
1173
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1174
         | Next pointer                                                  |
1175
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1176
         | Prev pointer                                                  |
1177
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1178
         |                                                               :
1179
         +-      size - sizeof(struct chunk) unused bytes               -+
1180
         :                                                               |
1181
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1182
         | Size of this chunk                                            |
1183
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1184
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1185
       | Size of next chunk (must be in use, or we would have merged)| +-+
1186
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1187
       |                                                               :
1188
       +- User payload                                                -+
1189
       :                                                               |
1190
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1191
                                                                     |0|
1192
                                                                     +-+
1193
  Note that since we always merge adjacent free chunks, the chunks
1194
  adjacent to a free chunk must be in use.
1195
 
1196
  Given a pointer to a chunk (which can be derived trivially from the
1197
  payload pointer) we can, in O(1) time, find out whether the adjacent
1198
  chunks are free, and if so, unlink them from the lists that they
1199
  are on and merge them with the current chunk.
1200
 
1201
  Chunks always begin on even word boundaries, so the mem portion
1202
  (which is returned to the user) is also on an even word boundary, and
1203
  thus at least double-word aligned.
1204
 
1205
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1206
  chunk size (which is always a multiple of two words), is an in-use
1207
  bit for the *previous* chunk.  If that bit is *clear*, then the
1208
  word before the current chunk size contains the previous chunk
1209
  size, and can be used to find the front of the previous chunk.
1210
  The very first chunk allocated always has this bit set, preventing
1211
  access to non-existent (or non-owned) memory. If pinuse is set for
1212
  any given chunk, then you CANNOT determine the size of the
1213
  previous chunk, and might even get a memory addressing fault when
1214
  trying to do so.
1215
 
1216
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1217
  the chunk size redundantly records whether the current chunk is
1218
  inuse. This redundancy enables usage checks within free and realloc,
1219
  and reduces indirection when freeing and consolidating chunks.
1220
 
1221
  Each freshly allocated chunk must have both cinuse and pinuse set.
1222
  That is, each allocated chunk borders either a previously allocated
1223
  and still in-use chunk, or the base of its memory arena. This is
1224
  ensured by making all allocations from the `lowest' part of any
1225
  found chunk.  Further, no free chunk physically borders another one,
1226
  so each free chunk is known to be preceded and followed by either
1227
  inuse chunks or the ends of memory.
1228
 
1229
  Note that the `foot' of the current chunk is actually represented
1230
  as the prev_foot of the NEXT chunk. This makes it easier to
1231
  deal with alignments etc but can be very confusing when trying
1232
  to extend or adapt this code.
1233
 
1234
  The exceptions to all this are
1235
 
1236
     1. The special chunk `top' is the top-most available chunk (i.e.,
1237
        the one bordering the end of available memory). It is treated
1238
        specially.  Top is never included in any bin, is used only if
1239
        no other chunk is available, and is released back to the
1240
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1241
        the top chunk is treated as larger (and thus less well
1242
        fitting) than any other available chunk.  The top chunk
1243
        doesn't update its trailing size field since there is no next
1244
        contiguous chunk that would have to index off it. However,
1245
        space is still allocated for it (TOP_FOOT_SIZE) to enable
1246
        separation or merging when space is extended.
1247
 
1248
     2. Chunks allocated via mmap, which have the lowest-order bit
1249
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1250
        PINUSE_BIT in their head fields.  Because they are allocated
1251
        one-by-one, each must carry its own prev_foot field, which is
1252
        also used to hold the offset this chunk has within its mmapped
1253
        region, which is needed to preserve alignment. Each mmapped
1254
        chunk is trailed by the first two fields of a fake next-chunk
1255
        for sake of usage checks.
1256
 
1257
*/
1258
 
1259
struct malloc_chunk {
1260
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
1261
  size_t               head;       /* Size and inuse bits. */
1262
  struct malloc_chunk* fd;         /* double links -- used only if free. */
1263
  struct malloc_chunk* bk;
1264
};
1265
 
1266
typedef struct malloc_chunk  mchunk;
1267
typedef struct malloc_chunk* mchunkptr;
1268
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1269
typedef unsigned int bindex_t;         /* Described below */
1270
typedef unsigned int binmap_t;         /* Described below */
1271
typedef unsigned int flag_t;           /* The type of various bit flag sets */
1272
 
1273
/* -------------------- Chunk sizes and alignments ----------------------- */
1274
 
1275
#define MCHUNK_SIZE         (sizeof(mchunk))
1276
 
1277
#if FOOTERS
1278
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1279
#else /* FOOTERS */
1280
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1281
#endif /* FOOTERS */
1282
 
1283
/* MMapped chunks need a second word of overhead ... */
1284
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1285
/* ... and additional padding for fake next-chunk at foot */
1286
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1287
 
1288
/* The smallest size we can malloc is an aligned minimal chunk */
1289
#define MIN_CHUNK_SIZE\
1290
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1291
 
1292
/* conversion from malloc headers to user pointers, and back */
1293
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1294
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1295
/* chunk associated with aligned address A */
1296
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1297
 
1298
/* Bounds on request (not chunk) sizes. */
1299
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1300
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1301
 
1302
/* pad request bytes into a usable size */
1303
#define pad_request(req) \
1304
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1305
 
1306
/* pad request, checking for minimum (but not maximum) */
1307
#define request2size(req) \
1308
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
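 
/*
  Illustrative sketch (not compiled): how request2size behaves on one
  assumed configuration -- 4-byte size_t, 8-byte MALLOC_ALIGNMENT and
  FOOTERS disabled, so CHUNK_OVERHEAD == 4 and CHUNK_ALIGN_MASK == 7.
*/
#if 0
static void example_request_padding(void) {
  size_t s1 = request2size(13); /* (13 + 4 + 7) & ~7 == 24                 */
  size_t s2 = request2size(1);  /* below MIN_REQUEST (11), so 16 (minimum) */
  (void)s1; (void)s2;
}
#endif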
1309
 
1310
 
1311
/* ------------------ Operations on head and foot fields ----------------- */
1312
 
1313
/*
1314
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
1315
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1316
  use. If the chunk was obtained with mmap, the prev_foot field has
1317
  IS_MMAPPED_BIT set and additionally holds the offset of the base of the
1318
  chunk from the base of the mmapped region.
1319
*/
1320
 
1321
#define PINUSE_BIT          (SIZE_T_ONE)
1322
#define CINUSE_BIT          (SIZE_T_TWO)
1323
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1324
 
1325
/* Head value for fenceposts */
1326
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1327
 
1328
/* extraction of fields from head words */
1329
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1330
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1331
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1332
 
1333
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1334
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1335
 
1336
/* Treat space at ptr +/- offset as a chunk */
1337
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1338
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1339
 
1340
/* Ptr to next or previous physical malloc_chunk. */
1341
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1342
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1343
 
1344
/* extract next chunk's pinuse bit */
1345
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1346
 
1347
/* Get/set size at footer */
1348
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1349
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1350
 
1351
/* Set size, pinuse bit, and foot */
1352
#define set_size_and_pinuse_of_free_chunk(p, s)\
1353
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1354
 
1355
/* Set size, pinuse bit, foot, and clear next pinuse */
1356
#define set_free_with_pinuse(p, s, n)\
1357
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1358
 
1359
#define is_mmapped(p)\
1360
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1361
 
1362
/* Get the internal overhead associated with chunk p */
1363
#define overhead_for(p)\
1364
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1365
 
1366
/* Return true if malloced space is not necessarily cleared */
1367
#if MMAP_CLEARS
1368
#define calloc_must_clear(p) (!is_mmapped(p))
1369
#else /* MMAP_CLEARS */
1370
#define calloc_must_clear(p) (1)
1371
#endif /* MMAP_CLEARS */
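 
/*
  Illustrative sketch (not compiled): navigating from a payload pointer
  with the macros above.  Because the next chunk's PINUSE_BIT mirrors
  cinuse(p), both views of p's status can be cross-checked in O(1).
*/
#if 0
static void example_head_bits(void* mem) {
  mchunkptr p = mem2chunk(mem);            /* payload -> chunk header */
  mchunkptr n = next_chunk(p);             /* next physical chunk     */
  if (cinuse(p) && pinuse(n)) {
    /* both the C bit of p and the P bit of n say p is in use; note that
       p->prev_foot is only meaningful while the *previous* chunk is free */
  }
}
#endif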
1372
 
1373
/* ---------------------- Overlaid data structures ----------------------- */
1374
 
1375
/*
1376
  When chunks are not in use, they are treated as nodes of either
1377
  lists or trees.
1378
 
1379
  "Small"  chunks are stored in circular doubly-linked lists, and look
1380
  like this:
1381
 
1382
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1383
            |             Size of previous chunk                            |
1384
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1385
    `head:' |             Size of chunk, in bytes                         |P|
1386
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1387
            |             Forward pointer to next chunk in list             |
1388
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1389
            |             Back pointer to previous chunk in list            |
1390
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1391
            |             Unused space (may be 0 bytes long)                .
1392
            .                                                               .
1393
            .                                                               |
1394
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1395
    `foot:' |             Size of chunk, in bytes                           |
1396
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1397
 
1398
  Larger chunks are kept in a form of bitwise digital trees (aka
1399
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1400
  free chunks greater than 256 bytes, their size doesn't impose any
1401
  constraints on user chunk sizes.  Each node looks like:
1402
 
1403
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1404
            |             Size of previous chunk                            |
1405
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1406
    `head:' |             Size of chunk, in bytes                         |P|
1407
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1408
            |             Forward pointer to next chunk of same size        |
1409
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1410
            |             Back pointer to previous chunk of same size       |
1411
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1412
            |             Pointer to left child (child[0])                  |
1413
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1414
            |             Pointer to right child (child[1])                 |
1415
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1416
            |             Pointer to parent                                 |
1417
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1418
            |             bin index of this chunk                           |
1419
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1420
            |             Unused space                                      .
1421
            .                                                               |
1422
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1423
    `foot:' |             Size of chunk, in bytes                           |
1424
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1425
 
1426
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1427
  of the same size are arranged in a circularly-linked list, with only
1428
  the oldest chunk (the next to be used, in our FIFO ordering)
1429
  actually in the tree.  (Tree members are distinguished by a non-null
1430
  parent pointer.)  If a chunk with the same size as an existing node
1431
  is inserted, it is linked off the existing node using pointers that
1432
  work in the same way as fd/bk pointers of small chunks.
1433
 
1434
  Each tree contains a power of 2 sized range of chunk sizes (the
1435
  smallest is 0x100 <= x < 0x180), which is divided in half at each
1436
  tree level, with the chunks in the smaller half of the range (0x100
1437
  <= x < 0x140 for the top node) in the left subtree and the larger
1438
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1439
  done by inspecting individual bits.
1440
 
1441
  Using these rules, each node's left subtree contains all smaller
1442
  sizes than its right subtree.  However, the node at the root of each
1443
  subtree has no particular ordering relationship to either.  (The
1444
  dividing line between the subtree sizes is based on trie relation.)
1445
  If we remove the last chunk of a given size from the interior of the
1446
  tree, we need to replace it with a leaf node.  The tree ordering
1447
  rules permit a node to be replaced by any leaf below it.
1448
 
1449
  The smallest chunk in a tree (a common operation in a best-fit
1450
  allocator) can be found by walking a path to the leftmost leaf in
1451
  the tree.  Unlike a usual binary tree, where we follow left child
1452
  pointers until we reach a null, here we follow the right child
1453
  pointer any time the left one is null, until we reach a leaf with
1454
  both child pointers null. The smallest chunk in the tree will be
1455
  somewhere along that path.
1456
 
1457
  The worst case number of steps to add, find, or remove a node is
1458
  bounded by the number of bits differentiating chunks within
1459
  bins. Under current bin calculations, this ranges from 6 up to 21
1460
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1461
  is of course much better.
1462
*/
1463
 
1464
struct malloc_tree_chunk {
1465
  /* The first four fields must be compatible with malloc_chunk */
1466
  size_t                    prev_foot;
1467
  size_t                    head;
1468
  struct malloc_tree_chunk* fd;
1469
  struct malloc_tree_chunk* bk;
1470
 
1471
  struct malloc_tree_chunk* child[2];
1472
  struct malloc_tree_chunk* parent;
1473
  bindex_t                  index;
1474
};
1475
 
1476
typedef struct malloc_tree_chunk  tchunk;
1477
typedef struct malloc_tree_chunk* tchunkptr;
1478
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1479
 
1480
/* A little helper macro for trees */
1481
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
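 
/*
  Illustrative sketch (not compiled): the leftmost-leaf walk described
  above.  Following child[0], or child[1] when it is null, traces a path
  that is guaranteed to contain the smallest chunk in the tree.
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr least = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(least))
      least = t;                   /* track the minimum along the path */
  }
  return least;
}
#endif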
1482
 
1483
/* ----------------------------- Segments -------------------------------- */
1484
 
1485
/*
1486
  Each malloc space may include non-contiguous segments, held in a
1487
  list headed by an embedded malloc_segment record representing the
1488
  top-most space. Segments also include flags holding properties of
1489
  the space. Large chunks that are directly allocated by mmap are not
1490
  included in this list. They are instead independently created and
1491
  destroyed without otherwise keeping track of them.
1492
 
1493
  Segment management mainly comes into play for spaces allocated by
1494
  MMAP.  Any call to MMAP might or might not return memory that is
1495
  adjacent to an existing segment.  MORECORE normally contiguously
1496
  extends the current space, so this space is almost always adjacent,
1497
  which is simpler and faster to deal with. (This is why MORECORE is
1498
  used preferentially to MMAP when both are available -- see
1499
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1500
  hinting mechanisms (inconsistently) supported in various
1501
  implementations of unix mmap, or distinguish reserving from
1502
  committing memory. Instead, we just ask for space, and exploit
1503
  contiguity when we get it.  It is probably possible to do
1504
  better than this on some systems, but no general scheme seems
1505
  to be significantly better.
1506
 
1507
  Management entails a simpler variant of the consolidation scheme
1508
  used for chunks to reduce fragmentation -- new adjacent memory is
1509
  normally prepended or appended to an existing segment. However,
1510
  there are limitations compared to chunk consolidation that mostly
1511
  reflect the fact that segment processing is relatively infrequent
1512
  (occurring only when getting memory from system) and that we
1513
  don't expect to have huge numbers of segments:
1514
 
1515
  * Segments are not indexed, so traversal requires linear scans.  (It
1516
    would be possible to index these, but is not worth the extra
1517
    overhead and complexity for most programs on most platforms.)
1518
  * New segments are only appended to old ones when holding top-most
1519
    memory; if they cannot be prepended to others, they are held in
1520
    different segments.
1521
 
1522
  Except for the top-most segment of an mstate, each segment record
1523
  is kept at the tail of its segment. Segments are added by pushing
1524
  segment records onto the list headed by &mstate.seg for the
1525
  containing mstate.
1526
 
1527
  Segment flags control allocation/merge/deallocation policies:
1528
  * If EXTERN_BIT set, then we did not allocate this segment,
1529
    and so should not try to deallocate or merge with others.
1530
    (This currently holds only for the initial segment passed
1531
    into create_mspace_with_base.)
1532
  * If IS_MMAPPED_BIT set, the segment may be merged with
1533
    other surrounding mmapped segments and trimmed/de-allocated
1534
    using munmap.
1535
  * If neither bit is set, then the segment was obtained using
1536
    MORECORE so can be merged with surrounding MORECORE'd segments
1537
    and deallocated/trimmed using MORECORE with negative arguments.
1538
*/
1539
 
1540
struct malloc_segment {
1541
  char*        base;             /* base address */
1542
  size_t       size;             /* allocated size */
1543
  struct malloc_segment* next;   /* ptr to next segment */
1544
  flag_t       sflags;           /* mmap and extern flag */
1545
};
1546
 
1547
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1548
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1549
 
1550
typedef struct malloc_segment  msegment;
1551
typedef struct malloc_segment* msegmentptr;
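 
/*
  Illustrative sketch (not compiled): the release policy implied by the
  flag rules above.  EXTERN segments are left alone, mmapped segments
  may be unmapped, and anything else is MORECORE space to be trimmed
  rather than unmapped.
*/
#if 0
static int example_segment_can_unmap(msegmentptr s) {
  return !is_extern_segment(s) && is_mmapped_segment(s);
}
#endif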
1552
 
1553
/* ---------------------------- malloc_state ----------------------------- */
1554
 
1555
/*
1556
   A malloc_state holds all of the bookkeeping for a space.
1557
   The main fields are:
1558
 
1559
  Top
1560
    The topmost chunk of the currently active segment. Its size is
1561
    cached in topsize.  The actual size of topmost space is
1562
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1563
    fenceposts and segment records if necessary when getting more
1564
    space from the system.  The size at which to autotrim top is
1565
    cached from mparams in trim_check, except that it is disabled if
1566
    an autotrim fails.
1567
 
1568
  Designated victim (dv)
1569
    This is the preferred chunk for servicing small requests that
1570
    don't have exact fits.  It is normally the chunk split off most
1571
    recently to service another small request.  Its size is cached in
1572
    dvsize. The link fields of this chunk are not maintained since it
1573
    is not kept in a bin.
1574
 
1575
  SmallBins
1576
    An array of bin headers for free chunks.  These bins hold chunks
1577
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1578
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1579
    use in double-linked lists, each bin header acts as a malloc_chunk
1580
    pointing to the real first node, if it exists (else pointing to
1581
    itself).  This avoids special-casing for headers.  But to avoid
1582
    waste, we allocate only the fd/bk pointers of bins, and then use
1583
    repositioning tricks to treat these as the fields of a chunk.
1584
 
1585
  TreeBins
1586
    Treebins are pointers to the roots of trees holding a range of
1587
    sizes. There are 2 equally spaced treebins for each power of two
1588
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1589
    larger.
1590
 
1591
  Bin maps
1592
    There is one bit map for small bins ("smallmap") and one for
1593
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1594
    clears the bit when empty.  Bit operations are then used to avoid
1595
    bin-by-bin searching -- nearly all "search" is done without ever
1596
    looking at bins that won't be selected.  The bit maps
1597
    conservatively use 32 bits per map word, even on a 64-bit system.
1598
    For a good description of some of the bit-based techniques used
1599
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1600
    supplement at http://hackersdelight.org/). Many of these are
1601
    intended to reduce the branchiness of paths through malloc etc, as
1602
    well as to reduce the number of memory locations read or written.
1603
 
1604
  Segments
1605
    A list of segments headed by an embedded malloc_segment record
1606
    representing the initial space.
1607
 
1608
  Address check support
1609
    The least_addr field is the least address ever obtained from
1610
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1611
    than this are trapped (unless INSECURE is defined).
1612
 
1613
  Magic tag
1614
    A cross-check field that should always hold same value as mparams.magic.
1615
 
1616
  Flags
1617
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1618
 
1619
  Statistics
1620
    Each space keeps track of current and maximum system memory
1621
    obtained via MORECORE or MMAP.
1622
 
1623
  Locking
1624
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1625
    around every public call using this mspace.
1626
*/
1627
 
1628
/* Bin types, widths and sizes */
1629
#define NSMALLBINS        (32U)
1630
#define NTREEBINS         (32U)
1631
#define SMALLBIN_SHIFT    (3U)
1632
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1633
#define TREEBIN_SHIFT     (8U)
1634
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1635
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1636
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1637
 
1638
struct malloc_state {
1639
  binmap_t   smallmap;
1640
  binmap_t   treemap;
1641
  size_t     dvsize;
1642
  size_t     topsize;
1643
  char*      least_addr;
1644
  mchunkptr  dv;
1645
  mchunkptr  top;
1646
  size_t     trim_check;
1647
  size_t     magic;
1648
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1649
  tbinptr    treebins[NTREEBINS];
1650
  size_t     footprint;
1651
  size_t     max_footprint;
1652
  flag_t     mflags;
1653
#if USE_LOCKS
1654
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1655
#endif /* USE_LOCKS */
1656
  msegment   seg;
1657
};
1658
 
1659
typedef struct malloc_state*    mstate;
1660
 
1661
/* ------------- Global malloc_state and malloc_params ------------------- */
1662
 
1663
/*
1664
  malloc_params holds global properties, including those that can be
1665
  dynamically set using mallopt. There is a single instance, mparams,
1666
  initialized in init_mparams.
1667
*/
1668
 
1669
struct malloc_params {
1670
  size_t magic;
1671
  size_t page_size;
1672
  size_t granularity;
1673
  size_t mmap_threshold;
1674
  size_t trim_threshold;
1675
  flag_t default_mflags;
1676
};
1677
 
1678
static struct malloc_params mparams;
1679
 
1680
/* The global malloc_state used for all non-"mspace" calls */
1681
static struct malloc_state _gm_;
1682
#define gm                 (&_gm_)
1683
#define is_global(M)       ((M) == &_gm_)
1684
#define is_initialized(M)  ((M)->top != 0)
1685
 
1686
/* -------------------------- system alloc setup ------------------------- */
1687
 
1688
/* Operations on mflags */
1689
 
1690
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1691
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1692
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1693
 
1694
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1695
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1696
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1697
 
1698
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1699
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1700
 
1701
#define set_lock(M,L)\
1702
 ((M)->mflags = (L)?\
1703
  ((M)->mflags | USE_LOCK_BIT) :\
1704
  ((M)->mflags & ~USE_LOCK_BIT))
1705
 
1706
/* page-align a size */
1707
#define page_align(S)\
1708
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1709
 
1710
/* granularity-align a size */
1711
#define granularity_align(S)\
1712
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1713
 
1714
#define is_page_aligned(S)\
1715
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1716
#define is_granularity_aligned(S)\
1717
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
1718
 
1719
/*  True if segment S holds address A */
1720
#define segment_holds(S, A)\
1721
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1722
 
1723
/* Return segment holding given address */
1724
static msegmentptr segment_holding(mstate m, char* addr) {
1725
  msegmentptr sp = &m->seg;
1726
  for (;;) {
1727
    if (addr >= sp->base && addr < sp->base + sp->size)
1728
      return sp;
1729
    if ((sp = sp->next) == 0)
1730
      return 0;
1731
  }
1732
}
1733
 
1734
/* Return true if segment contains a segment link */
1735
static int has_segment_link(mstate m, msegmentptr ss) {
1736
  msegmentptr sp = &m->seg;
1737
  for (;;) {
1738
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1739
      return 1;
1740
    if ((sp = sp->next) == 0)
1741
      return 0;
1742
  }
1743
}
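 
/*
  Illustrative sketch (not compiled): segment_holding() is the linear
  scan mentioned above; a null result means the address does not lie in
  any segment currently owned by this mstate.
*/
#if 0
static int example_owns_address(mstate m, void* mem) {
  return segment_holding(m, (char*)mem) != 0;
}
#endif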
1744
 
1745
#ifndef MORECORE_CANNOT_TRIM
1746
#define should_trim(M,s)  ((s) > (M)->trim_check)
1747
#else  /* MORECORE_CANNOT_TRIM */
1748
#define should_trim(M,s)  (0)
1749
#endif /* MORECORE_CANNOT_TRIM */
1750
 
1751
/*
1752
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1753
  that may be needed to place segment records and fenceposts when new
1754
  noncontiguous segments are added.
1755
*/
1756
#define TOP_FOOT_SIZE\
1757
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1758
 
1759
 
1760
/* -------------------------------  Hooks -------------------------------- */
1761
 
1762
/*
1763
  PREACTION should be defined to return 0 on success, and nonzero on
1764
  failure. If you are not using locking, you can redefine these to do
1765
  anything you like.
1766
*/
1767
 
1768
#if USE_LOCKS
1769
 
1770
/* Ensure locks are initialized */
1771
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1772
 
1773
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1774
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1775
#else /* USE_LOCKS */
1776
 
1777
#ifndef PREACTION
1778
#define PREACTION(M) (0)
1779
#endif  /* PREACTION */
1780
 
1781
#ifndef POSTACTION
1782
#define POSTACTION(M)
1783
#endif  /* POSTACTION */
1784
 
1785
#endif /* USE_LOCKS */
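 
/*
  Illustrative sketch (not compiled): the lock discipline used by the
  public entry points later in this file.  The name dl_example_entry is
  made up here; PREACTION returns 0 on success, so the body runs only
  when the (optional) lock was acquired.
*/
#if 0
static void* dl_example_entry(mstate m, size_t bytes) {
  void* mem = 0;
  (void)bytes;
  if (!PREACTION(m)) {
    /* ... allocation work protected by the optional lock ... */
    POSTACTION(m);
  }
  return mem;
}
#endif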
1786
 
1787
/*
1788
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1789
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1790
  reallocs. The argument p is an address that might have triggered the
1791
  fault. It is ignored by the two predefined actions, but might be
1792
  useful in custom actions that try to help diagnose errors.
1793
*/
1794
 
1795
#if PROCEED_ON_ERROR
1796
 
1797
/* A count of the number of corruption errors causing resets */
1798
int malloc_corruption_error_count;
1799
 
1800
/* default corruption action */
1801
static void reset_on_error(mstate m);
1802
 
1803
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1804
#define USAGE_ERROR_ACTION(m, p)
1805
 
1806
#else /* PROCEED_ON_ERROR */
1807
 
1808
#ifndef CORRUPTION_ERROR_ACTION
1809
#define CORRUPTION_ERROR_ACTION(m) ABORT
1810
#endif /* CORRUPTION_ERROR_ACTION */
1811
 
1812
#ifndef USAGE_ERROR_ACTION
1813
#define USAGE_ERROR_ACTION(m,p) ABORT
1814
#endif /* USAGE_ERROR_ACTION */
1815
 
1816
#endif /* PROCEED_ON_ERROR */
1817
 
1818
/* -------------------------- Debugging setup ---------------------------- */
1819
 
1820
#if ! DEBUG
1821
 
1822
#define check_free_chunk(M,P)
1823
#define check_inuse_chunk(M,P)
1824
#define check_malloced_chunk(M,P,N)
1825
#define check_mmapped_chunk(M,P)
1826
#define check_malloc_state(M)
1827
#define check_top_chunk(M,P)
1828
 
1829
#else /* DEBUG */
1830
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1831
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1832
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1833
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1834
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1835
#define check_malloc_state(M)       do_check_malloc_state(M)
1836
 
1837
static void   do_check_any_chunk(mstate m, mchunkptr p);
1838
static void   do_check_top_chunk(mstate m, mchunkptr p);
1839
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1840
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1841
static void   do_check_free_chunk(mstate m, mchunkptr p);
1842
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1843
static void   do_check_tree(mstate m, tchunkptr t);
1844
static void   do_check_treebin(mstate m, bindex_t i);
1845
static void   do_check_smallbin(mstate m, bindex_t i);
1846
static void   do_check_malloc_state(mstate m);
1847
static int    bin_find(mstate m, mchunkptr x);
1848
static size_t traverse_and_check(mstate m);
1849
#endif /* DEBUG */
1850
 
1851
/* ---------------------------- Indexing Bins ---------------------------- */
1852
 
1853
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1854
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1855
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1856
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1857
 
1858
/* addressing by index. See above about smallbin repositioning */
1859
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1860
#define treebin_at(M,i)     (&((M)->treebins[i]))
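 
/*
  Illustrative sketch (not compiled): small bins are spaced
  SMALLBIN_WIDTH (8) bytes apart, so the index is just a shift and
  small_index2size reverses it exactly.
*/
#if 0
static void example_small_index(void) {
  size_t   s = 40;                 /* a small chunk size */
  bindex_t i = small_index(s);     /* 40 >> 3 == 5       */
  assert(is_small(s) && small_index2size(i) == s);
}
#endif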
1861
 
1862
/* assign tree index for size S to variable I */
1863
#if defined(__GNUC__) && defined(i386)
1864
#define compute_tree_index(S, I)\
1865
{\
1866
  size_t X = S >> TREEBIN_SHIFT;\
1867
  if (X == 0)\
1868
    I = 0;\
1869
  else if (X > 0xFFFF)\
1870
    I = NTREEBINS-1;\
1871
  else {\
1872
    unsigned int K;\
1873
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1874
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1875
  }\
1876
}
1877
#else /* GNUC */
1878
#define compute_tree_index(S, I)\
1879
{\
1880
  size_t X = S >> TREEBIN_SHIFT;\
1881
  if (X == 0)\
1882
    I = 0;\
1883
  else if (X > 0xFFFF)\
1884
    I = NTREEBINS-1;\
1885
  else {\
1886
    unsigned int Y = (unsigned int)X;\
1887
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1888
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1889
    N += K;\
1890
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1891
    K = 14 - N + ((Y <<= K) >> 15);\
1892
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1893
  }\
1894
}
1895
#endif /* GNUC */
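 
/*
  Illustrative sketch (not compiled): a slow but readable equivalent of
  compute_tree_index.  The index packs the position of the highest set
  bit of (S >> TREEBIN_SHIFT) together with the next lower bit of S, so
  each power-of-two range is split into two bins (e.g. S == 768 maps to
  index 3, whose minsize_for_tree_index is 768).
*/
#if 0
static bindex_t example_tree_index(size_t S) {
  size_t X = S >> TREEBIN_SHIFT;
  unsigned int K = 0;
  if (X == 0)
    return 0;
  if (X > 0xFFFF)
    return NTREEBINS - 1;
  while ((X >> (K + 1)) != 0)      /* position of highest set bit of X */
    ++K;
  return (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT - 1))) & 1));
}
#endif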
1896
 
1897
/* Bit representing maximum resolved size in a treebin at i */
1898
#define bit_for_tree_index(i) \
1899
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1900
 
1901
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1902
#define leftshift_for_tree_index(i) \
1903
   ((i == NTREEBINS-1)? 0 : \
1904
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1905
 
1906
/* The size of the smallest chunk held in bin with index i */
1907
#define minsize_for_tree_index(i) \
1908
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1909
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1910
 
1911
 
1912
/* ------------------------ Operations on bin maps ----------------------- */
1913
 
1914
/* bit corresponding to given index */
1915
#define idx2bit(i)              ((binmap_t)(1) << (i))
1916
 
1917
/* Mark/Clear bits with given index */
1918
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1919
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1920
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1921
 
1922
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1923
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1924
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1925
 
1926
/* index corresponding to given bit */
1927
 
1928
#if defined(__GNUC__) && defined(i386)
1929
#define compute_bit2idx(X, I)\
1930
{\
1931
  unsigned int J;\
1932
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1933
  I = (bindex_t)J;\
1934
}
1935
 
1936
#else /* GNUC */
1937
#if  USE_BUILTIN_FFS
1938
#define compute_bit2idx(X, I) I = ffs(X)-1
1939
 
1940
#else /* USE_BUILTIN_FFS */
1941
#define compute_bit2idx(X, I)\
1942
{\
1943
  unsigned int Y = X - 1;\
1944
  unsigned int K = Y >> (16-4) & 16;\
1945
  unsigned int N = K;        Y >>= K;\
1946
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1947
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1948
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1949
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1950
  I = (bindex_t)(N + Y);\
1951
}
1952
#endif /* USE_BUILTIN_FFS */
1953
#endif /* GNUC */
1954
 
1955
/* isolate the least set bit of a bitmap */
1956
#define least_bit(x)         ((x) & -(x))
1957
 
1958
/* mask with all bits to left of least bit of x on */
1959
#define left_bits(x)         ((x<<1) | -(x<<1))
1960
 
1961
/* mask with all bits to left of or equal to least bit of x on */
1962
#define same_or_left_bits(x) ((x) | -(x))
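 
/*
  Illustrative sketch (not compiled): one way the helpers above combine
  to locate the first non-empty small bin at or above index i in a few
  bit operations instead of a bin-by-bin scan; the allocation paths
  later in this file use these same helpers.
*/
#if 0
static bindex_t example_next_nonempty_smallbin(mstate m, bindex_t i) {
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(i));
  bindex_t j = NSMALLBINS;         /* "none" if no bin qualifies */
  if (candidates != 0) {
    binmap_t b = least_bit(candidates);  /* bit of the lowest such bin */
    compute_bit2idx(b, j);
  }
  return j;
}
#endif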
1963
 
1964
 
1965
/* ----------------------- Runtime Check Support ------------------------- */
1966
 
1967
/*
1968
  For security, the main invariant is that malloc/free/etc never
1969
  writes to a static address other than malloc_state, unless static
1970
  malloc_state itself has been corrupted, which cannot occur via
1971
  malloc (because of these checks). In essence this means that we
1972
  believe all pointers, sizes, maps etc held in malloc_state, but
1973
  check all of those linked or offsetted from other embedded data
1974
  structures.  These checks are interspersed with main code in a way
1975
  that tends to minimize their run-time cost.
1976
 
1977
  When FOOTERS is defined, in addition to range checking, we also
1978
  verify footer fields of inuse chunks, which can be used to guarantee
1979
  that the mstate controlling malloc/free is intact.  This is a
1980
  streamlined version of the approach described by William Robertson
1981
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
1982
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
1983
  of an inuse chunk holds the xor of its mstate and a random seed,
1984
  that is checked upon calls to free() and realloc().  This is
1985
  (probabilistically) unguessable from outside the program, but can be
1986
  computed by any code successfully malloc'ing any chunk, so does not
1987
  itself provide protection against code that has already broken
1988
  security through some other means.  Unlike Robertson et al, we
1989
  always dynamically check addresses of all offset chunks (previous,
1990
  next, etc). This turns out to be cheaper than relying on hashes.
1991
*/
1992
 
1993
#if !INSECURE
1994
/* Check if address a is at least as high as any from MORECORE or MMAP */
1995
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
1996
/* Check if address of next chunk n is higher than base chunk p */
1997
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
1998
/* Check if p has its cinuse bit on */
1999
#define ok_cinuse(p)     cinuse(p)
2000
/* Check if p has its pinuse bit on */
2001
#define ok_pinuse(p)     pinuse(p)
2002
 
2003
#else /* !INSECURE */
2004
#define ok_address(M, a) (1)
2005
#define ok_next(b, n)    (1)
2006
#define ok_cinuse(p)     (1)
2007
#define ok_pinuse(p)     (1)
2008
#endif /* !INSECURE */
2009
 
2010
#if (FOOTERS && !INSECURE)
2011
/* Check if (alleged) mstate m has expected magic field */
2012
#define ok_magic(M)      ((M)->magic == mparams.magic)
2013
#else  /* (FOOTERS && !INSECURE) */
2014
#define ok_magic(M)      (1)
2015
#endif /* (FOOTERS && !INSECURE) */
2016
 
2017
 
2018
/* In gcc, use __builtin_expect to minimize impact of checks */
2019
#if !INSECURE
2020
#if defined(__GNUC__) && __GNUC__ >= 3
2021
#define RTCHECK(e)  __builtin_expect(e, 1)
2022
#else /* GNUC */
2023
#define RTCHECK(e)  (e)
2024
#endif /* GNUC */
2025
#else /* !INSECURE */
2026
#define RTCHECK(e)  (1)
2027
#endif /* !INSECURE */
2028
 
2029
/* macros to set up inuse chunks with or without footers */
2030
 
2031
#if !FOOTERS
2032
 
2033
#define mark_inuse_foot(M,p,s)
2034
 
2035
/* Set cinuse bit and pinuse bit of next chunk */
2036
#define set_inuse(M,p,s)\
2037
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2038
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2039
 
2040
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2041
#define set_inuse_and_pinuse(M,p,s)\
2042
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2043
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2044
 
2045
/* Set size, cinuse and pinuse bit of this chunk */
2046
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2047
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2048
 
2049
#else /* FOOTERS */
2050
 
2051
/* Set foot of inuse chunk to be xor of mstate and seed */
2052
#define mark_inuse_foot(M,p,s)\
2053
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2054
 
2055
#define get_mstate_for(p)\
2056
  ((mstate)(((mchunkptr)((char*)(p) +\
2057
    (chunksize(p))))->prev_foot ^ mparams.magic))
2058
 
2059
#define set_inuse(M,p,s)\
2060
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2061
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2062
  mark_inuse_foot(M,p,s))
2063
 
2064
#define set_inuse_and_pinuse(M,p,s)\
2065
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2066
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2067
 mark_inuse_foot(M,p,s))
2068
 
2069
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2070
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2071
  mark_inuse_foot(M, p, s))
2072
 
2073
#endif /* !FOOTERS */
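 
/*
  Illustrative sketch (not compiled; FOOTERS builds only): the footer of
  an in-use chunk stores (mstate ^ mparams.magic), so the owning mstate
  can be recovered and verified from the chunk alone.  Assumes p->head
  already records psize, since get_mstate_for reads chunksize(p).
*/
#if 0
static int example_footer_roundtrip(mstate m, mchunkptr p, size_t psize) {
  mark_inuse_foot(m, p, psize);    /* stamp footer just past the chunk   */
  return get_mstate_for(p) == m;   /* 1 unless the footer was clobbered  */
}
#endif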
2074
 
2075
/* ---------------------------- setting mparams -------------------------- */
2076
 
2077
/* Initialize mparams */
2078
static int init_mparams(void) {
2079
  if (mparams.page_size == 0) {
2080
    size_t s;
2081
 
2082
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2083
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2084
#if MORECORE_CONTIGUOUS
2085
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2086
#else  /* MORECORE_CONTIGUOUS */
2087
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2088
#endif /* MORECORE_CONTIGUOUS */
2089
 
2090
#if (FOOTERS && !INSECURE)
2091
    {
2092
#if USE_DEV_RANDOM
2093
      int fd;
2094
      unsigned char buf[sizeof(size_t)];
2095
      /* Try to use /dev/urandom, else fall back on using time */
2096
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2097
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2098
        s = *((size_t *) buf);
2099
        close(fd);
2100
      }
2101
      else
2102
#endif /* USE_DEV_RANDOM */
2103
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
2104
 
2105
      s |= (size_t)8U;    /* ensure nonzero */
2106
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2107
 
2108
    }
2109
#else /* (FOOTERS && !INSECURE) */
2110
    s = (size_t)0x58585858U;
2111
#endif /* (FOOTERS && !INSECURE) */
2112
    ACQUIRE_MAGIC_INIT_LOCK();
2113
    if (mparams.magic == 0) {
2114
      mparams.magic = s;
2115
      /* Set up lock for main malloc area */
2116
      INITIAL_LOCK(&gm->mutex);
2117
      gm->mflags = mparams.default_mflags;
2118
    }
2119
    RELEASE_MAGIC_INIT_LOCK();
2120
 
2121
#ifndef WIN32
2122
    mparams.page_size = malloc_getpagesize;
2123
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2124
                           DEFAULT_GRANULARITY : mparams.page_size);
2125
#else /* WIN32 */
2126
    {
2127
      mparams.page_size = 4096;
2128
      mparams.granularity = 16384;
2129
    }
2130
#endif /* WIN32 */
2131
 
2132
    /* Sanity-check configuration:
2133
       size_t must be unsigned and as wide as pointer type.
2134
       ints must be at least 4 bytes.
2135
       alignment must be at least 8.
2136
       Alignment, min chunk size, and page size must all be powers of 2.
2137
    */
2138
    if ((sizeof(size_t) != sizeof(char*)) ||
2139
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2140
        (sizeof(int) < 4)  ||
2141
        (MALLOC_ALIGNMENT < (size_t)8U) ||
2142
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2143
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2144
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2145
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2146
      ABORT;
2147
  }
2148
  return 0;
2149
}
2150
 
2151
/* support for mallopt */
2152
static int change_mparam(int param_number, int value) {
2153
  size_t val = (size_t)value;
2154
  init_mparams();
2155
  switch(param_number) {
2156
  case M_TRIM_THRESHOLD:
2157
    mparams.trim_threshold = val;
2158
    return 1;
2159
  case M_GRANULARITY:
2160
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2161
      mparams.granularity = val;
2162
      return 1;
2163
    }
2164
    else
2165
      return 0;
2166
  case M_MMAP_THRESHOLD:
2167
    mparams.mmap_threshold = val;
2168
    return 1;
2169
  default:
2170
    return 0;
2171
  }
2172
}
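 
/*
  Illustrative sketch (not compiled): tuning through change_mparam (the
  backend of mallopt).  A granularity value is accepted only when it is
  a power of two no smaller than the page size (typically 4K here).
*/
#if 0
static void example_tuning(void) {
  change_mparam(M_GRANULARITY,     64 * 1024);  /* accepted: power of two */
  change_mparam(M_MMAP_THRESHOLD, 256 * 1024);  /* always accepted        */
  change_mparam(M_GRANULARITY,     3000);       /* rejected: returns 0    */
}
#endif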
2173
 
2174
#if DEBUG
2175
#endif /* DEBUG */
2176
 
2177
/* ----------------------------- statistics ------------------------------ */
2178
 
2179
#if !NO_MALLINFO
2180
#endif /* !NO_MALLINFO */
2181
 
2182
/* ----------------------- Operations on smallbins ----------------------- */
2183
 
2184
/*
2185
  Various forms of linking and unlinking are defined as macros.  Even
2186
  the ones for trees, which are very long but have very short typical
2187
  paths.  This is ugly but reduces reliance on inlining support of
2188
  compilers.
2189
*/
2190
 
2191
/* Link a free chunk into a smallbin  */
2192
#define insert_small_chunk(M, P, S) {\
2193
  bindex_t I  = small_index(S);\
2194
  mchunkptr B = smallbin_at(M, I);\
2195
  mchunkptr F = B;\
2196
  assert(S >= MIN_CHUNK_SIZE);\
2197
  if (!smallmap_is_marked(M, I))\
2198
    mark_smallmap(M, I);\
2199
  else if (RTCHECK(ok_address(M, B->fd)))\
2200
    F = B->fd;\
2201
  else {\
2202
    CORRUPTION_ERROR_ACTION(M);\
2203
  }\
2204
  B->fd = P;\
2205
  F->bk = P;\
2206
  P->fd = F;\
2207
  P->bk = B;\
2208
}
2209
 
2210
/* Unlink a chunk from a smallbin  */
2211
#define unlink_small_chunk(M, P, S) {\
2212
  mchunkptr F = P->fd;\
2213
  mchunkptr B = P->bk;\
2214
  bindex_t I = small_index(S);\
2215
  assert(P != B);\
2216
  assert(P != F);\
2217
  assert(chunksize(P) == small_index2size(I));\
2218
  if (F == B)\
2219
    clear_smallmap(M, I);\
2220
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2221
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2222
    F->bk = B;\
2223
    B->fd = F;\
2224
  }\
2225
  else {\
2226
    CORRUPTION_ERROR_ACTION(M);\
2227
  }\
2228
}
2229
 
2230
/* Unlink the first chunk from a smallbin */
2231
#define unlink_first_small_chunk(M, B, P, I) {\
2232
  mchunkptr F = P->fd;\
2233
  assert(P != B);\
2234
  assert(P != F);\
2235
  assert(chunksize(P) == small_index2size(I));\
2236
  if (B == F)\
2237
    clear_smallmap(M, I);\
2238
  else if (RTCHECK(ok_address(M, F))) {\
2239
    B->fd = F;\
2240
    F->bk = B;\
2241
  }\
2242
  else {\
2243
    CORRUPTION_ERROR_ACTION(M);\
2244
  }\
2245
}
2246
 
2247
/* Replace dv node, binning the old one */
2248
/* Used only when dvsize known to be small */
2249
#define replace_dv(M, P, S) {\
2250
  size_t DVS = M->dvsize;\
2251
  if (DVS != 0) {\
2252
    mchunkptr DV = M->dv;\
2253
    assert(is_small(DVS));\
2254
    insert_small_chunk(M, DV, DVS);\
2255
  }\
2256
  M->dvsize = S;\
2257
  M->dv = P;\
2258
}
2259
 
2260
/* ------------------------- Operations on trees ------------------------- */
2261
 
2262
/* Insert chunk into tree */
2263
#define insert_large_chunk(M, X, S) {\
2264
  tbinptr* H;\
2265
  bindex_t I;\
2266
  compute_tree_index(S, I);\
2267
  H = treebin_at(M, I);\
2268
  X->index = I;\
2269
  X->child[0] = X->child[1] = 0;\
2270
  if (!treemap_is_marked(M, I)) {\
2271
    mark_treemap(M, I);\
2272
    *H = X;\
2273
    X->parent = (tchunkptr)H;\
2274
    X->fd = X->bk = X;\
2275
  }\
2276
  else {\
2277
    tchunkptr T = *H;\
2278
    size_t K = S << leftshift_for_tree_index(I);\
2279
    for (;;) {\
2280
      if (chunksize(T) != S) {\
2281
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2282
        K <<= 1;\
2283
        if (*C != 0)\
2284
          T = *C;\
2285
        else if (RTCHECK(ok_address(M, C))) {\
2286
          *C = X;\
2287
          X->parent = T;\
2288
          X->fd = X->bk = X;\
2289
          break;\
2290
        }\
2291
        else {\
2292
          CORRUPTION_ERROR_ACTION(M);\
2293
          break;\
2294
        }\
2295
      }\
2296
      else {\
2297
        tchunkptr F = T->fd;\
2298
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2299
          T->fd = F->bk = X;\
2300
          X->fd = F;\
2301
          X->bk = T;\
2302
          X->parent = 0;\
2303
          break;\
2304
        }\
2305
        else {\
2306
          CORRUPTION_ERROR_ACTION(M);\
2307
          break;\
2308
        }\
2309
      }\
2310
    }\
2311
  }\
2312
}
2313
 
2314
/*
2315
  Unlink steps:
2316
 
2317
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2318
     and choose its bk node as its replacement.
2319
  2. If x was the last node of its size, but not a leaf node, it must
2320
     be replaced with a leaf node (not merely one with an open left or
2321
     right), to make sure that lefts and rights of descendants
2322
     correspond properly to bit masks.  We use the rightmost descendent
2323
     of x.  We could use any other leaf, but this is easy to locate and
2324
     tends to counteract removal of leftmosts elsewhere, and so keeps
2325
     paths shorter than minimally guaranteed.  This doesn't loop much
2326
     because on average a node in a tree is near the bottom.
2327
  3. If x is the base of a chain (i.e., has parent links) relink
2328
     x's parent and children to x's replacement (or null if none).
2329
*/
2330
 
2331
#define unlink_large_chunk(M, X) {\
2332
  tchunkptr XP = X->parent;\
2333
  tchunkptr R;\
2334
  if (X->bk != X) {\
2335
    tchunkptr F = X->fd;\
2336
    R = X->bk;\
2337
    if (RTCHECK(ok_address(M, F))) {\
2338
      F->bk = R;\
2339
      R->fd = F;\
2340
    }\
2341
    else {\
2342
      CORRUPTION_ERROR_ACTION(M);\
2343
    }\
2344
  }\
2345
  else {\
2346
    tchunkptr* RP;\
2347
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2348
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2349
      tchunkptr* CP;\
2350
      while ((*(CP = &(R->child[1])) != 0) ||\
2351
             (*(CP = &(R->child[0])) != 0)) {\
2352
        R = *(RP = CP);\
2353
      }\
2354
      if (RTCHECK(ok_address(M, RP)))\
2355
        *RP = 0;\
2356
      else {\
2357
        CORRUPTION_ERROR_ACTION(M);\
2358
      }\
2359
    }\
2360
  }\
2361
  if (XP != 0) {\
2362
    tbinptr* H = treebin_at(M, X->index);\
2363
    if (X == *H) {\
2364
      if ((*H = R) == 0) \
2365
        clear_treemap(M, X->index);\
2366
    }\
2367
    else if (RTCHECK(ok_address(M, XP))) {\
2368
      if (XP->child[0] == X) \
2369
        XP->child[0] = R;\
2370
      else \
2371
        XP->child[1] = R;\
2372
    }\
2373
    else\
2374
      CORRUPTION_ERROR_ACTION(M);\
2375
    if (R != 0) {\
2376
      if (RTCHECK(ok_address(M, R))) {\
2377
        tchunkptr C0, C1;\
2378
        R->parent = XP;\
2379
        if ((C0 = X->child[0]) != 0) {\
2380
          if (RTCHECK(ok_address(M, C0))) {\
2381
            R->child[0] = C0;\
2382
            C0->parent = R;\
2383
          }\
2384
          else\
2385
            CORRUPTION_ERROR_ACTION(M);\
2386
        }\
2387
        if ((C1 = X->child[1]) != 0) {\
2388
          if (RTCHECK(ok_address(M, C1))) {\
2389
            R->child[1] = C1;\
2390
            C1->parent = R;\
2391
          }\
2392
          else\
2393
            CORRUPTION_ERROR_ACTION(M);\
2394
        }\
2395
      }\
2396
      else\
2397
        CORRUPTION_ERROR_ACTION(M);\
2398
    }\
2399
  }\
2400
}
2401
 
2402
/* Relays to large vs small bin operations */
2403
 
2404
#define insert_chunk(M, P, S)\
2405
  if (is_small(S)) insert_small_chunk(M, P, S)\
2406
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2407
 
2408
#define unlink_chunk(M, P, S)\
2409
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2410
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2411
 
2412
 
2413
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2414
 
2415
#if ONLY_MSPACES
2416
#define internal_malloc(m, b) mspace_malloc(m, b)
2417
#define internal_free(m, mem) mspace_free(m,mem);
2418
#else /* ONLY_MSPACES */
2419
#if MSPACES
2420
#define internal_malloc(m, b)\
2421
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2422
#define internal_free(m, mem)\
2423
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2424
#else /* MSPACES */
2425
#define internal_malloc(m, b) dlmalloc(b)
2426
#define internal_free(m, mem) dlfree(mem)
2427
#endif /* MSPACES */
2428
#endif /* ONLY_MSPACES */
2429
 
2430
/* -----------------------  Direct-mmapping chunks ----------------------- */
2431
 
2432
/*
2433
  Directly mmapped chunks are set up with an offset to the start of
2434
  the mmapped region stored in the prev_foot field of the chunk. This
2435
  allows reconstruction of the required argument to MUNMAP when freed,
2436
  and also allows adjustment of the returned chunk to meet alignment
2437
  requirements (especially in memalign).  There is also enough space
2438
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2439
  the PINUSE bit so frees can be checked.
2440
*/
2441
 
2442
/* Malloc using mmap */
2443
static void* mmap_alloc(mstate m, size_t nb) {
2444
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2445
  if (mmsize > nb) {     /* Check for wrap around 0 */
2446
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2447
    if (mm != CMFAIL) {
2448
      size_t offset = align_offset(chunk2mem(mm));
2449
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2450
      mchunkptr p = (mchunkptr)(mm + offset);
2451
      p->prev_foot = offset | IS_MMAPPED_BIT;
2452
      (p)->head = (psize|CINUSE_BIT);
2453
      mark_inuse_foot(m, p, psize);
2454
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2455
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2456
 
2457
      if (mm < m->least_addr)
2458
        m->least_addr = mm;
2459
      if ((m->footprint += mmsize) > m->max_footprint)
2460
        m->max_footprint = m->footprint;
2461
      assert(is_aligned(chunk2mem(p)));
2462
      check_mmapped_chunk(m, p);
2463
      return chunk2mem(p);
2464
    }
2465
  }
2466
  return 0;
2467
}
2468
 
2469
/* Realloc using mmap */
2470
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2471
  size_t oldsize = chunksize(oldp);
2472
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2473
    return 0;
2474
  /* Keep old chunk if big enough but not too big */
2475
  if (oldsize >= nb + SIZE_T_SIZE &&
2476
      (oldsize - nb) <= (mparams.granularity << 1))
2477
    return oldp;
2478
  else {
2479
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2480
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2481
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2482
                                         CHUNK_ALIGN_MASK);
2483
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2484
                                  oldmmsize, newmmsize, 1);
2485
    if (cp != CMFAIL) {
2486
      mchunkptr newp = (mchunkptr)(cp + offset);
2487
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2488
      newp->head = (psize|CINUSE_BIT);
2489
      mark_inuse_foot(m, newp, psize);
2490
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2491
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2492
 
2493
      if (cp < m->least_addr)
2494
        m->least_addr = cp;
2495
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2496
        m->max_footprint = m->footprint;
2497
      check_mmapped_chunk(m, newp);
2498
      return newp;
2499
    }
2500
  }
2501
  return 0;
2502
}
2503
 
2504
/* -------------------------- mspace management -------------------------- */
2505
 
2506
/* Initialize top chunk and its size */
2507
static void init_top(mstate m, mchunkptr p, size_t psize) {
2508
  /* Ensure alignment */
2509
  size_t offset = align_offset(chunk2mem(p));
2510
  p = (mchunkptr)((char*)p + offset);
2511
  psize -= offset;
2512
 
2513
  m->top = p;
2514
  m->topsize = psize;
2515
  p->head = psize | PINUSE_BIT;
2516
  /* set size of fake trailing chunk holding overhead space only once */
2517
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2518
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2519
}
2520
 
2521
/* Initialize bins for a new mstate that is otherwise zeroed out */
2522
static void init_bins(mstate m) {
2523
  /* Establish circular links for smallbins */
2524
  bindex_t i;
2525
  for (i = 0; i < NSMALLBINS; ++i) {
2526
    sbinptr bin = smallbin_at(m,i);
2527
    bin->fd = bin->bk = bin;
2528
  }
2529
}
2530
 
2531
#if PROCEED_ON_ERROR
2532
 
2533
/* default corruption action */
2534
static void reset_on_error(mstate m) {
2535
  int i;
2536
  ++malloc_corruption_error_count;
2537
  /* Reinitialize fields to forget about all memory */
2538
  m->smallbins = m->treebins = 0;
2539
  m->dvsize = m->topsize = 0;
2540
  m->seg.base = 0;
2541
  m->seg.size = 0;
2542
  m->seg.next = 0;
2543
  m->top = m->dv = 0;
2544
  for (i = 0; i < NTREEBINS; ++i)
2545
    *treebin_at(m, i) = 0;
2546
  init_bins(m);
2547
}
2548
#endif /* PROCEED_ON_ERROR */
2549
 
2550
/* Allocate chunk and prepend remainder with chunk in successor base. */
2551
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2552
                           size_t nb) {
2553
  mchunkptr p = align_as_chunk(newbase);
2554
  mchunkptr oldfirst = align_as_chunk(oldbase);
2555
  size_t psize = (char*)oldfirst - (char*)p;
2556
  mchunkptr q = chunk_plus_offset(p, nb);
2557
  size_t qsize = psize - nb;
2558
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2559
 
2560
  assert((char*)oldfirst > (char*)q);
2561
  assert(pinuse(oldfirst));
2562
  assert(qsize >= MIN_CHUNK_SIZE);
2563
 
2564
  /* consolidate remainder with first chunk of old base */
2565
  if (oldfirst == m->top) {
2566
    size_t tsize = m->topsize += qsize;
2567
    m->top = q;
2568
    q->head = tsize | PINUSE_BIT;
2569
    check_top_chunk(m, q);
2570
  }
2571
  else if (oldfirst == m->dv) {
2572
    size_t dsize = m->dvsize += qsize;
2573
    m->dv = q;
2574
    set_size_and_pinuse_of_free_chunk(q, dsize);
2575
  }
2576
  else {
2577
    if (!cinuse(oldfirst)) {
2578
      size_t nsize = chunksize(oldfirst);
2579
      unlink_chunk(m, oldfirst, nsize);
2580
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2581
      qsize += nsize;
2582
    }
2583
    set_free_with_pinuse(q, qsize, oldfirst);
2584
    insert_chunk(m, q, qsize);
2585
    check_free_chunk(m, q);
2586
  }
2587
 
2588
  check_malloced_chunk(m, chunk2mem(p), nb);
2589
  return chunk2mem(p);
2590
}
2591
 
2592
 
2593
/* Add a segment to hold a new noncontiguous region */
2594
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2595
  /* Determine locations and sizes of segment, fenceposts, old top */
2596
  char* old_top = (char*)m->top;
2597
  msegmentptr oldsp = segment_holding(m, old_top);
2598
  char* old_end = oldsp->base + oldsp->size;
2599
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2600
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2601
  size_t offset = align_offset(chunk2mem(rawsp));
2602
  char* asp = rawsp + offset;
2603
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2604
  mchunkptr sp = (mchunkptr)csp;
2605
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2606
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2607
  mchunkptr p = tnext;
2608
  int nfences = 0;
2609
 
2610
  /* reset top to new space */
2611
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2612
 
2613
  /* Set up segment record */
2614
  assert(is_aligned(ss));
2615
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2616
  *ss = m->seg; /* Push current record */
2617
  m->seg.base = tbase;
2618
  m->seg.size = tsize;
2619
  m->seg.sflags = mmapped;
2620
  m->seg.next = ss;
2621
 
2622
  /* Insert trailing fenceposts */
2623
  for (;;) {
2624
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2625
    p->head = FENCEPOST_HEAD;
2626
    ++nfences;
2627
    if ((char*)(&(nextp->head)) < old_end)
2628
      p = nextp;
2629
    else
2630
      break;
2631
  }
2632
  assert(nfences >= 2);
2633
 
2634
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2635
  if (csp != old_top) {
2636
    mchunkptr q = (mchunkptr)old_top;
2637
    size_t psize = csp - old_top;
2638
    mchunkptr tn = chunk_plus_offset(q, psize);
2639
    set_free_with_pinuse(q, psize, tn);
2640
    insert_chunk(m, q, psize);
2641
  }
2642
 
2643
  check_top_chunk(m, m->top);
2644
}
2645
 
2646
/* -------------------------- System allocation -------------------------- */
2647
 
2648
/* Get memory from system using MORECORE or MMAP */
2649
static void* sys_alloc(mstate m, size_t nb) {
2650
  char* tbase = CMFAIL;
2651
  size_t tsize = 0;
2652
  flag_t mmap_flag = 0;
2653
 
2654
  init_mparams();
2655
 
2656
  /* Directly map large chunks */
2657
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2658
    void* mem = mmap_alloc(m, nb);
2659
    if (mem != 0)
2660
      return mem;
2661
  }
2662
 
2663
  /*
2664
    Try getting memory in any of three ways (in most-preferred to
2665
    least-preferred order):
2666
    1. A call to MORECORE that can normally contiguously extend memory.
2667
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2668
       main space is mmapped or a previous contiguous call failed)
2669
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2670
       Note that under the default settings, if MORECORE is unable to
2671
       fulfill a request, and HAVE_MMAP is true, then mmap is
2672
       used as a noncontiguous system allocator. This is a useful backup
2673
       strategy for systems with holes in address spaces -- in this case
2674
       sbrk cannot contiguously expand the heap, but mmap may be able to
2675
       find space.
2676
    3. A call to MORECORE that cannot usually contiguously extend memory.
2677
       (disabled if not HAVE_MORECORE)
2678
  */
2679
 
2680
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2681
    char* br = CMFAIL;
2682
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2683
    size_t asize = 0;
2684
    ACQUIRE_MORECORE_LOCK();
2685
 
2686
    if (ss == 0) {  /* First time through or recovery */
2687
      char* base = (char*)CALL_MORECORE(0);
2688
      if (base != CMFAIL) {
2689
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2690
        /* Adjust to end on a page boundary */
2691
        if (!is_page_aligned(base))
2692
          asize += (page_align((size_t)base) - (size_t)base);
2693
        /* Can't call MORECORE if size is negative when treated as signed */
2694
        if (asize < HALF_MAX_SIZE_T &&
2695
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2696
          tbase = base;
2697
          tsize = asize;
2698
        }
2699
      }
2700
    }
2701
    else {
2702
      /* Subtract out existing available top space from MORECORE request. */
2703
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2704
      /* Use mem here only if it did contiguously extend old space */
2705
      if (asize < HALF_MAX_SIZE_T &&
2706
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2707
        tbase = br;
2708
        tsize = asize;
2709
      }
2710
    }
2711
 
2712
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2713
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2714
        if (asize < HALF_MAX_SIZE_T &&
2715
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2716
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2717
          if (esize < HALF_MAX_SIZE_T) {
2718
            char* end = (char*)CALL_MORECORE(esize);
2719
            if (end != CMFAIL)
2720
              asize += esize;
2721
            else {            /* Can't use; try to release */
2722
              CALL_MORECORE(-asize);
2723
              br = CMFAIL;
2724
            }
2725
          }
2726
        }
2727
      }
2728
      if (br != CMFAIL) {    /* Use the space we did get */
2729
        tbase = br;
2730
        tsize = asize;
2731
      }
2732
      else
2733
        disable_contiguous(m); /* Don't try contiguous path in the future */
2734
    }
2735
 
2736
    RELEASE_MORECORE_LOCK();
2737
  }
2738
 
2739
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2740
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2741
    size_t rsize = granularity_align(req);
2742
    if (rsize > nb) { /* Fail if wraps around zero */
2743
      char* mp = (char*)(CALL_MMAP(rsize));
2744
      if (mp != CMFAIL) {
2745
        tbase = mp;
2746
        tsize = rsize;
2747
        mmap_flag = IS_MMAPPED_BIT;
2748
      }
2749
    }
2750
  }
2751
 
2752
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2753
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2754
    if (asize < HALF_MAX_SIZE_T) {
2755
      char* br = CMFAIL;
2756
      char* end = CMFAIL;
2757
      ACQUIRE_MORECORE_LOCK();
2758
      br = (char*)(CALL_MORECORE(asize));
2759
      end = (char*)(CALL_MORECORE(0));
2760
      RELEASE_MORECORE_LOCK();
2761
      if (br != CMFAIL && end != CMFAIL && br < end) {
2762
        size_t ssize = end - br;
2763
        if (ssize > nb + TOP_FOOT_SIZE) {
2764
          tbase = br;
2765
          tsize = ssize;
2766
        }
2767
      }
2768
    }
2769
  }
2770
 
2771
  if (tbase != CMFAIL) {
2772
 
2773
    if ((m->footprint += tsize) > m->max_footprint)
2774
      m->max_footprint = m->footprint;
2775
 
2776
    if (!is_initialized(m)) { /* first-time initialization */
2777
      m->seg.base = m->least_addr = tbase;
2778
      m->seg.size = tsize;
2779
      m->seg.sflags = mmap_flag;
2780
      m->magic = mparams.magic;
2781
      init_bins(m);
2782
      if (is_global(m))
2783
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2784
      else {
2785
        /* Offset top by embedded malloc_state */
2786
        mchunkptr mn = next_chunk(mem2chunk(m));
2787
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2788
      }
2789
    }
2790
 
2791
    else {
2792
      /* Try to merge with an existing segment */
2793
      msegmentptr sp = &m->seg;
2794
      while (sp != 0 && tbase != sp->base + sp->size)
2795
        sp = sp->next;
2796
      if (sp != 0 &&
2797
          !is_extern_segment(sp) &&
2798
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2799
          segment_holds(sp, m->top)) { /* append */
2800
        sp->size += tsize;
2801
        init_top(m, m->top, m->topsize + tsize);
2802
      }
2803
      else {
2804
        if (tbase < m->least_addr)
2805
          m->least_addr = tbase;
2806
        sp = &m->seg;
2807
        while (sp != 0 && sp->base != tbase + tsize)
2808
          sp = sp->next;
2809
        if (sp != 0 &&
2810
            !is_extern_segment(sp) &&
2811
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2812
          char* oldbase = sp->base;
2813
          sp->base = tbase;
2814
          sp->size += tsize;
2815
          return prepend_alloc(m, tbase, oldbase, nb);
2816
        }
2817
        else
2818
          add_segment(m, tbase, tsize, mmap_flag);
2819
      }
2820
    }
2821
 
2822
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2823
      size_t rsize = m->topsize -= nb;
2824
      mchunkptr p = m->top;
2825
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2826
      r->head = rsize | PINUSE_BIT;
2827
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2828
      check_top_chunk(m, m->top);
2829
      check_malloced_chunk(m, chunk2mem(p), nb);
2830
      return chunk2mem(p);
2831
    }
2832
  }
2833
 
2834
  MALLOC_FAILURE_ACTION;
2835
  return 0;
2836
}
2837
 
2838
/* -----------------------  system deallocation -------------------------- */
2839
 
2840
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2841
static size_t release_unused_segments(mstate m) {
2842
  size_t released = 0;
2843
  msegmentptr pred = &m->seg;
2844
  msegmentptr sp = pred->next;
2845
  while (sp != 0) {
2846
    char* base = sp->base;
2847
    size_t size = sp->size;
2848
    msegmentptr next = sp->next;
2849
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2850
      mchunkptr p = align_as_chunk(base);
2851
      size_t psize = chunksize(p);
2852
      /* Can unmap if first chunk holds entire segment and not pinned */
2853
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2854
        tchunkptr tp = (tchunkptr)p;
2855
        assert(segment_holds(sp, (char*)sp));
2856
        if (p == m->dv) {
2857
          m->dv = 0;
2858
          m->dvsize = 0;
2859
        }
2860
        else {
2861
          unlink_large_chunk(m, tp);
2862
        }
2863
        if (CALL_MUNMAP(base, size) == 0) {
2864
          released += size;
2865
          m->footprint -= size;
2866
          /* unlink obsoleted record */
2867
          sp = pred;
2868
          sp->next = next;
2869
        }
2870
        else { /* back out if cannot unmap */
2871
          insert_large_chunk(m, tp, psize);
2872
        }
2873
      }
2874
    }
2875
    pred = sp;
2876
    sp = next;
2877
  }
2878
  return released;
2879
}
2880
 
2881
static int sys_trim(mstate m, size_t pad) {
2882
  size_t released = 0;
2883
  if (pad < MAX_REQUEST && is_initialized(m)) {
2884
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2885
 
2886
    if (m->topsize > pad) {
2887
      /* Shrink top space in granularity-size units, keeping at least one */
2888
      size_t unit = mparams.granularity;
2889
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2890
                      SIZE_T_ONE) * unit;
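      /* i.e. extra = (ceil((topsize - pad) / unit) - 1) * unit: round up
         to whole granularity units, then hold one unit back. */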
2891
      msegmentptr sp = segment_holding(m, (char*)m->top);
2892
 
2893
      if (!is_extern_segment(sp)) {
2894
        if (is_mmapped_segment(sp)) {
2895
          if (HAVE_MMAP &&
2896
              sp->size >= extra &&
2897
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2898
            size_t newsize = sp->size - extra;
2899
            /* Prefer mremap, fall back to munmap */
2900
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2901
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
2902
              released = extra;
2903
            }
2904
          }
2905
        }
2906
        else if (HAVE_MORECORE) {
2907
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2908
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2909
          ACQUIRE_MORECORE_LOCK();
2910
          {
2911
            /* Make sure end of memory is where we last set it. */
2912
            char* old_br = (char*)(CALL_MORECORE(0));
2913
            if (old_br == sp->base + sp->size) {
2914
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2915
              char* new_br = (char*)(CALL_MORECORE(0));
2916
              if (rel_br != CMFAIL && new_br < old_br)
2917
                released = old_br - new_br;
2918
            }
2919
          }
2920
          RELEASE_MORECORE_LOCK();
2921
        }
2922
      }
2923
 
2924
      if (released != 0) {
2925
        sp->size -= released;
2926
        m->footprint -= released;
2927
        init_top(m, m->top, m->topsize - released);
2928
        check_top_chunk(m, m->top);
2929
      }
2930
    }
2931
 
2932
    /* Unmap any unused mmapped segments */
2933
    if (HAVE_MMAP)
2934
      released += release_unused_segments(m);
2935
 
2936
    /* On failure, disable autotrim to avoid repeated failed future calls */
2937
    if (released == 0)
2938
      m->trim_check = MAX_SIZE_T;
2939
  }
2940
 
2941
  return (released != 0)? 1 : 0;
2942
}
2943
 
2944
/* ---------------------------- malloc support --------------------------- */
2945
 
2946
/* allocate a large request from the best fitting chunk in a treebin */
2947
static void* tmalloc_large(mstate m, size_t nb) {
2948
  tchunkptr v = 0;
2949
  size_t rsize = -nb; /* Unsigned negation */
2950
  tchunkptr t;
2951
  bindex_t idx;
2952
  compute_tree_index(nb, idx);
2953
 
2954
  if ((t = *treebin_at(m, idx)) != 0) {
2955
    /* Traverse tree for this bin looking for node with size == nb */
2956
    size_t sizebits = nb << leftshift_for_tree_index(idx);
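    /* sizebits is shifted left once per tree level; its current top bit
       selects child[0] or child[1], so the descent follows the bits of
       nb below this bin's common prefix. */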
2957
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
2958
    for (;;) {
2959
      tchunkptr rt;
2960
      size_t trem = chunksize(t) - nb;
2961
      if (trem < rsize) {
2962
        v = t;
2963
        if ((rsize = trem) == 0)
2964
          break;
2965
      }
2966
      rt = t->child[1];
2967
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2968
      if (rt != 0 && rt != t)
2969
        rst = rt;
2970
      if (t == 0) {
2971
        t = rst; /* set t to least subtree holding sizes > nb */
2972
        break;
2973
      }
2974
      sizebits <<= 1;
2975
    }
2976
  }
2977
 
2978
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
2979
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
2980
    if (leftbits != 0) {
2981
      bindex_t i;
2982
      binmap_t leastbit = least_bit(leftbits);
2983
      compute_bit2idx(leastbit, i);
2984
      t = *treebin_at(m, i);
2985
    }
2986
  }
2987
 
2988
  while (t != 0) { /* find smallest of tree or subtree */
2989
    size_t trem = chunksize(t) - nb;
2990
    if (trem < rsize) {
2991
      rsize = trem;
2992
      v = t;
2993
    }
2994
    t = leftmost_child(t);
2995
  }
2996
 
2997
  /*  If dv is a better fit, return 0 so malloc will use it */
2998
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
2999
    if (RTCHECK(ok_address(m, v))) { /* split */
3000
      mchunkptr r = chunk_plus_offset(v, nb);
3001
      assert(chunksize(v) == rsize + nb);
3002
      if (RTCHECK(ok_next(v, r))) {
3003
        unlink_large_chunk(m, v);
3004
        if (rsize < MIN_CHUNK_SIZE)
3005
          set_inuse_and_pinuse(m, v, (rsize + nb));
3006
        else {
3007
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3008
          set_size_and_pinuse_of_free_chunk(r, rsize);
3009
          insert_chunk(m, r, rsize);
3010
        }
3011
        return chunk2mem(v);
3012
      }
3013
    }
3014
    CORRUPTION_ERROR_ACTION(m);
3015
  }
3016
  return 0;
3017
}
3018
 
3019
/* allocate a small request from the best fitting chunk in a treebin */
3020
static void* tmalloc_small(mstate m, size_t nb) {
3021
  tchunkptr t, v;
3022
  size_t rsize;
3023
  bindex_t i;
3024
  binmap_t leastbit = least_bit(m->treemap);
3025
  compute_bit2idx(leastbit, i);
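  /* The least-significant set bit of treemap names the treebin holding
     the smallest chunks; every tree chunk is at least MIN_LARGE_SIZE,
     so it is necessarily big enough for this small request. */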
3026
 
3027
  v = t = *treebin_at(m, i);
3028
  rsize = chunksize(t) - nb;
3029
 
3030
  while ((t = leftmost_child(t)) != 0) {
3031
    size_t trem = chunksize(t) - nb;
3032
    if (trem < rsize) {
3033
      rsize = trem;
3034
      v = t;
3035
    }
3036
  }
3037
 
3038
  if (RTCHECK(ok_address(m, v))) {
3039
    mchunkptr r = chunk_plus_offset(v, nb);
3040
    assert(chunksize(v) == rsize + nb);
3041
    if (RTCHECK(ok_next(v, r))) {
3042
      unlink_large_chunk(m, v);
3043
      if (rsize < MIN_CHUNK_SIZE)
3044
        set_inuse_and_pinuse(m, v, (rsize + nb));
3045
      else {
3046
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3047
        set_size_and_pinuse_of_free_chunk(r, rsize);
3048
        replace_dv(m, r, rsize);
3049
      }
3050
      return chunk2mem(v);
3051
    }
3052
  }
3053
 
3054
  CORRUPTION_ERROR_ACTION(m);
3055
  return 0;
3056
}
3057
 
3058
/* --------------------------- realloc support --------------------------- */
3059
 
3060
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3061
  if (bytes >= MAX_REQUEST) {
3062
    MALLOC_FAILURE_ACTION;
3063
    return 0;
3064
  }
3065
  if (!PREACTION(m)) {
3066
    mchunkptr oldp = mem2chunk(oldmem);
3067
    size_t oldsize = chunksize(oldp);
3068
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3069
    mchunkptr newp = 0;
3070
    void* extra = 0;
3071
 
3072
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3073
 
3074
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3075
                ok_next(oldp, next) && ok_pinuse(next))) {
3076
      size_t nb = request2size(bytes);
3077
      if (is_mmapped(oldp))
3078
        newp = mmap_resize(m, oldp, nb);
3079
      else if (oldsize >= nb) { /* already big enough */
3080
        size_t rsize = oldsize - nb;
3081
        newp = oldp;
3082
        if (rsize >= MIN_CHUNK_SIZE) {
3083
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3084
          set_inuse(m, newp, nb);
3085
          set_inuse(m, remainder, rsize);
3086
          extra = chunk2mem(remainder);
3087
        }
3088
      }
3089
      else if (next == m->top && oldsize + m->topsize > nb) {
3090
        /* Expand into top */
3091
        size_t newsize = oldsize + m->topsize;
3092
        size_t newtopsize = newsize - nb;
3093
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3094
        set_inuse(m, oldp, nb);
3095
        newtop->head = newtopsize |PINUSE_BIT;
3096
        m->top = newtop;
3097
        m->topsize = newtopsize;
3098
        newp = oldp;
3099
      }
3100
    }
3101
    else {
3102
      USAGE_ERROR_ACTION(m, oldmem);
3103
      POSTACTION(m);
3104
      return 0;
3105
    }
3106
 
3107
    POSTACTION(m);
3108
 
3109
    if (newp != 0) {
3110
      if (extra != 0) {
3111
        internal_free(m, extra);
3112
      }
3113
      check_inuse_chunk(m, newp);
3114
      return chunk2mem(newp);
3115
    }
3116
    else {
3117
      void* newmem = internal_malloc(m, bytes);
3118
      if (newmem != 0) {
3119
        size_t oc = oldsize - overhead_for(oldp);
3120
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3121
        internal_free(m, oldmem);
3122
      }
3123
      return newmem;
3124
    }
3125
  }
3126
  return 0;
3127
}
3128
 
3129
/* --------------------------- memalign support -------------------------- */
3130
 
3131
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3132
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3133
    return internal_malloc(m, bytes);
3134
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3135
    alignment = MIN_CHUNK_SIZE;
3136
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3137
    size_t a = MALLOC_ALIGNMENT << 1;
3138
    while (a < alignment) a <<= 1;
3139
    alignment = a;
3140
  }
3141
 
3142
  if (bytes >= MAX_REQUEST - alignment) {
3143
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3144
      MALLOC_FAILURE_ACTION;
3145
    }
3146
  }
3147
  else {
3148
    size_t nb = request2size(bytes);
3149
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3150
    char* mem = (char*)internal_malloc(m, req);
3151
    if (mem != 0) {
3152
      void* leader = 0;
3153
      void* trailer = 0;
3154
      mchunkptr p = mem2chunk(mem);
3155
 
3156
      if (PREACTION(m)) return 0;
3157
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3158
        /*
3159
          Find an aligned spot inside chunk.  Since we need to give
3160
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3161
          the first calculation places us at a spot with less than
3162
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3163
          We've allocated enough total room so that this is always
3164
          possible.
3165
        */
3166
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3167
                                                       alignment -
3168
                                                       SIZE_T_ONE)) &
3169
                                             -alignment));
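        /* (mem + alignment - 1) & -alignment rounds mem up to the next
           alignment boundary (alignment is a power of two by now), and
           mem2chunk steps back to the chunk header for that payload. */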
3170
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3171
          br : br+alignment;
3172
        mchunkptr newp = (mchunkptr)pos;
3173
        size_t leadsize = pos - (char*)(p);
3174
        size_t newsize = chunksize(p) - leadsize;
3175
 
3176
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3177
          newp->prev_foot = p->prev_foot + leadsize;
3178
          newp->head = (newsize|CINUSE_BIT);
3179
        }
3180
        else { /* Otherwise, give back leader, use the rest */
3181
          set_inuse(m, newp, newsize);
3182
          set_inuse(m, p, leadsize);
3183
          leader = chunk2mem(p);
3184
        }
3185
        p = newp;
3186
      }
3187
 
3188
      /* Give back spare room at the end */
3189
      if (!is_mmapped(p)) {
3190
        size_t size = chunksize(p);
3191
        if (size > nb + MIN_CHUNK_SIZE) {
3192
          size_t remainder_size = size - nb;
3193
          mchunkptr remainder = chunk_plus_offset(p, nb);
3194
          set_inuse(m, p, nb);
3195
          set_inuse(m, remainder, remainder_size);
3196
          trailer = chunk2mem(remainder);
3197
        }
3198
      }
3199
 
3200
      assert (chunksize(p) >= nb);
3201
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3202
      check_inuse_chunk(m, p);
3203
      POSTACTION(m);
3204
      if (leader != 0) {
3205
        internal_free(m, leader);
3206
      }
3207
      if (trailer != 0) {
3208
        internal_free(m, trailer);
3209
      }
3210
      return chunk2mem(p);
3211
    }
3212
  }
3213
  return 0;
3214
}
3215
 
3216
/* ------------------------ comalloc/coalloc support --------------------- */
3217
 
3218
static void** ialloc(mstate m,
3219
                     size_t n_elements,
3220
                     size_t* sizes,
3221
                     int opts,
3222
                     void* chunks[]) {
3223
  /*
3224
    This provides common support for independent_X routines, handling
3225
    all of the combinations that can result.
3226
 
3227
    The opts arg has:
3228
    bit 0 set if all elements are same size (using sizes[0])
3229
    bit 1 set if elements should be zeroed
3230
  */
3231
 
3232
  size_t    element_size;   /* chunksize of each element, if all same */
3233
  size_t    contents_size;  /* total size of elements */
3234
  size_t    array_size;     /* request size of pointer array */
3235
  void*     mem;            /* malloced aggregate space */
3236
  mchunkptr p;              /* corresponding chunk */
3237
  size_t    remainder_size; /* remaining bytes while splitting */
3238
  void**    marray;         /* either "chunks" or malloced ptr array */
3239
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3240
  flag_t    was_enabled;    /* to disable mmap */
3241
  size_t    size;
3242
  size_t    i;
3243
 
3244
  /* compute array length, if needed */
3245
  if (chunks != 0) {
3246
    if (n_elements == 0)
3247
      return chunks; /* nothing to do */
3248
    marray = chunks;
3249
    array_size = 0;
3250
  }
3251
  else {
3252
    /* if empty req, must still return chunk representing empty array */
3253
    if (n_elements == 0)
3254
      return (void**)internal_malloc(m, 0);
3255
    marray = 0;
3256
    array_size = request2size(n_elements * (sizeof(void*)));
3257
  }
3258
 
3259
  /* compute total element size */
3260
  if (opts & 0x1) { /* all-same-size */
3261
    element_size = request2size(*sizes);
3262
    contents_size = n_elements * element_size;
3263
  }
3264
  else { /* add up all the sizes */
3265
    element_size = 0;
3266
    contents_size = 0;
3267
    for (i = 0; i != n_elements; ++i)
3268
      contents_size += request2size(sizes[i]);
3269
  }
3270
 
3271
  size = contents_size + array_size;
3272
 
3273
  /*
3274
     Allocate the aggregate chunk.  First disable direct-mmapping so
3275
     malloc won't use it, since we would not be able to later
3276
     free/realloc space internal to a segregated mmap region.
3277
  */
3278
  was_enabled = use_mmap(m);
3279
  disable_mmap(m);
3280
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3281
  if (was_enabled)
3282
    enable_mmap(m);
3283
  if (mem == 0)
3284
    return 0;
3285
 
3286
  if (PREACTION(m)) return 0;
3287
  p = mem2chunk(mem);
3288
  remainder_size = chunksize(p);
3289
 
3290
  assert(!is_mmapped(p));
3291
 
3292
  if (opts & 0x2) {       /* optionally clear the elements */
3293
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3294
  }
3295
 
3296
  /* If not provided, allocate the pointer array as final part of chunk */
3297
  if (marray == 0) {
3298
    size_t  array_chunk_size;
3299
    array_chunk = chunk_plus_offset(p, contents_size);
3300
    array_chunk_size = remainder_size - contents_size;
3301
    marray = (void**) (chunk2mem(array_chunk));
3302
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3303
    remainder_size = contents_size;
3304
  }
3305
 
3306
  /* split out elements */
3307
  for (i = 0; ; ++i) {
3308
    marray[i] = chunk2mem(p);
3309
    if (i != n_elements-1) {
3310
      if (element_size != 0)
3311
        size = element_size;
3312
      else
3313
        size = request2size(sizes[i]);
3314
      remainder_size -= size;
3315
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3316
      p = chunk_plus_offset(p, size);
3317
    }
3318
    else { /* the final element absorbs any overallocation slop */
3319
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3320
      break;
3321
    }
3322
  }
3323
 
3324
#if DEBUG
3325
  if (marray != chunks) {
3326
    /* final element must have exactly exhausted chunk */
3327
    if (element_size != 0) {
3328
      assert(remainder_size == element_size);
3329
    }
3330
    else {
3331
      assert(remainder_size == request2size(sizes[i]));
3332
    }
3333
    check_inuse_chunk(m, mem2chunk(marray));
3334
  }
3335
  for (i = 0; i != n_elements; ++i)
3336
    check_inuse_chunk(m, mem2chunk(marray[i]));
3337
 
3338
#endif /* DEBUG */
3339
 
3340
  POSTACTION(m);
3341
  return marray;
3342
}
3343
 
3344
 
3345
/* -------------------------- public routines ---------------------------- */
3346
 
3347
#if !ONLY_MSPACES
3348
 
3349
void* dlmalloc(size_t bytes) {
3350
  /*
3351
     Basic algorithm:
3352
     If a small request (< 256 bytes minus per-chunk overhead):
3353
       1. If one exists, use a remainderless chunk in associated smallbin.
3354
          (Remainderless means that there are too few excess bytes to
3355
          represent as a chunk.)
3356
       2. If it is big enough, use the dv chunk, which is normally the
3357
          chunk adjacent to the one used for the most recent small request.
3358
       3. If one exists, split the smallest available chunk in a bin,
3359
          saving remainder in dv.
3360
       4. If it is big enough, use the top chunk.
3361
       5. If available, get memory from system and use it
3362
     Otherwise, for a large request:
3363
       1. Find the smallest available binned chunk that fits, and use it
3364
          if it is better fitting than dv chunk, splitting if necessary.
3365
       2. If better fitting than any binned chunk, use the dv chunk.
3366
       3. If it is big enough, use the top chunk.
3367
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3368
       5. If available, get memory from system and use it
3369
 
3370
     The ugly goto's here ensure that postaction occurs along all paths.
3371
  */
3372
 
3373
  if (!PREACTION(gm)) {
3374
    void* mem;
3375
    size_t nb;
3376
    if (bytes <= MAX_SMALL_REQUEST) {
3377
      bindex_t idx;
3378
      binmap_t smallbits;
3379
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3380
      idx = small_index(nb);
3381
      smallbits = gm->smallmap >> idx;
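      /* Bit 0 of smallbits is set if bin idx is nonempty, bit 1 if bin
         idx+1 is; either one gives an exact or remainderless fit. */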
3382
 
3383
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3384
        mchunkptr b, p;
3385
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3386
        b = smallbin_at(gm, idx);
3387
        p = b->fd;
3388
        assert(chunksize(p) == small_index2size(idx));
3389
        unlink_first_small_chunk(gm, b, p, idx);
3390
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3391
        mem = chunk2mem(p);
3392
        check_malloced_chunk(gm, mem, nb);
3393
        goto postaction;
3394
      }
3395
 
3396
      else if (nb > gm->dvsize) {
3397
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3398
          mchunkptr b, p, r;
3399
          size_t rsize;
3400
          bindex_t i;
3401
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3402
          binmap_t leastbit = least_bit(leftbits);
3403
          compute_bit2idx(leastbit, i);
3404
          b = smallbin_at(gm, i);
3405
          p = b->fd;
3406
          assert(chunksize(p) == small_index2size(i));
3407
          unlink_first_small_chunk(gm, b, p, i);
3408
          rsize = small_index2size(i) - nb;
3409
          /* Fit here cannot be remainderless if 4byte sizes */
3410
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3411
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3412
          else {
3413
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3414
            r = chunk_plus_offset(p, nb);
3415
            set_size_and_pinuse_of_free_chunk(r, rsize);
3416
            replace_dv(gm, r, rsize);
3417
          }
3418
          mem = chunk2mem(p);
3419
          check_malloced_chunk(gm, mem, nb);
3420
          goto postaction;
3421
        }
3422
 
3423
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3424
          check_malloced_chunk(gm, mem, nb);
3425
          goto postaction;
3426
        }
3427
      }
3428
    }
3429
    else if (bytes >= MAX_REQUEST)
3430
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3431
    else {
3432
      nb = pad_request(bytes);
3433
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3434
        check_malloced_chunk(gm, mem, nb);
3435
        goto postaction;
3436
      }
3437
    }
3438
 
3439
    if (nb <= gm->dvsize) {
3440
      size_t rsize = gm->dvsize - nb;
3441
      mchunkptr p = gm->dv;
3442
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3443
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3444
        gm->dvsize = rsize;
3445
        set_size_and_pinuse_of_free_chunk(r, rsize);
3446
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3447
      }
3448
      else { /* exhaust dv */
3449
        size_t dvs = gm->dvsize;
3450
        gm->dvsize = 0;
3451
        gm->dv = 0;
3452
        set_inuse_and_pinuse(gm, p, dvs);
3453
      }
3454
      mem = chunk2mem(p);
3455
      check_malloced_chunk(gm, mem, nb);
3456
      goto postaction;
3457
    }
3458
 
3459
    else if (nb < gm->topsize) { /* Split top */
3460
      size_t rsize = gm->topsize -= nb;
3461
      mchunkptr p = gm->top;
3462
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3463
      r->head = rsize | PINUSE_BIT;
3464
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3465
      mem = chunk2mem(p);
3466
      check_top_chunk(gm, gm->top);
3467
      check_malloced_chunk(gm, mem, nb);
3468
      goto postaction;
3469
    }
3470
 
3471
    mem = sys_alloc(gm, nb);
3472
 
3473
  postaction:
3474
    POSTACTION(gm);
3475
    return mem;
3476
  }
3477
 
3478
  return 0;
3479
}
3480
 
3481
void dlfree(void* mem) {
3482
  /*
3483
     Consolidate freed chunks with preceding or succeeding bordering
3484
     free chunks, if they exist, and then place in a bin.  Intermixed
3485
     with special cases for top, dv, mmapped chunks, and usage errors.
3486
  */
3487
 
3488
  if (mem != 0) {
3489
    mchunkptr p  = mem2chunk(mem);
3490
#if FOOTERS
3491
    mstate fm = get_mstate_for(p);
3492
    if (!ok_magic(fm)) {
3493
      USAGE_ERROR_ACTION(fm, p);
3494
      return;
3495
    }
3496
#else /* FOOTERS */
3497
#define fm gm
3498
#endif /* FOOTERS */
3499
    if (!PREACTION(fm)) {
3500
      check_inuse_chunk(fm, p);
3501
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3502
        size_t psize = chunksize(p);
3503
        mchunkptr next = chunk_plus_offset(p, psize);
3504
        if (!pinuse(p)) {
3505
          size_t prevsize = p->prev_foot;
3506
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3507
            prevsize &= ~IS_MMAPPED_BIT;
3508
            psize += prevsize + MMAP_FOOT_PAD;
3509
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3510
              fm->footprint -= psize;
3511
            goto postaction;
3512
          }
3513
          else {
3514
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3515
            psize += prevsize;
3516
            p = prev;
3517
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3518
              if (p != fm->dv) {
3519
                unlink_chunk(fm, p, prevsize);
3520
              }
3521
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3522
                fm->dvsize = psize;
3523
                set_free_with_pinuse(p, psize, next);
3524
                goto postaction;
3525
              }
3526
            }
3527
            else
3528
              goto erroraction;
3529
          }
3530
        }
3531
 
3532
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3533
          if (!cinuse(next)) {  /* consolidate forward */
3534
            if (next == fm->top) {
3535
              size_t tsize = fm->topsize += psize;
3536
              fm->top = p;
3537
              p->head = tsize | PINUSE_BIT;
3538
              if (p == fm->dv) {
3539
                fm->dv = 0;
3540
                fm->dvsize = 0;
3541
              }
3542
              if (should_trim(fm, tsize))
3543
                sys_trim(fm, 0);
3544
              goto postaction;
3545
            }
3546
            else if (next == fm->dv) {
3547
              size_t dsize = fm->dvsize += psize;
3548
              fm->dv = p;
3549
              set_size_and_pinuse_of_free_chunk(p, dsize);
3550
              goto postaction;
3551
            }
3552
            else {
3553
              size_t nsize = chunksize(next);
3554
              psize += nsize;
3555
              unlink_chunk(fm, next, nsize);
3556
              set_size_and_pinuse_of_free_chunk(p, psize);
3557
              if (p == fm->dv) {
3558
                fm->dvsize = psize;
3559
                goto postaction;
3560
              }
3561
            }
3562
          }
3563
          else
3564
            set_free_with_pinuse(p, psize, next);
3565
          insert_chunk(fm, p, psize);
3566
          check_free_chunk(fm, p);
3567
          goto postaction;
3568
        }
3569
      }
3570
    erroraction:
3571
      USAGE_ERROR_ACTION(fm, p);
3572
    postaction:
3573
      POSTACTION(fm);
3574
    }
3575
  }
3576
#if !FOOTERS
3577
#undef fm
3578
#endif /* FOOTERS */
3579
}
3580
 
3581
void* dlcalloc(size_t n_elements, size_t elem_size) {
3582
  void* mem;
3583
  size_t req = 0;
3584
  if (n_elements != 0) {
3585
    req = n_elements * elem_size;
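    /* Cheap overflow guard: the division below is only paid for when
       either operand has bits above 0xffff; if both fit in 16 bits the
       product cannot overflow size_t. */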
3586
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3587
        (req / n_elements != elem_size))
3588
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3589
  }
3590
  mem = dlmalloc(req);
3591
  if (mem != 0)
3592
    memset(mem, 0, req);
3593
  return mem;
3594
}
3595
 
3596
void* dlrealloc(void* oldmem, size_t bytes) {
3597
  if (oldmem == 0)
3598
    return dlmalloc(bytes);
3599
#ifdef REALLOC_ZERO_BYTES_FREES
3600
  if (bytes == 0) {
3601
    dlfree(oldmem);
3602
    return 0;
3603
  }
3604
#endif /* REALLOC_ZERO_BYTES_FREES */
3605
  else {
3606
#if ! FOOTERS
3607
    mstate m = gm;
3608
#else /* FOOTERS */
3609
    mstate m = get_mstate_for(mem2chunk(oldmem));
3610
    if (!ok_magic(m)) {
3611
      USAGE_ERROR_ACTION(m, oldmem);
3612
      return 0;
3613
    }
3614
#endif /* FOOTERS */
3615
    return internal_realloc(m, oldmem, bytes);
3616
  }
3617
}
3618
 
3619
void* dlmemalign(size_t alignment, size_t bytes) {
3620
  return internal_memalign(gm, alignment, bytes);
3621
}
3622
 
3623
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3624
                                 void* chunks[]) {
3625
  size_t sz = elem_size; /* serves as 1-element array */
3626
  return ialloc(gm, n_elements, &sz, 3, chunks);
3627
}
3628
 
3629
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3630
                                   void* chunks[]) {
3631
  return ialloc(gm, n_elements, sizes, 0, chunks);
3632
}
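 
/*
  A minimal usage sketch for the two wrappers above (illustration only,
  not part of this file; the struct and function names below are
  hypothetical).  dlindependent_calloc hands back n_elements equal-size,
  zeroed chunks; dlindependent_comalloc hands back chunks of per-element
  sizes carved from one contiguous region.  Each returned element is
  released individually with dlfree.
 
    struct node { struct node* next; int value; };
 
    static void make_list(void) {
      void* mem[16];
      size_t i;
      size_t sz = sizeof(struct node);
      if (dlindependent_calloc(16, sz, mem) != 0) {  // mem[] receives the ptrs
        for (i = 0; i != 16; ++i) {
          struct node* n = (struct node*)mem[i];     // already zeroed
          n->next = (i + 1 != 16)? (struct node*)mem[i+1] : 0;
        }
      }
    }
 
    static void make_header_and_payload(size_t payload_bytes) {
      size_t sizes[2];
      void*  mem[2];
      sizes[0] = sizeof(struct node);
      sizes[1] = payload_bytes;
      if (dlindependent_comalloc(2, sizes, mem) != 0) {
        // mem[0] is the header, mem[1] the payload; they sit next to each
        // other in one aggregate chunk but are freed separately:
        dlfree(mem[1]);
        dlfree(mem[0]);
      }
    }
*/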
3633
 
3634
void* dlvalloc(size_t bytes) {
3635
  size_t pagesz;
3636
  init_mparams();
3637
  pagesz = mparams.page_size;
3638
  return dlmemalign(pagesz, bytes);
3639
}
3640
 
3641
void* dlpvalloc(size_t bytes) {
3642
  size_t pagesz;
3643
  init_mparams();
3644
  pagesz = mparams.page_size;
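  /* Round the request up to a whole number of pages: pagesz is a power
     of two, so adding pagesz - 1 and masking with ~(pagesz - 1) yields
     the next multiple. */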
3645
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3646
}
3647
 
3648
int dlmalloc_trim(size_t pad) {
3649
  int result = 0;
3650
  if (!PREACTION(gm)) {
3651
    result = sys_trim(gm, pad);
3652
    POSTACTION(gm);
3653
  }
3654
  return result;
3655
}
3656
 
3657
size_t dlmalloc_footprint(void) {
3658
  return gm->footprint;
3659
}
3660
 
3661
size_t dlmalloc_max_footprint(void) {
3662
  return gm->max_footprint;
3663
}
3664
 
3665
#if !NO_MALLINFO
3666
struct mallinfo dlmallinfo(void) {
3667
  return internal_mallinfo(gm);
3668
}
3669
#endif /* NO_MALLINFO */
3670
 
3671
//void dlmalloc_stats() {
3672
//  internal_malloc_stats(gm);
3673
//}
3674
 
3675
size_t dlmalloc_usable_size(void* mem) {
3676
  if (mem != 0) {
3677
    mchunkptr p = mem2chunk(mem);
3678
    if (cinuse(p))
3679
      return chunksize(p) - overhead_for(p);
3680
  }
3681
  return 0;
3682
}
3683
 
3684
int dlmallopt(int param_number, int value) {
3685
  return change_mparam(param_number, value);
3686
}
3687
 
3688
#endif /* !ONLY_MSPACES */
3689
 
3690
/* ----------------------------- user mspaces ---------------------------- */
3691
 
3692
#if MSPACES
3693
#endif /* MSPACES */
3694
 
3695
/* -------------------- Alternative MORECORE functions ------------------- */
3696
 
3697
/*
3698
  Guidelines for creating a custom version of MORECORE:
3699
 
3700
  * For best performance, MORECORE should allocate in multiples of pagesize.
3701
  * MORECORE may allocate more memory than requested. (Or even less,
3702
      but this will usually result in a malloc failure.)
3703
  * MORECORE must not allocate memory when given argument zero, but
3704
      instead return one past the end address of memory from previous
3705
      nonzero call.
3706
  * For best performance, consecutive calls to MORECORE with positive
3707
      arguments should return increasing addresses, indicating that
3708
      space has been contiguously extended.
3709
  * Even though consecutive calls to MORECORE need not return contiguous
3710
      addresses, it must be OK for malloc'ed chunks to span multiple
3711
      regions in those cases where they do happen to be contiguous.
3712
  * MORECORE need not handle negative arguments -- it may instead
3713
      just return MFAIL when given negative arguments.
3714
      Negative arguments are always multiples of pagesize. MORECORE
3715
      must not misinterpret negative args as large positive unsigned
3716
      args. You can suppress all such calls from even occurring by defining
3717
      MORECORE_CANNOT_TRIM.
3718
 
3719
  As an example alternative MORECORE, here is a custom allocator
3720
  kindly contributed for pre-OSX macOS.  It uses virtually but not
3721
  necessarily physically contiguous non-paged memory (locked in,
3722
  present and won't get swapped out).  You can use it by uncommenting
3723
  this section, adding some #includes, and setting up the appropriate
3724
  defines above:
3725
 
3726
      #define MORECORE osMoreCore
3727
 
3728
  There is also a shutdown routine that should somehow be called for
3729
  cleanup upon program exit.
3730
 
3731
  #define MAX_POOL_ENTRIES 100
3732
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
3733
  static int next_os_pool;
3734
  void *our_os_pools[MAX_POOL_ENTRIES];
3735
 
3736
  void *osMoreCore(int size)
3737
  {
3738
    void *ptr = 0;
3739
    static void *sbrk_top = 0;
3740
 
3741
    if (size > 0)
3742
    {
3743
      if (size < MINIMUM_MORECORE_SIZE)
3744
         size = MINIMUM_MORECORE_SIZE;
3745
      if (CurrentExecutionLevel() == kTaskLevel)
3746
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
3747
      if (ptr == 0)
3748
      {
3749
        return (void *) MFAIL;
3750
      }
3751
      // save ptrs so they can be freed during cleanup
3752
      our_os_pools[next_os_pool] = ptr;
3753
      next_os_pool++;
3754
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
3755
      sbrk_top = (char *) ptr + size;
3756
      return ptr;
3757
    }
3758
    else if (size < 0)
3759
    {
3760
      // we don't currently support shrink behavior
3761
      return (void *) MFAIL;
3762
    }
3763
    else
3764
    {
3765
      return sbrk_top;
3766
    }
3767
  }
3768
 
3769
  // cleanup any allocated memory pools
3770
  // called as last thing before shutting down driver
3771
 
3772
  void osCleanupMem(void)
3773
  {
3774
    void **ptr;
3775
 
3776
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
3777
      if (*ptr)
3778
      {
3779
         PoolDeallocate(*ptr);
3780
         *ptr = 0;
3781
      }
3782
  }
3783
 
3784
*/
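 
/*
  A second, deliberately minimal MORECORE sketch (illustration only; the
  names staticMoreCore, arena, ARENA_SIZE and arena_used are hypothetical
  and there is no real OS call behind it).  Its main point is the
  zero-argument contract stated above: MORECORE(0) must return one past
  the end of the memory handed out so far, without allocating.
 
    #define ARENA_SIZE (1024 * 1024)
    static char arena[ARENA_SIZE];
    static size_t arena_used;
 
    void* staticMoreCore(int size)
    {
      if (size > 0) {
        if ((size_t)size > ARENA_SIZE - arena_used)
          return (void*) MFAIL;                      // arena exhausted
        arena_used += (size_t)size;
        return arena + arena_used - (size_t)size;    // old break, sbrk-style
      }
      else if (size < 0) {
        return (void*) MFAIL;                        // trimming not supported
      }
      else {
        return arena + arena_used;                   // one past current end
      }
    }
 
  To try something like this, one would presumably define, near the other
  compile-time options:
 
    #define MORECORE staticMoreCore
    #define MORECORE_CONTIGUOUS 1
    #define MORECORE_CANNOT_TRIM
    #define HAVE_MMAP 0
*/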
3785
 
3786
 
3787
/* -----------------------------------------------------------------------
3788
History:
3789
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
3790
      * Add max_footprint functions
3791
      * Ensure all appropriate literals are size_t
3792
      * Fix conditional compilation problem for some #define settings
3793
      * Avoid concatenating segments with the one provided
3794
        in create_mspace_with_base
3795
      * Rename some variables to avoid compiler shadowing warnings
3796
      * Use explicit lock initialization.
3797
      * Better handling of sbrk interference.
3798
      * Simplify and fix segment insertion, trimming and mspace_destroy
3799
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
3800
      * Thanks especially to Dennis Flanagan for help on these.
3801
 
3802
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
3803
      * Fix memalign brace error.
3804
 
3805
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
3806
      * Fix improper #endif nesting in C++
3807
      * Add explicit casts needed for C++
3808
 
3809
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
3810
      * Use trees for large bins
3811
      * Support mspaces
3812
      * Use segments to unify sbrk-based and mmap-based system allocation,
3813
        removing need for emulation on most platforms without sbrk.
3814
      * Default safety checks
3815
      * Optional footer checks. Thanks to William Robertson for the idea.
3816
      * Internal code refactoring
3817
      * Incorporate suggestions and platform-specific changes.
3818
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
3819
        Aaron Bachmann,  Emery Berger, and others.
3820
      * Speed up non-fastbin processing enough to remove fastbins.
3821
      * Remove useless cfree() to avoid conflicts with other apps.
3822
      * Remove internal memcpy, memset. Compilers handle builtins better.
3823
      * Remove some options that no one ever used and rename others.
3824
 
3825
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
3826
      * Fix malloc_state bitmap array misdeclaration
3827
 
3828
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
3829
      * Allow tuning of FIRST_SORTED_BIN_SIZE
3830
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
3831
      * Better detection and support for non-contiguousness of MORECORE.
3832
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
3833
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
3834
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
3835
      * Raised default trim and map thresholds to 256K.
3836
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
3837
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
3838
      * Branch-free bin calculation
3839
      * Default trim and mmap thresholds now 256K.
3840
 
3841
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
3842
      * Introduce independent_comalloc and independent_calloc.
3843
        Thanks to Michael Pachos for motivation and help.
3844
      * Make optional .h file available
3845
      * Allow > 2GB requests on 32bit systems.
3846
      * new WIN32 sbrk, mmap, munmap, lock code.
3847
        Thanks also to Andreas Mueller,
3848
        and Anonymous.
3849
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
3850
        helping test this.)
3851
      * memalign: check alignment arg
3852
      * realloc: don't try to shift chunks backwards, since this
3853
        leads to  more fragmentation in some programs and doesn't
3854
        seem to help in any others.
3855
      * Collect all cases in malloc requiring system memory into sysmalloc
3856
      * Use mmap as backup to sbrk
3857
      * Place all internal state in malloc_state
3858
      * Introduce fastbins (although similar to 2.5.1)
3859
      * Many minor tunings and cosmetic improvements
3860
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
3861
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
3862
        Thanks to Tony E. Bennett and others.
3863
      * Include errno.h to support default failure action.
3864
 
3865
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
3866
      * return null for negative arguments
3867
      * Added several WIN32 cleanups from Martin C. Fong.
3868
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3869
          (e.g. WIN32 platforms)
3870
         * Cleanup header file inclusion for WIN32 platforms
3871
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3872
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3873
           memory allocation routines
3874
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3875
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3876
           usage of 'assert' in non-WIN32 code
3877
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3878
           avoid infinite loop
3879
      * Always call 'fREe()' rather than 'free()'
3880
 
3881
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
3882
      * Fixed ordering problem with boundary-stamping
3883
 
3884
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
3885
      * Added pvalloc, as recommended by H.J. Liu
3886
      * Added 64bit pointer support mainly from Wolfram Gloger
3887
      * Added anonymously donated WIN32 sbrk emulation
3888
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3889
      * malloc_extend_top: fix mask error that caused wastage after
3890
        foreign sbrks
3891
      * Add linux mremap support code from HJ Liu
3892
 
3893
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
3894
      * Integrated most documentation with the code.
3895
      * Add support for mmap, with help from
3896
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3897
      * Use last_remainder in more cases.
3898
      * Pack bins using idea from colin@nyx10.cs.du.edu
3899
      * Use ordered bins instead of best-fit threshold
3900
      * Eliminate block-local decls to simplify tracing and debugging.
3901
      * Support another case of realloc via move into top
3902
      * Fix error occurring when initial sbrk_base not word-aligned.
3903
      * Rely on page size for units instead of SBRK_UNIT to
3904
        avoid surprises about sbrk alignment conventions.
3905
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3906
        (raymond@es.ele.tue.nl) for the suggestion.
3907
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3908
      * More precautions for cases where other routines call sbrk,
3909
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3910
      * Added macros etc., allowing use in linux libc from
3911
        H.J. Lu (hjl@gnu.ai.mit.edu)
3912
      * Inverted this history list
3913
 
3914
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
3915
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3916
      * Removed all preallocation code since under current scheme
3917
        the work required to undo bad preallocations exceeds
3918
        the work saved in good cases for most test programs.
3919
      * No longer use return list or unconsolidated bins since
3920
        no scheme using them consistently outperforms those that don't
3921
        given above changes.
3922
      * Use best fit for very large chunks to prevent some worst-cases.
3923
      * Added some support for debugging
3924
 
3925
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
3926
      * Removed footers when chunks are in use. Thanks to
3927
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3928
 
3929
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
3930
      * Added malloc_trim, with help from Wolfram Gloger
3931
        (wmglo@Dent.MED.Uni-Muenchen.DE).
3932
 
3933
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
3934
 
3935
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
3936
      * realloc: try to expand in both directions
3937
      * malloc: swap order of clean-bin strategy;
3938
      * realloc: only conditionally expand backwards
3939
      * Try not to scavenge used bins
3940
      * Use bin counts as a guide to preallocation
3941
      * Occasionally bin return list chunks in first scan
3942
      * Add a few optimizations from colin@nyx10.cs.du.edu
3943
 
3944
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
3945
      * faster bin computation & slightly different binning
3946
      * merged all consolidations to one part of malloc proper
3947
         (eliminating old malloc_find_space & malloc_clean_bin)
3948
      * Scan 2 returns chunks (not just 1)
3949
      * Propagate failure in realloc if malloc returns 0
3950
      * Add stuff to allow compilation on non-ANSI compilers
3951
          from kpv@research.att.com
3952
 
3953
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
3954
      * removed potential for odd address access in prev_chunk
3955
      * removed dependency on getpagesize.h
3956
      * misc cosmetics and a bit more internal documentation
3957
      * anticosmetics: mangled names in macros to evade debugger strangeness
3958
      * tested on sparc, hp-700, dec-mips, rs6000
3959
          with gcc & native cc (hp, dec only) allowing
3960
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3961
 
3962
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
3963
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3964
         structure of old version,  but most details differ.)
3965
 
3966
*/