/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
12
 
13
* Quickstart
14
 
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
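
  For example, on a typical unix-like system (file names here are only
  illustrative) the whole library can be built and linked with:

     cc -O3 -c malloc.c
     cc -O3 myprog.c malloc.o -o myprog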
20
 
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
34
 
35
* Vital statistics:
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
43
 
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128bytes), at the expense of using more space.
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., the number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
90
 
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
100
 
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
       else. And if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
129
 
130
* Overview of algorithms
131
 
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
157
 
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
162
 
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
167
 
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
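
  For example (a minimal sketch, assuming this file is compiled with
  MSPACES enabled), a short-lived arena can be created, used, and then
  destroyed to release everything it holds at once:

    mspace arena = create_mspace(0, 0);   // default capacity, no locking
    void*  buf   = mspace_malloc(arena, 1024);
    mspace_free(arena, buf);              // optional; destroy_mspace below
    destroy_mspace(arena);                // releases all memory held by arena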
201
 
202
 -------------------------  Compile-time options ---------------------------
203
 
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly casted.
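
  For example (MY_TRIM_THRESHOLD is only an illustrative name), following
  the convention used for the defaults below, write

     #define MY_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)

  rather than the bare literal 2097152, so the value is computed with
  size_t precision.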
207
 
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
218
 
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
222
 
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
225
 
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
230
 
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
235
 
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
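
  For example (illustrative), a program compiled with -DUSE_DL_PREFIX can
  mix the two allocators, provided each pointer is released by the
  allocator that produced it:

    void* a = dlmalloc(64);   // this file's allocator
    void* b = malloc(64);     // the system allocator
    dlfree(a);
    free(b);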
243
 
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to be bypassed
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
265
 
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282
  which will usually make debugging easier.
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc fails because no
  memory is available.
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
290
 
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
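
  For illustration only (my_get_memory, my_release_memory and
  my_current_break are hypothetical system primitives, not part of this
  file), a MORECORE-compatible wrapper might look like:

    void* my_morecore(intptr_t increment) {
      if (increment > 0)   // should return MFAIL, not 0, on failure
        return my_get_memory((size_t)increment);
      if (increment < 0)
        my_release_memory((size_t)-increment);
      return my_current_break();   // increment == 0: report current break
    }
    #define MORECORE my_morecore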
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
309
  Setting it false when definitely non-contiguous saves time
310
  and possibly wasted space it would take to discover this though.
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
317
 
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
  able to unmap memory that may have been allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
331
 
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where page size is determined using getSystemInfo during
349
  initialization.
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
354
 
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
358
  those in this file.
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions from being called so
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
438
 
439
*/
440
 
441
#ifdef KOLIBRI
442
 
443
#define IMPORT __attribute__ ((stdcall)) __attribute__ ((dllimport))
444
 
445
void*  IMPORT KernelAlloc(unsigned size)__asm__("KernelAlloc");
446
void   IMPORT KernelFree(void *mem)__asm__("KernelFree");
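
/* On KolibriOS builds these kernel imports back the CALL_MMAP/CALL_MUNMAP
   emulation defined further below; all system memory used by this malloc
   is obtained and released through them. */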
447
 
448
#else
449
 
450
#define IMPORT __attribute__ ((dllimport))
451
 
452
void* __fastcall IMPORT  mem_alloc(unsigned size, unsigned flags)__asm__("MemAlloc");
453
void  __fastcall IMPORT  mem_free(void *mem)__asm__("MemFree");
454
 
455
#endif
456
 
457
#define MALLOC_ALIGNMENT ((size_t)8U)
458
#define DEFAULT_MMAP_THRESHOLD  ((size_t)32U * (size_t)1024U)
459
#define NO_MALLINFO  1
460
#define MORECORE_CANNOT_TRIM
461
#define FOOTERS 0
462
#define ABORT
463
 
464
#undef  WIN32
465
#undef  _WIN32
466
 
467
typedef unsigned int size_t;
468
 
469
#define HAVE_MMAP 1
470
#define HAVE_MORECORE 0
471
#define LACKS_UNISTD_H
472
#define LACKS_SYS_PARAM_H
473
#define LACKS_SYS_MMAN_H
474
#define LACKS_STRING_H
475
#define LACKS_STRINGS_H
476
#define LACKS_SYS_TYPES_H
477
#define LACKS_ERRNO_H
478
#define MALLOC_FAILURE_ACTION
479
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
480
 
481
#if defined(DARWIN) || defined(_DARWIN)
482
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
483
#ifndef HAVE_MORECORE
484
#define HAVE_MORECORE 0
485
#define HAVE_MMAP 1
486
#endif  /* HAVE_MORECORE */
487
#endif  /* DARWIN */
488
 
489
#ifndef LACKS_SYS_TYPES_H
490
#include <sys/types.h>  /* For size_t */
491
#endif  /* LACKS_SYS_TYPES_H */
492
 
493
/* The maximum possible size_t value has all bits set */
494
#define MAX_SIZE_T           (~(size_t)0)
495
 
496
#ifndef ONLY_MSPACES
497
#define ONLY_MSPACES 0
498
#endif  /* ONLY_MSPACES */
499
#ifndef MSPACES
500
#if ONLY_MSPACES
501
#define MSPACES 1
502
#else   /* ONLY_MSPACES */
503
#define MSPACES 0
504
#endif  /* ONLY_MSPACES */
505
#endif  /* MSPACES */
506
#ifndef MALLOC_ALIGNMENT
507
#define MALLOC_ALIGNMENT ((size_t)8U)
508
#endif  /* MALLOC_ALIGNMENT */
509
#ifndef FOOTERS
510
#define FOOTERS 0
511
#endif  /* FOOTERS */
512
#ifndef ABORT
513
#define ABORT  abort()
514
#endif  /* ABORT */
515
#ifndef ABORT_ON_ASSERT_FAILURE
516
#define ABORT_ON_ASSERT_FAILURE 1
517
#endif  /* ABORT_ON_ASSERT_FAILURE */
518
#ifndef PROCEED_ON_ERROR
519
#define PROCEED_ON_ERROR 0
520
#endif  /* PROCEED_ON_ERROR */
521
#ifndef USE_LOCKS
522
#define USE_LOCKS 0
523
#endif  /* USE_LOCKS */
524
#ifndef INSECURE
525
#define INSECURE 0
526
#endif  /* INSECURE */
527
#ifndef HAVE_MMAP
528
#define HAVE_MMAP 1
529
#endif  /* HAVE_MMAP */
530
#ifndef MMAP_CLEARS
531
#define MMAP_CLEARS 1
532
#endif  /* MMAP_CLEARS */
533
#ifndef HAVE_MREMAP
534
#ifdef linux
535
#define HAVE_MREMAP 1
536
#else   /* linux */
537
#define HAVE_MREMAP 0
538
#endif  /* linux */
539
#endif  /* HAVE_MREMAP */
540
#ifndef MALLOC_FAILURE_ACTION
541
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
542
#endif  /* MALLOC_FAILURE_ACTION */
543
#ifndef HAVE_MORECORE
544
#if ONLY_MSPACES
545
#define HAVE_MORECORE 0
546
#else   /* ONLY_MSPACES */
547
#define HAVE_MORECORE 1
548
#endif  /* ONLY_MSPACES */
549
#endif  /* HAVE_MORECORE */
550
#if !HAVE_MORECORE
551
#define MORECORE_CONTIGUOUS 0
552
#else   /* !HAVE_MORECORE */
553
#ifndef MORECORE
554
#define MORECORE sbrk
555
#endif  /* MORECORE */
556
#ifndef MORECORE_CONTIGUOUS
557
#define MORECORE_CONTIGUOUS 1
558
#endif  /* MORECORE_CONTIGUOUS */
559
#endif  /* HAVE_MORECORE */
560
#ifndef DEFAULT_GRANULARITY
561
#if MORECORE_CONTIGUOUS
562
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
563
#else   /* MORECORE_CONTIGUOUS */
564
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
565
#endif  /* MORECORE_CONTIGUOUS */
566
#endif  /* DEFAULT_GRANULARITY */
567
#ifndef DEFAULT_TRIM_THRESHOLD
568
#ifndef MORECORE_CANNOT_TRIM
569
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
570
#else   /* MORECORE_CANNOT_TRIM */
571
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
572
#endif  /* MORECORE_CANNOT_TRIM */
573
#endif  /* DEFAULT_TRIM_THRESHOLD */
574
#ifndef DEFAULT_MMAP_THRESHOLD
575
#if HAVE_MMAP
576
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
577
#else   /* HAVE_MMAP */
578
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
579
#endif  /* HAVE_MMAP */
580
#endif  /* DEFAULT_MMAP_THRESHOLD */
581
#ifndef USE_BUILTIN_FFS
582
#define USE_BUILTIN_FFS 0
583
#endif  /* USE_BUILTIN_FFS */
584
#ifndef USE_DEV_RANDOM
585
#define USE_DEV_RANDOM 0
586
#endif  /* USE_DEV_RANDOM */
587
#ifndef NO_MALLINFO
588
#define NO_MALLINFO 0
589
#endif  /* NO_MALLINFO */
590
#ifndef MALLINFO_FIELD_TYPE
591
#define MALLINFO_FIELD_TYPE size_t
592
#endif  /* MALLINFO_FIELD_TYPE */
593
 
594
 
595
/*
596
  mallopt tuning options.  SVID/XPG defines four standard parameter
597
  numbers for mallopt, normally defined in malloc.h.  None of these
598
  are used in this malloc, so setting them has no effect. But this
599
  malloc does support the following options.
600
*/
601
 
602
#define M_TRIM_THRESHOLD     (-1)
603
#define M_GRANULARITY        (-2)
604
#define M_MMAP_THRESHOLD     (-3)
605
 
606
/* ------------------------ Mallinfo declarations ------------------------ */
607
 
608
#if !NO_MALLINFO
609
#endif /* NO_MALLINFO */
610
 
611
#ifdef __cplusplus
612
extern "C" {
613
#endif /* __cplusplus */
614
 
615
#if !ONLY_MSPACES
616
 
617
/* ------------------- Declarations of public routines ------------------- */
618
 
619
#ifndef USE_DL_PREFIX
620
#define dlcalloc               calloc
621
#define dlfree                 free
622
#define dlmalloc               malloc
623
#define dlmemalign             memalign
624
#define dlrealloc              realloc
625
#define dlvalloc               valloc
626
#define dlpvalloc              pvalloc
627
#define dlmallinfo             mallinfo
628
#define dlmallopt              mallopt
629
#define dlmalloc_trim          malloc_trim
630
#define dlmalloc_stats         malloc_stats
631
#define dlmalloc_usable_size   malloc_usable_size
632
#define dlmalloc_footprint     malloc_footprint
633
#define dlmalloc_max_footprint malloc_max_footprint
634
#define dlindependent_calloc   independent_calloc
635
#define dlindependent_comalloc independent_comalloc
636
#endif /* USE_DL_PREFIX */
637
 
638
 
639
/*
640
  malloc(size_t n)
641
  Returns a pointer to a newly allocated chunk of at least n bytes, or
642
  null if no space is available, in which case errno is set to ENOMEM
643
  on ANSI C systems.
644
 
645
  If n is zero, malloc returns a minimum-sized chunk. (The minimum
646
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
647
  systems.)  Note that size_t is an unsigned type, so calls with
648
  arguments that would be negative if signed are interpreted as
649
  requests for huge amounts of space, which will often fail. The
650
  maximum supported value of n differs across systems, but is in all
651
  cases less than the maximum representable value of a size_t.
652
*/
653
void* dlmalloc(size_t);
654
 
655
/*
656
  free(void* p)
657
  Releases the chunk of memory pointed to by p, that had been previously
658
  allocated using malloc or a related routine such as realloc.
659
  It has no effect if p is null. If p was not malloced or already
660
  freed, free(p) will by default cause the current program to abort.
661
*/
662
void  dlfree(void*);
663
 
664
/*
665
  calloc(size_t n_elements, size_t element_size);
666
  Returns a pointer to n_elements * element_size bytes, with all locations
667
  set to zero.
668
*/
669
void* dlcalloc(size_t, size_t);
670
 
671
/*
672
  realloc(void* p, size_t n)
673
  Returns a pointer to a chunk of size n that contains the same data
674
  as does chunk p up to the minimum of (n, p's size) bytes, or null
675
  if no space is available.
676
 
677
  The returned pointer may or may not be the same as p. The algorithm
678
  prefers extending p in most cases when possible, otherwise it
679
  employs the equivalent of a malloc-copy-free sequence.
680
 
681
  If p is null, realloc is equivalent to malloc.
682
 
683
  If space is not available, realloc returns null, errno is set (if on
684
  ANSI) and p is NOT freed.
685
 
686
  if n is for fewer bytes than already held by p, the newly unused
687
  space is lopped off and freed if possible.  realloc with a size
688
  argument of zero (re)allocates a minimum-sized chunk.
689
 
690
  The old unix realloc convention of allowing the last-free'd chunk
691
  to be used as an argument to realloc is not supported.
692
*/
693
 
694
void* dlrealloc(void*, size_t);
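
/*
  For example (an illustrative pattern, not part of this API): because a
  failed realloc leaves p intact, grow a buffer through a temporary so the
  original block is not leaked on failure:

    void* tmp = realloc(p, new_size);
    if (tmp != 0)
      p = tmp;
    else
      handle_failure();   // hypothetical handler; p is still valid here
*/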
695
 
696
/*
697
  memalign(size_t alignment, size_t n);
698
  Returns a pointer to a newly allocated chunk of n bytes, aligned
699
  in accord with the alignment argument.
700
 
701
  The alignment argument should be a power of two. If the argument is
702
  not a power of two, the nearest greater power is used.
703
  8-byte alignment is guaranteed by normal malloc calls, so don't
704
  bother calling memalign with an argument of 8 or less.
705
 
706
  Overreliance on memalign is a sure way to fragment space.
707
*/
708
void* dlmemalign(size_t, size_t);
709
 
710
/*
711
  valloc(size_t n);
712
  Equivalent to memalign(pagesize, n), where pagesize is the page
713
  size of the system. If the pagesize is unknown, 4096 is used.
714
*/
715
void* dlvalloc(size_t);
716
 
717
/*
718
  mallopt(int parameter_number, int parameter_value)
719
  Sets tunable parameters. The format is to provide a
720
  (parameter-number, parameter-value) pair.  mallopt then sets the
721
  corresponding parameter to the argument value if it can (i.e., so
722
  long as the value is meaningful), and returns 1 if successful else
723
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
724
  normally defined in malloc.h.  None of these are used in this malloc,
725
  so setting them has no effect. But this malloc also supports other
726
  options in mallopt. See below for details.  Briefly, supported
727
  parameters are as follows (listed defaults are for "typical"
728
  configurations).
729
 
730
  Symbol            param #  default    allowed param values
731
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
732
  M_GRANULARITY        -2     page size   any power of 2 >= page size
733
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
734
*/
735
int dlmallopt(int, int);
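
/*
  For example (illustrative): tune the allocator at run time, checking the
  return value since out-of-range settings are rejected:

    mallopt(M_GRANULARITY, 64 * 1024);            // power of 2 >= page size
    if (mallopt(M_MMAP_THRESHOLD, 1024 * 1024) == 0)
      report_bad_setting();                       // hypothetical handler
*/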
736
 
737
/*
738
  malloc_footprint();
739
  Returns the number of bytes obtained from the system.  The total
740
  number of bytes allocated by malloc, realloc etc., is less than this
741
  value. Unlike mallinfo, this function returns only a precomputed
742
  result, so can be called frequently to monitor memory consumption.
743
  Even if locks are otherwise defined, this function does not use them,
744
  so results might not be up to date.
745
*/
746
size_t dlmalloc_footprint(void);
747
 
748
/*
749
  malloc_max_footprint();
750
  Returns the maximum number of bytes obtained from the system. This
751
  value will be greater than current footprint if deallocated space
752
  has been reclaimed by the system. The peak number of bytes allocated
753
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
754
  this function returns only a precomputed result, so can be called
755
  frequently to monitor memory consumption.  Even if locks are
756
  otherwise defined, this function does not use them, so results might
757
  not be up to date.
758
*/
759
size_t dlmalloc_max_footprint(void);
760
 
761
#if !NO_MALLINFO
762
#endif /* NO_MALLINFO */
763
 
764
/*
765
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
766
 
767
  independent_calloc is similar to calloc, but instead of returning a
768
  single cleared space, it returns an array of pointers to n_elements
769
  independent elements that can hold contents of size elem_size, each
770
  of which starts out cleared, and can be independently freed,
771
  realloc'ed etc. The elements are guaranteed to be adjacently
772
  allocated (this is not guaranteed to occur with multiple callocs or
773
  mallocs), which may also improve cache locality in some
774
  applications.
775
 
776
  The "chunks" argument is optional (i.e., may be null, which is
777
  probably the most typical usage). If it is null, the returned array
778
  is itself dynamically allocated and should also be freed when it is
779
  no longer needed. Otherwise, the chunks array must be of at least
780
  n_elements in length. It is filled in with the pointers to the
781
  chunks.
782
 
783
  In either case, independent_calloc returns this pointer array, or
784
  null if the allocation failed.  If n_elements is zero and "chunks"
785
  is null, it returns a chunk representing an array with zero elements
786
  (which should be freed if not wanted).
787
 
788
  Each element must be individually freed when it is no longer
789
  needed. If you'd like to instead be able to free all at once, you
790
  should instead use regular calloc and assign pointers into this
791
  space to represent elements.  (In this case though, you cannot
792
  independently free elements.)
793
 
794
  independent_calloc simplifies and speeds up implementations of many
795
  kinds of pools.  It may also be useful when constructing large data
796
  structures that initially have a fixed number of fixed-sized nodes,
797
  but the number is not known at compile time, and some of the nodes
798
  may later need to be freed. For example:
799
 
800
  struct Node { int item; struct Node* next; };
801
 
802
  struct Node* build_list() {
803
    struct Node** pool;
804
    int n = read_number_of_nodes_needed();
805
    if (n <= 0) return 0;
806
    pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
807
    if (pool == 0) die();
808
    // organize into a linked list...
809
    struct Node* first = pool[0];
810
    for (int i = 0; i < n-1; ++i)
811
      pool[i]->next = pool[i+1];
812
    free(pool);     // Can now free the array (or not, if it is needed later)
813
    return first;
814
  }
815
*/
816
void** dlindependent_calloc(size_t, size_t, void**);
817
 
818
/*
819
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
820
 
821
  independent_comalloc allocates, all at once, a set of n_elements
822
  chunks with sizes indicated in the "sizes" array.    It returns
823
  an array of pointers to these elements, each of which can be
824
  independently freed, realloc'ed etc. The elements are guaranteed to
825
  be adjacently allocated (this is not guaranteed to occur with
826
  multiple callocs or mallocs), which may also improve cache locality
827
  in some applications.
828
 
829
  The "chunks" argument is optional (i.e., may be null). If it is null
830
  the returned array is itself dynamically allocated and should also
831
  be freed when it is no longer needed. Otherwise, the chunks array
832
  must be of at least n_elements in length. It is filled in with the
833
  pointers to the chunks.
834
 
835
  In either case, independent_comalloc returns this pointer array, or
836
  null if the allocation failed.  If n_elements is zero and chunks is
837
  null, it returns a chunk representing an array with zero elements
838
  (which should be freed if not wanted).
839
 
840
  Each element must be individually freed when it is no longer
841
  needed. If you'd like to instead be able to free all at once, you
842
  should instead use a single regular malloc, and assign pointers at
843
  particular offsets in the aggregate space. (In this case though, you
844
  cannot independently free elements.)
845
 
846
  independent_comalloc differs from independent_calloc in that each
847
  element may have a different size, and also that it does not
848
  automatically clear elements.
849
 
850
  independent_comalloc can be used to speed up allocation in cases
851
  where several structs or objects must always be allocated at the
852
  same time.  For example:
853
 
854
  struct Head { ... }
855
  struct Foot { ... }
856
 
857
  void send_message(char* msg) {
858
    int msglen = strlen(msg);
859
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
860
    void* chunks[3];
861
    if (independent_comalloc(3, sizes, chunks) == 0)
862
      die();
863
    struct Head* head = (struct Head*)(chunks[0]);
864
    char*        body = (char*)(chunks[1]);
865
    struct Foot* foot = (struct Foot*)(chunks[2]);
866
    // ...
867
  }
868
 
869
  In general though, independent_comalloc is worth using only for
870
  larger values of n_elements. For small values, you probably won't
871
  detect enough difference from series of malloc calls to bother.
872
 
873
  Overuse of independent_comalloc can increase overall memory usage,
874
  since it cannot reuse existing noncontiguous small chunks that
875
  might be available for some of the elements.
876
*/
877
void** dlindependent_comalloc(size_t, size_t*, void**);
878
 
879
 
880
/*
881
  pvalloc(size_t n);
882
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
883
  round up n to nearest pagesize.
884
 */
885
void*  dlpvalloc(size_t);
886
 
887
/*
888
  malloc_trim(size_t pad);
889
 
890
  If possible, gives memory back to the system (via negative arguments
891
  to sbrk) if there is unused memory at the `high' end of the malloc
892
  pool or in unused MMAP segments. You can call this after freeing
893
  large blocks of memory to potentially reduce the system-level memory
894
  requirements of a program. However, it cannot guarantee to reduce
895
  memory. Under some allocation patterns, some large free blocks of
896
  memory will be locked between two used chunks, so they cannot be
897
  given back to the system.
898
 
899
  The `pad' argument to malloc_trim represents the amount of free
900
  trailing space to leave untrimmed. If this argument is zero, only
901
  the minimum amount of memory to maintain internal data structures
902
  will be left. Non-zero arguments can be supplied to maintain enough
903
  trailing space to service future expected allocations without having
904
  to re-obtain memory from the system.
905
 
906
  Malloc_trim returns 1 if it actually released any memory, else 0.
907
*/
908
int  dlmalloc_trim(size_t);
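
/*
  For example (illustrative; big_buffer is a hypothetical name): after
  releasing a large working set, return unused top-of-heap memory to the
  system while keeping about 1MB of slack for upcoming allocations:

    free(big_buffer);
    if (malloc_trim(1024 * 1024))
      ;  // some memory was actually released
*/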
909
 
910
/*
911
  malloc_usable_size(void* p);
912
 
913
  Returns the number of bytes you can actually use in
914
  an allocated chunk, which may be more than you requested (although
915
  often not) due to alignment and minimum size constraints.
916
  You can use this many bytes without worrying about
917
  overwriting other allocated objects. This is not a particularly great
918
  programming practice. malloc_usable_size can be more useful in
919
  debugging and assertions, for example:
920
 
921
  p = malloc(n);
922
  assert(malloc_usable_size(p) >= 256);
923
*/
924
size_t dlmalloc_usable_size(void*);
925
 
926
/*
927
  malloc_stats();
928
  Prints on stderr the amount of space obtained from the system (both
929
  via sbrk and mmap), the maximum amount (which may be more than
930
  current if malloc_trim and/or munmap got called), and the current
931
  number of bytes allocated via malloc (or realloc, etc) but not yet
932
  freed. Note that this is the number of bytes allocated, not the
933
  number requested. It will be larger than the number requested
934
  because of alignment and bookkeeping overhead. Because it includes
935
  alignment wastage as being in use, this figure may be greater than
936
  zero even when no user-level chunks are allocated.
937
 
938
  The reported current and maximum system memory can be inaccurate if
939
  a program makes other calls to system memory allocation functions
940
  (normally sbrk) outside of malloc.
941
 
942
  malloc_stats prints only the most commonly interesting statistics.
943
  More information can be obtained by calling mallinfo.
944
*/
945
void  dlmalloc_stats(void);
946
 
947
#endif /* ONLY_MSPACES */
948
 
949
#if MSPACES
950
#endif /* MSPACES */
951
 
952
#ifdef __cplusplus
953
};  /* end of extern "C" */
954
#endif /* __cplusplus */
955
 
956
/*
957
  ========================================================================
958
  To make a fully customizable malloc.h header file, cut everything
959
  above this line, put into file malloc.h, edit to suit, and #include it
960
  on the next line, as well as in programs that use this malloc.
961
  ========================================================================
962
*/
963
 
964
/* #include "malloc.h" */
965
 
966
/*------------------------------ internal #includes ---------------------- */
967
 
968
#ifdef WIN32
969
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
970
#endif /* WIN32 */
971
 
972
//#include <stdio.h>      /* for printing in malloc_stats */
973
 
974
#if 0
975
 
976
#ifndef LACKS_ERRNO_H
977
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
978
#endif /* LACKS_ERRNO_H */
979
#if FOOTERS
980
#include <time.h>        /* for magic initialization */
981
#endif /* FOOTERS */
982
#ifndef LACKS_STDLIB_H
983
#include <stdlib.h>      /* for abort() */
984
#endif /* LACKS_STDLIB_H */
985
#ifdef DEBUG
986
#if ABORT_ON_ASSERT_FAILURE
987
#define assert(x) if(!(x)) ABORT
988
#else /* ABORT_ON_ASSERT_FAILURE */
989
#include <assert.h>
990
#endif /* ABORT_ON_ASSERT_FAILURE */
991
#else  /* DEBUG */
992
#define assert(x)
993
#endif /* DEBUG */
994
#ifndef LACKS_STRING_H
995
#include <string.h>      /* for memset etc */
996
#endif  /* LACKS_STRING_H */
997
#if USE_BUILTIN_FFS
998
#ifndef LACKS_STRINGS_H
999
#include <strings.h>     /* for ffs */
1000
#endif /* LACKS_STRINGS_H */
1001
#endif /* USE_BUILTIN_FFS */
1002
#if HAVE_MMAP
1003
#ifndef LACKS_SYS_MMAN_H
1004
#include <sys/mman.h>    /* for mmap */
1005
#endif /* LACKS_SYS_MMAN_H */
1006
#ifndef LACKS_FCNTL_H
1007
#include <fcntl.h>
1008
#endif /* LACKS_FCNTL_H */
1009
#endif /* HAVE_MMAP */
1010
#if HAVE_MORECORE
1011
#endif /* HAVE_MORECORE */
1012
 
1013
#endif
1014
 
1015
#define assert(x)
1016
 
1017
#ifndef WIN32
1018
#endif
1019
 
1020
/* ------------------- size_t and alignment properties -------------------- */
1021
 
1022
/* The byte and bit size of a size_t */
1023
#define SIZE_T_SIZE         (sizeof(size_t))
1024
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1025
 
1026
/* Some constants coerced to size_t */
1027
/* Annoying but necessary to avoid errors on some platforms */
1028
#define SIZE_T_ZERO         ((size_t)0)
1029
#define SIZE_T_ONE          ((size_t)1)
1030
#define SIZE_T_TWO          ((size_t)2)
1031
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1032
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1033
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1034
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1035
 
1036
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1037
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1038
 
1039
/* True if address a has acceptable alignment */
1040
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1041
 
1042
/* the number of bytes to offset an address to align it */
1043
#define align_offset(A)\
1044
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1045
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
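
/*
   Worked example with the default 8-byte alignment (CHUNK_ALIGN_MASK == 7):
   an address ending in 0x0D is 5 bytes past an 8-byte boundary, so
   align_offset yields (8 - 5) & 7 == 3, and address + 3 is aligned.
*/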
1046
 
1047
/* -------------------------- MMAP preliminaries ------------------------- */
1048
 
1049
/*
1050
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1051
   checks to fail so compiler optimizer can delete code rather than
1052
   using so many "#if"s.
1053
*/
1054
 
1055
 
1056
/* MORECORE and MMAP must return MFAIL on failure */
1057
#define MFAIL                ((void*)(MAX_SIZE_T))
1058
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1059
 
1060
#if !HAVE_MMAP
1061
#else /* HAVE_MMAP */
1062
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
1063
#define USE_MMAP_BIT         (SIZE_T_ONE)
1064
 
1065
 
1066
 
1067
#ifdef KOLIBRI
1068
 
1069
/* MMAP emulation via the KolibriOS kernel allocator */
1070
static void* win32mmap(size_t size) {
1071
  void* ptr = KernelAlloc(size);
1072
  return (ptr != 0)? ptr: MFAIL;
1073
}
1074
 
1075
/* Direct MMAP also goes through the kernel allocator */
1076
static void* win32direct_mmap(size_t size) {
1077
  void* ptr = KernelAlloc(size);
1078
  return (ptr != 0)? ptr: MFAIL;
1079
}
1080
 
1081
/* This function supports releasing coalesced segments */
1082
static int win32munmap(void* ptr, size_t size) {
1083
  KernelFree(ptr);
1084
  return 0;
1085
}
1086
 
1087
#else
1088
 
1089
static void* win32mmap(size_t size) {
1090
    void* ptr = mem_alloc(size, 3);
1091
    return (ptr != 0)? ptr: MFAIL;
1092
}
1093
 
1094
/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1095
static void* win32direct_mmap(size_t size) {
1096
    void* ptr = mem_alloc(size, 3);
1097
    return (ptr != 0)? ptr: MFAIL;
1098
}
1099
 
1100
/* This function supports releasing coalesced segments */
1101
static int win32munmap(void* ptr, size_t size) {
1102
    mem_free(ptr);
1103
    return 0;
1104
}
1105
 
1106
 
1107
#endif
1108
 
1109
#define CALL_MMAP(s)         win32mmap(s)
1110
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1111
#define DIRECT_MMAP(s)       win32direct_mmap(s)
1112
#endif /* HAVE_MMAP */
1113
 
1114
#if HAVE_MMAP && HAVE_MREMAP
1115
#else  /* HAVE_MMAP && HAVE_MREMAP */
1116
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1117
#endif /* HAVE_MMAP && HAVE_MREMAP */
1118
 
1119
#if HAVE_MORECORE
1120
#else  /* HAVE_MORECORE */
1121
#define CALL_MORECORE(S)     MFAIL
1122
#endif /* HAVE_MORECORE */
1123
 
1124
/* mstate bit set if contiguous morecore disabled or failed */
1125
#define USE_NONCONTIGUOUS_BIT (4U)
1126
 
1127
/* segment bit set in create_mspace_with_base */
1128
#define EXTERN_BIT            (8U)
1129
 
1130
 
1131
/* --------------------------- Lock preliminaries ------------------------ */
1132
 
1133
#if USE_LOCKS
1134
#else  /* USE_LOCKS */
1135
#define USE_LOCK_BIT               (0U)
1136
#define INITIAL_LOCK(l)
1137
#endif /* USE_LOCKS */
1138
 
1139
#if USE_LOCKS && HAVE_MORECORE
1140
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1141
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1142
#else /* USE_LOCKS && HAVE_MORECORE */
1143
#define ACQUIRE_MORECORE_LOCK()
1144
#define RELEASE_MORECORE_LOCK()
1145
#endif /* USE_LOCKS && HAVE_MORECORE */
1146
 
1147
#if USE_LOCKS
1148
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1149
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1150
#else  /* USE_LOCKS */
1151
#define ACQUIRE_MAGIC_INIT_LOCK()
1152
#define RELEASE_MAGIC_INIT_LOCK()
1153
#endif /* USE_LOCKS */
1154
 
1155
 
1156
/* -----------------------  Chunk representations ------------------------ */
1157
 
1158
/*
1159
  (The following includes lightly edited explanations by Colin Plumb.)
1160
 
1161
  The malloc_chunk declaration below is misleading (but accurate and
1162
  necessary).  It declares a "view" into memory allowing access to
1163
  necessary fields at known offsets from a given base.
1164
 
1165
  Chunks of memory are maintained using a `boundary tag' method as
1166
  originally described by Knuth.  (See the paper by Paul Wilson
1167
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1168
  techniques.)  Sizes of free chunks are stored both in the front of
1169
  each chunk and at the end.  This makes consolidating fragmented
1170
  chunks into bigger chunks fast.  The head fields also hold bits
1171
  representing whether chunks are free or in use.
1172
 
1173
  Here are some pictures to make it clearer.  They are "exploded" to
1174
  show that the state of a chunk can be thought of as extending from
1175
  the high 31 bits of the head field of its header through the
1176
  prev_foot and PINUSE_BIT bit of the following chunk header.
1177
 
1178
  A chunk that's in use looks like:
1179
 
1180
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1181
           | Size of previous chunk (if P = 1)                             |
1182
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1183
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1184
         | Size of this chunk                                         1| +-+
1185
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1186
         |                                                               |
1187
         +-                                                             -+
1188
         |                                                               |
1189
         +-                                                             -+
1190
         |                                                               :
1191
         +-      size - sizeof(size_t) available payload bytes          -+
1192
         :                                                               |
1193
 chunk-> +-                                                             -+
1194
         |                                                               |
1195
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1196
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1197
       | Size of next chunk (may or may not be in use)               | +-+
1198
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1199
 
1200
    And if it's free, it looks like this:
1201
 
1202
   chunk-> +-                                                             -+
1203
           | User payload (must be in use, or we would have merged!)       |
1204
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1205
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1206
         | Size of this chunk                                         0| +-+
1207
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1208
         | Next pointer                                                  |
1209
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1210
         | Prev pointer                                                  |
1211
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1212
         |                                                               :
1213
         +-      size - sizeof(struct chunk) unused bytes               -+
1214
         :                                                               |
1215
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1216
         | Size of this chunk                                            |
1217
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1218
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1219
       | Size of next chunk (must be in use, or we would have merged)| +-+
1220
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1221
       |                                                               :
1222
       +- User payload                                                -+
1223
       :                                                               |
1224
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1225
                                                                     |0|
1226
                                                                     +-+
1227
  Note that since we always merge adjacent free chunks, the chunks
1228
  adjacent to a free chunk must be in use.
1229
 
1230
  Given a pointer to a chunk (which can be derived trivially from the
1231
  payload pointer) we can, in O(1) time, find out whether the adjacent
1232
  chunks are free, and if so, unlink them from the lists that they
1233
  are on and merge them with the current chunk.
1234
 
1235
  Chunks always begin on even word boundaries, so the mem portion
1236
  (which is returned to the user) is also on an even word boundary, and
1237
  thus at least double-word aligned.
1238
 
1239
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1240
  chunk size (which is always a multiple of two words), is an in-use
1241
  bit for the *previous* chunk.  If that bit is *clear*, then the
1242
  word before the current chunk size contains the previous chunk
1243
  size, and can be used to find the front of the previous chunk.
1244
  The very first chunk allocated always has this bit set, preventing
1245
  access to non-existent (or non-owned) memory. If pinuse is set for
1246
  any given chunk, then you CANNOT determine the size of the
1247
  previous chunk, and might even get a memory addressing fault when
1248
  trying to do so.
1249
 
1250
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1251
  the chunk size redundantly records whether the current chunk is
1252
  inuse. This redundancy enables usage checks within free and realloc,
1253
  and reduces indirection when freeing and consolidating chunks.
1254
 
1255
  Each freshly allocated chunk must have both cinuse and pinuse set.
1256
  That is, each allocated chunk borders either a previously allocated
1257
  and still in-use chunk, or the base of its memory arena. This is
1258
  ensured by making all allocations from the `lowest' part of any
1259
  found chunk.  Further, no free chunk physically borders another one,
1260
  so each free chunk is known to be preceded and followed by either
1261
  inuse chunks or the ends of memory.
1262
 
1263
  Note that the `foot' of the current chunk is actually represented
1264
  as the prev_foot of the NEXT chunk. This makes it easier to
1265
  deal with alignments etc but can be very confusing when trying
1266
  to extend or adapt this code.
1267
 
1268
  The exceptions to all this are
1269
 
1270
     1. The special chunk `top' is the top-most available chunk (i.e.,
1271
        the one bordering the end of available memory). It is treated
1272
        specially.  Top is never included in any bin, is used only if
1273
        no other chunk is available, and is released back to the
1274
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1275
        the top chunk is treated as larger (and thus less well
1276
        fitting) than any other available chunk.  The top chunk
1277
        doesn't update its trailing size field since there is no next
1278
        contiguous chunk that would have to index off it. However,
1279
        space is still allocated for it (TOP_FOOT_SIZE) to enable
1280
        separation or merging when space is extended.
1281
 
1282
     2. Chunks allocated via mmap, which have the lowest-order bit
1283
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1284
        PINUSE_BIT in their head fields.  Because they are allocated
1285
        one-by-one, each must carry its own prev_foot field, which is
1286
        also used to hold the offset this chunk has within its mmapped
1287
        region, which is needed to preserve alignment. Each mmapped
1288
        chunk is trailed by the first two fields of a fake next-chunk
1289
        for sake of usage checks.
1290
 
1291
*/
1292
 
1293
struct malloc_chunk {
1294
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
1295
  size_t               head;       /* Size and inuse bits. */
1296
  struct malloc_chunk* fd;         /* double links -- used only if free. */
1297
  struct malloc_chunk* bk;
1298
};
1299
 
1300
typedef struct malloc_chunk  mchunk;
1301
typedef struct malloc_chunk* mchunkptr;
1302
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1303
typedef unsigned int bindex_t;         /* Described below */
1304
typedef unsigned int binmap_t;         /* Described below */
1305
typedef unsigned int flag_t;           /* The type of various bit flag sets */
1306
 
1307
/* ------------------- Chunks sizes and alignments ----------------------- */
1308
 
1309
#define MCHUNK_SIZE         (sizeof(mchunk))
1310
 
1311
#if FOOTERS
1312
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1313
#else /* FOOTERS */
1314
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1315
#endif /* FOOTERS */
1316
 
1317
/* MMapped chunks need a second word of overhead ... */
1318
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1319
/* ... and additional padding for fake next-chunk at foot */
1320
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1321
 
1322
/* The smallest size we can malloc is an aligned minimal chunk */
1323
#define MIN_CHUNK_SIZE\
1324
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1325
 
1326
/* conversion from malloc headers to user pointers, and back */
1327
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1328
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1329
/* chunk associated with aligned address A */
1330
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1331
 
1332
/* Bounds on request (not chunk) sizes. */
1333
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1334
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1335
 
1336
/* pad request bytes into a usable size */
1337
#define pad_request(req) \
1338
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1339
 
1340
/* pad request, checking for minimum (but not maximum) */
1341
#define request2size(req) \
1342
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
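
/*
  Illustrative sketch (not part of dlmalloc; disabled by default): shows
  that a user request is padded up to an aligned chunk size of at least
  MIN_CHUNK_SIZE, and that chunk2mem/mem2chunk are exact inverses.  The
  function name example_request_padding is hypothetical.
*/
#if 0
static void example_request_padding(void) {
  size_t req = 13;                       /* arbitrary user request */
  size_t nb  = request2size(req);        /* padded, aligned chunk size */
  char   buf[MIN_CHUNK_SIZE];            /* stand-in chunk storage */
  mchunkptr p = (mchunkptr)buf;
  assert(nb >= MIN_CHUNK_SIZE);          /* never below the minimum chunk */
  assert((nb & CHUNK_ALIGN_MASK) == 0);  /* always alignment-granular */
  assert(mem2chunk(chunk2mem(p)) == p);  /* header <-> payload round trip */
}
#endif /* 0 */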
1343
 
1344
 
1345
/* ------------------ Operations on head and foot fields ----------------- */
1346
 
1347
/*
1348
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
  use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set, and its remaining bits hold the offset of the
  chunk within its mmapped region.
1353
*/
1354
 
1355
#define PINUSE_BIT          (SIZE_T_ONE)
1356
#define CINUSE_BIT          (SIZE_T_TWO)
1357
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1358
 
1359
/* Head value for fenceposts */
1360
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1361
 
1362
/* extraction of fields from head words */
1363
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1364
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1365
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1366
 
1367
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1368
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1369
 
1370
/* Treat space at ptr +/- offset as a chunk */
1371
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1372
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1373
 
1374
/* Ptr to next or previous physical malloc_chunk. */
1375
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1376
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1377
 
1378
/* extract next chunk's pinuse bit */
1379
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
1380
 
1381
/* Get/set size at footer */
1382
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1383
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1384
 
1385
/* Set size, pinuse bit, and foot */
1386
#define set_size_and_pinuse_of_free_chunk(p, s)\
1387
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1388
 
1389
/* Set size, pinuse bit, foot, and clear next pinuse */
1390
#define set_free_with_pinuse(p, s, n)\
1391
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1392
 
1393
#define is_mmapped(p)\
1394
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1395
 
1396
/* Get the internal overhead associated with chunk p */
1397
#define overhead_for(p)\
1398
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1399
 
1400
/* Return true if malloced space is not necessarily cleared */
1401
#if MMAP_CLEARS
1402
#define calloc_must_clear(p) (!is_mmapped(p))
1403
#else /* MMAP_CLEARS */
1404
#define calloc_must_clear(p) (1)
1405
#endif /* MMAP_CLEARS */
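
/*
  Illustrative sketch (disabled; not used by the allocator): decodes the
  boundary-tag bits defined above for some valid, non-mmapped chunk p.
  The next physical chunk follows from p's own size; the previous one is
  reachable only while p's pinuse bit is clear.  The function name
  example_boundary_tags is hypothetical.
*/
#if 0
static void example_boundary_tags(mchunkptr p) {
  size_t    sz = chunksize(p);              /* size with flag bits masked off */
  mchunkptr nx = next_chunk(p);             /* physically adjacent next chunk */
  if (cinuse(p))                            /* C bit set: this chunk in use,    */
    assert(next_pinuse(p));                 /* so the next chunk's P bit is set */
  if (!pinuse(p))                           /* P bit clear: previous is free;   */
    assert(next_chunk(prev_chunk(p)) == p); /* its foot lets us step back to p  */
  assert((char*)nx == (char*)p + sz);
}
#endif /* 0 */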
1406
 
1407
/* ---------------------- Overlaid data structures ----------------------- */
1408
 
1409
/*
1410
  When chunks are not in use, they are treated as nodes of either
1411
  lists or trees.
1412
 
1413
  "Small"  chunks are stored in circular doubly-linked lists, and look
1414
  like this:
1415
 
1416
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1417
            |             Size of previous chunk                            |
1418
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1419
    `head:' |             Size of chunk, in bytes                         |P|
1420
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1421
            |             Forward pointer to next chunk in list             |
1422
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1423
            |             Back pointer to previous chunk in list            |
1424
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1425
            |             Unused space (may be 0 bytes long)                .
1426
            .                                                               .
1427
            .                                                               |
1428
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1429
    `foot:' |             Size of chunk, in bytes                           |
1430
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1431
 
1432
  Larger chunks are kept in a form of bitwise digital trees (aka
1433
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1434
  free chunks greater than 256 bytes, their size doesn't impose any
1435
  constraints on user chunk sizes.  Each node looks like:
1436
 
1437
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1438
            |             Size of previous chunk                            |
1439
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1440
    `head:' |             Size of chunk, in bytes                         |P|
1441
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1442
            |             Forward pointer to next chunk of same size        |
1443
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1444
            |             Back pointer to previous chunk of same size       |
1445
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1446
            |             Pointer to left child (child[0])                  |
1447
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1448
            |             Pointer to right child (child[1])                 |
1449
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1450
            |             Pointer to parent                                 |
1451
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1452
            |             bin index of this chunk                           |
1453
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1454
            |             Unused space                                      .
1455
            .                                                               |
1456
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1457
    `foot:' |             Size of chunk, in bytes                           |
1458
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1459
 
1460
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1461
  of the same size are arranged in a circularly-linked list, with only
1462
  the oldest chunk (the next to be used, in our FIFO ordering)
1463
  actually in the tree.  (Tree members are distinguished by a non-null
1464
  parent pointer.)  If a chunk with the same size as an existing node
1465
  is inserted, it is linked off the existing node using pointers that
1466
  work in the same way as fd/bk pointers of small chunks.
1467
 
1468
  Each tree contains a power of 2 sized range of chunk sizes (the
1469
  smallest is 0x100 <= x < 0x180), which is divided in half at each
1470
  tree level, with the chunks in the smaller half of the range (0x100
1471
  <= x < 0x140 for the top node) in the left subtree and the larger
1472
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1473
  done by inspecting individual bits.
1474
 
1475
  Using these rules, each node's left subtree contains all smaller
1476
  sizes than its right subtree.  However, the node at the root of each
1477
  subtree has no particular ordering relationship to either.  (The
1478
  dividing line between the subtree sizes is based on trie relation.)
1479
  If we remove the last chunk of a given size from the interior of the
1480
  tree, we need to replace it with a leaf node.  The tree ordering
1481
  rules permit a node to be replaced by any leaf below it.
1482
 
1483
  The smallest chunk in a tree (a common operation in a best-fit
1484
  allocator) can be found by walking a path to the leftmost leaf in
1485
  the tree.  Unlike a usual binary tree, where we follow left child
1486
  pointers until we reach a null, here we follow the right child
1487
  pointer any time the left one is null, until we reach a leaf with
1488
  both child pointers null. The smallest chunk in the tree will be
1489
  somewhere along that path.
1490
 
1491
  The worst case number of steps to add, find, or remove a node is
1492
  bounded by the number of bits differentiating chunks within
1493
  bins. Under current bin calculations, this ranges from 6 up to 21
1494
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1495
  is of course much better.
1496
*/
1497
 
1498
struct malloc_tree_chunk {
1499
  /* The first four fields must be compatible with malloc_chunk */
1500
  size_t                    prev_foot;
1501
  size_t                    head;
1502
  struct malloc_tree_chunk* fd;
1503
  struct malloc_tree_chunk* bk;
1504
 
1505
  struct malloc_tree_chunk* child[2];
1506
  struct malloc_tree_chunk* parent;
1507
  bindex_t                  index;
1508
};
1509
 
1510
typedef struct malloc_tree_chunk  tchunk;
1511
typedef struct malloc_tree_chunk* tchunkptr;
1512
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1513
 
1514
/* A little helper macro for trees */
1515
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
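
/*
  Illustrative sketch (disabled): the leftmost-leaf walk described above.
  Descending via leftmost_child from a treebin root visits a path that is
  guaranteed to contain the smallest chunk in the tree; the loop tracks
  the smallest size seen along it.  example_smallest_in_tree is a
  hypothetical name (the real lookup lives in the malloc fast paths).
*/
#if 0
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while (t != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;                  /* remember the smallest size on the path */
    t = leftmost_child(t);       /* left child if present, else right child */
  }
  return best;
}
#endif /* 0 */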
1516
 
1517
/* ----------------------------- Segments -------------------------------- */
1518
 
1519
/*
1520
  Each malloc space may include non-contiguous segments, held in a
1521
  list headed by an embedded malloc_segment record representing the
1522
  top-most space. Segments also include flags holding properties of
1523
  the space. Large chunks that are directly allocated by mmap are not
1524
  included in this list. They are instead independently created and
1525
  destroyed without otherwise keeping track of them.
1526
 
1527
  Segment management mainly comes into play for spaces allocated by
1528
  MMAP.  Any call to MMAP might or might not return memory that is
1529
  adjacent to an existing segment.  MORECORE normally contiguously
1530
  extends the current space, so this space is almost always adjacent,
1531
  which is simpler and faster to deal with. (This is why MORECORE is
1532
  used preferentially to MMAP when both are available -- see
1533
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1534
  hinting mechanisms (inconsistently) supported in various
1535
  implementations of unix mmap, or distinguish reserving from
1536
  committing memory. Instead, we just ask for space, and exploit
1537
  contiguity when we get it.  It is probably possible to do
1538
  better than this on some systems, but no general scheme seems
1539
  to be significantly better.
1540
 
1541
  Management entails a simpler variant of the consolidation scheme
1542
  used for chunks to reduce fragmentation -- new adjacent memory is
1543
  normally prepended or appended to an existing segment. However,
1544
  there are limitations compared to chunk consolidation that mostly
1545
  reflect the fact that segment processing is relatively infrequent
1546
  (occurring only when getting memory from system) and that we
1547
  don't expect to have huge numbers of segments:
1548
 
1549
  * Segments are not indexed, so traversal requires linear scans.  (It
1550
    would be possible to index these, but is not worth the extra
1551
    overhead and complexity for most programs on most platforms.)
1552
  * New segments are only appended to old ones when holding top-most
1553
    memory; if they cannot be prepended to others, they are held in
1554
    different segments.
1555
 
1556
  Except for the top-most segment of an mstate, each segment record
1557
  is kept at the tail of its segment. Segments are added by pushing
1558
  segment records onto the list headed by &mstate.seg for the
1559
  containing mstate.
1560
 
1561
  Segment flags control allocation/merge/deallocation policies:
1562
  * If EXTERN_BIT set, then we did not allocate this segment,
1563
    and so should not try to deallocate or merge with others.
1564
    (This currently holds only for the initial segment passed
1565
    into create_mspace_with_base.)
1566
  * If IS_MMAPPED_BIT set, the segment may be merged with
1567
    other surrounding mmapped segments and trimmed/de-allocated
1568
    using munmap.
1569
  * If neither bit is set, then the segment was obtained using
1570
    MORECORE so can be merged with surrounding MORECORE'd segments
1571
    and deallocated/trimmed using MORECORE with negative arguments.
1572
*/
1573
 
1574
struct malloc_segment {
1575
  char*        base;             /* base address */
1576
  size_t       size;             /* allocated size */
1577
  struct malloc_segment* next;   /* ptr to next segment */
1578
  flag_t       sflags;           /* mmap and extern flag */
1579
};
1580
 
1581
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1582
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1583
 
1584
typedef struct malloc_segment  msegment;
1585
typedef struct malloc_segment* msegmentptr;
1586
 
1587
/* ---------------------------- malloc_state ----------------------------- */
1588
 
1589
/*
1590
   A malloc_state holds all of the bookkeeping for a space.
1591
   The main fields are:
1592
 
1593
  Top
1594
    The topmost chunk of the currently active segment. Its size is
1595
    cached in topsize.  The actual size of topmost space is
1596
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1597
    fenceposts and segment records if necessary when getting more
1598
    space from the system.  The size at which to autotrim top is
1599
    cached from mparams in trim_check, except that it is disabled if
1600
    an autotrim fails.
1601
 
1602
  Designated victim (dv)
1603
    This is the preferred chunk for servicing small requests that
1604
    don't have exact fits.  It is normally the chunk split off most
1605
    recently to service another small request.  Its size is cached in
1606
    dvsize. The link fields of this chunk are not maintained since it
1607
    is not kept in a bin.
1608
 
1609
  SmallBins
1610
    An array of bin headers for free chunks.  These bins hold chunks
1611
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1612
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1613
    use in double-linked lists, each bin header acts as a malloc_chunk
1614
    pointing to the real first node, if it exists (else pointing to
1615
    itself).  This avoids special-casing for headers.  But to avoid
1616
    waste, we allocate only the fd/bk pointers of bins, and then use
1617
    repositioning tricks to treat these as the fields of a chunk.
1618
 
1619
  TreeBins
1620
    Treebins are pointers to the roots of trees holding a range of
1621
    sizes. There are 2 equally spaced treebins for each power of two
1622
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1623
    larger.
1624
 
1625
  Bin maps
1626
    There is one bit map for small bins ("smallmap") and one for
1627
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1628
    clears the bit when empty.  Bit operations are then used to avoid
1629
    bin-by-bin searching -- nearly all "search" is done without ever
1630
    looking at bins that won't be selected.  The bit maps
1631
    conservatively use 32 bits per map word, even on a 64-bit system.
1632
    For a good description of some of the bit-based techniques used
1633
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1634
    supplement at http://hackersdelight.org/). Many of these are
1635
    intended to reduce the branchiness of paths through malloc etc, as
1636
    well as to reduce the number of memory locations read or written.
1637
 
1638
  Segments
1639
    A list of segments headed by an embedded malloc_segment record
1640
    representing the initial space.
1641
 
1642
  Address check support
1643
    The least_addr field is the least address ever obtained from
1644
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1645
    than this are trapped (unless INSECURE is defined).
1646
 
1647
  Magic tag
1648
    A cross-check field that should always hold same value as mparams.magic.
1649
 
1650
  Flags
1651
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1652
 
1653
  Statistics
1654
    Each space keeps track of current and maximum system memory
1655
    obtained via MORECORE or MMAP.
1656
 
1657
  Locking
1658
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1659
    around every public call using this mspace.
1660
*/
1661
 
1662
/* Bin types, widths and sizes */
1663
#define NSMALLBINS        (32U)
1664
#define NTREEBINS         (32U)
1665
#define SMALLBIN_SHIFT    (3U)
1666
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1667
#define TREEBIN_SHIFT     (8U)
1668
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1669
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1670
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1671
 
1672
struct malloc_state {
1673
  binmap_t   smallmap;
1674
  binmap_t   treemap;
1675
  size_t     dvsize;
1676
  size_t     topsize;
1677
  char*      least_addr;
1678
  mchunkptr  dv;
1679
  mchunkptr  top;
1680
  size_t     trim_check;
1681
  size_t     magic;
1682
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1683
  tbinptr    treebins[NTREEBINS];
1684
  size_t     footprint;
1685
  size_t     max_footprint;
1686
  flag_t     mflags;
1687
#if USE_LOCKS
1688
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1689
#endif /* USE_LOCKS */
1690
  msegment   seg;
1691
};
1692
 
1693
typedef struct malloc_state*    mstate;
1694
 
1695
/* ------------- Global malloc_state and malloc_params ------------------- */
1696
 
1697
/*
1698
  malloc_params holds global properties, including those that can be
1699
  dynamically set using mallopt. There is a single instance, mparams,
1700
  initialized in init_mparams.
1701
*/
1702
 
1703
struct malloc_params {
1704
  size_t magic;
1705
  size_t page_size;
1706
  size_t granularity;
1707
  size_t mmap_threshold;
1708
  size_t trim_threshold;
1709
  flag_t default_mflags;
1710
};
1711
 
1712
static struct malloc_params mparams;
1713
 
1714
/* The global malloc_state used for all non-"mspace" calls */
1715
static struct malloc_state _gm_;
1716
#define gm                 (&_gm_)
1717
#define is_global(M)       ((M) == &_gm_)
1718
#define is_initialized(M)  ((M)->top != 0)
1719
 
1720
/* -------------------------- system alloc setup ------------------------- */
1721
 
1722
/* Operations on mflags */
1723
 
1724
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1725
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1726
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1727
 
1728
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1729
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1730
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1731
 
1732
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1733
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1734
 
1735
#define set_lock(M,L)\
1736
 ((M)->mflags = (L)?\
1737
  ((M)->mflags | USE_LOCK_BIT) :\
1738
  ((M)->mflags & ~USE_LOCK_BIT))
1739
 
1740
/* page-align a size */
1741
#define page_align(S)\
1742
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1743
 
1744
/* granularity-align a size */
1745
#define granularity_align(S)\
1746
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1747
 
1748
#define is_page_aligned(S)\
1749
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1750
#define is_granularity_aligned(S)\
1751
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
1752
 
1753
/*  True if segment S holds address A */
1754
#define segment_holds(S, A)\
1755
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1756
 
1757
/* Return segment holding given address */
1758
static msegmentptr segment_holding(mstate m, char* addr) {
1759
  msegmentptr sp = &m->seg;
1760
  for (;;) {
1761
    if (addr >= sp->base && addr < sp->base + sp->size)
1762
      return sp;
1763
    if ((sp = sp->next) == 0)
1764
      return 0;
1765
  }
1766
}
1767
 
1768
/* Return true if segment contains a segment link */
1769
static int has_segment_link(mstate m, msegmentptr ss) {
1770
  msegmentptr sp = &m->seg;
1771
  for (;;) {
1772
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1773
      return 1;
1774
    if ((sp = sp->next) == 0)
1775
      return 0;
1776
  }
1777
}
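
/*
  Illustrative sketch (disabled): segment_holding answers "which segment
  owns this address?", which is how the allocator can tell whether an
  address came from this malloc space at all.  example_owns_address is a
  hypothetical helper name.
*/
#if 0
static int example_owns_address(void* mem) {
  /* nonzero iff some segment of the global state contains the address */
  return segment_holding(gm, (char*)mem) != 0;
}
#endif /* 0 */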
1778
 
1779
#ifndef MORECORE_CANNOT_TRIM
1780
#define should_trim(M,s)  ((s) > (M)->trim_check)
1781
#else  /* MORECORE_CANNOT_TRIM */
1782
#define should_trim(M,s)  (0)
1783
#endif /* MORECORE_CANNOT_TRIM */
1784
 
1785
/*
1786
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1787
  that may be needed to place segment records and fenceposts when new
1788
  noncontiguous segments are added.
1789
*/
1790
#define TOP_FOOT_SIZE\
1791
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1792
 
1793
 
1794
/* -------------------------------  Hooks -------------------------------- */
1795
 
1796
/*
1797
  PREACTION should be defined to return 0 on success, and nonzero on
1798
  failure. If you are not using locking, you can redefine these to do
1799
  anything you like.
1800
*/
1801
 
1802
#if USE_LOCKS
1803
 
1804
/* Ensure locks are initialized */
1805
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1806
 
1807
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1808
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1809
#else /* USE_LOCKS */
1810
 
1811
#ifndef PREACTION
1812
#define PREACTION(M) (0)
1813
#endif  /* PREACTION */
1814
 
1815
#ifndef POSTACTION
1816
#define POSTACTION(M)
1817
#endif  /* POSTACTION */
1818
 
1819
#endif /* USE_LOCKS */
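
/*
  Illustrative sketch (disabled): the pattern the public entry points use
  around these hooks -- PREACTION returns 0 once any required lock is
  held, and POSTACTION releases it on the way out.  example_locked_op is
  a hypothetical name.
*/
#if 0
static void example_locked_op(mstate m) {
  if (!PREACTION(m)) {       /* 0 means the lock (if any) was acquired */
    /* ... operate on m while holding the lock ... */
    POSTACTION(m);           /* release on every exit path after entry */
  }
}
#endif /* 0 */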
1820
 
1821
/*
1822
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1823
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1824
  reallocs. The argument p is an address that might have triggered the
1825
  fault. It is ignored by the two predefined actions, but might be
1826
  useful in custom actions that try to help diagnose errors.
1827
*/
1828
 
1829
#if PROCEED_ON_ERROR
1830
 
1831
/* A count of the number of corruption errors causing resets */
1832
int malloc_corruption_error_count;
1833
 
1834
/* default corruption action */
1835
static void reset_on_error(mstate m);
1836
 
1837
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1838
#define USAGE_ERROR_ACTION(m, p)
1839
 
1840
#else /* PROCEED_ON_ERROR */
1841
 
1842
#ifndef CORRUPTION_ERROR_ACTION
1843
#define CORRUPTION_ERROR_ACTION(m) ABORT
1844
#endif /* CORRUPTION_ERROR_ACTION */
1845
 
1846
#ifndef USAGE_ERROR_ACTION
1847
#define USAGE_ERROR_ACTION(m,p) ABORT
1848
#endif /* USAGE_ERROR_ACTION */
1849
 
1850
#endif /* PROCEED_ON_ERROR */
1851
 
1852
/* -------------------------- Debugging setup ---------------------------- */
1853
 
1854
#if ! DEBUG
1855
 
1856
#define check_free_chunk(M,P)
1857
#define check_inuse_chunk(M,P)
1858
#define check_malloced_chunk(M,P,N)
1859
#define check_mmapped_chunk(M,P)
1860
#define check_malloc_state(M)
1861
#define check_top_chunk(M,P)
1862
 
1863
#else /* DEBUG */
1864
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1865
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1866
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1867
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1868
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1869
#define check_malloc_state(M)       do_check_malloc_state(M)
1870
 
1871
static void   do_check_any_chunk(mstate m, mchunkptr p);
1872
static void   do_check_top_chunk(mstate m, mchunkptr p);
1873
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1874
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1875
static void   do_check_free_chunk(mstate m, mchunkptr p);
1876
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1877
static void   do_check_tree(mstate m, tchunkptr t);
1878
static void   do_check_treebin(mstate m, bindex_t i);
1879
static void   do_check_smallbin(mstate m, bindex_t i);
1880
static void   do_check_malloc_state(mstate m);
1881
static int    bin_find(mstate m, mchunkptr x);
1882
static size_t traverse_and_check(mstate m);
1883
#endif /* DEBUG */
1884
 
1885
/* ---------------------------- Indexing Bins ---------------------------- */
1886
 
1887
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1888
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1889
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1890
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1891
 
1892
/* addressing by index. See above about smallbin repositioning */
1893
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1894
#define treebin_at(M,i)     (&((M)->treebins[i]))
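
/*
  Illustrative sketch (disabled): small-bin indexing is a pure shift, so
  with the default 8-byte SMALLBIN_WIDTH, bin i holds chunks of exactly
  i*8 bytes.  example_smallbin_indexing is a hypothetical name; 40 is just
  an arbitrary 8-byte-granular chunk size.
*/
#if 0
static void example_smallbin_indexing(void) {
  size_t s = (size_t)40;
  if (is_small(s)) {
    bindex_t i = small_index(s);          /* 40 >> 3 == 5 */
    sbinptr  b = smallbin_at(gm, i);      /* header of the bin for 40-byte chunks */
    assert(small_index2size(i) == s);     /* each small bin holds exactly one size */
    (void)b;
  }
}
#endif /* 0 */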
1895
 
1896
/* assign tree index for size S to variable I */
1897
#if defined(__GNUC__) && defined(i386)
1898
#define compute_tree_index(S, I)\
1899
{\
1900
  size_t X = S >> TREEBIN_SHIFT;\
1901
  if (X == 0)\
1902
    I = 0;\
1903
  else if (X > 0xFFFF)\
1904
    I = NTREEBINS-1;\
1905
  else {\
1906
    unsigned int K;\
1907
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1908
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1909
  }\
1910
}
1911
#else /* GNUC */
1912
#define compute_tree_index(S, I)\
1913
{\
1914
  size_t X = S >> TREEBIN_SHIFT;\
1915
  if (X == 0)\
1916
    I = 0;\
1917
  else if (X > 0xFFFF)\
1918
    I = NTREEBINS-1;\
1919
  else {\
1920
    unsigned int Y = (unsigned int)X;\
1921
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1922
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1923
    N += K;\
1924
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1925
    K = 14 - N + ((Y <<= K) >> 15);\
1926
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1927
  }\
1928
}
1929
#endif /* GNUC */
1930
 
1931
/* Bit representing maximum resolved size in a treebin at i */
1932
#define bit_for_tree_index(i) \
1933
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1934
 
1935
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1936
#define leftshift_for_tree_index(i) \
1937
   ((i == NTREEBINS-1)? 0 : \
1938
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1939
 
1940
/* The size of the smallest chunk held in bin with index i */
1941
#define minsize_for_tree_index(i) \
1942
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1943
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
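
/*
  Illustrative sketch (disabled): computing the treebin index for a large
  chunk size and checking it against that bin's minimum size.  The
  function name example_treebin_index is hypothetical.
*/
#if 0
static void example_treebin_index(size_t s) {
  if (!is_small(s)) {
    bindex_t i;
    compute_tree_index(s, i);               /* writes the bin index into i */
    assert(i < NTREEBINS);
    assert(s >= minsize_for_tree_index(i)); /* s falls in bin i's size range */
  }
}
#endif /* 0 */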
1944
 
1945
 
1946
/* ------------------------ Operations on bin maps ----------------------- */
1947
 
1948
/* bit corresponding to given index */
1949
#define idx2bit(i)              ((binmap_t)(1) << (i))
1950
 
1951
/* Mark/Clear bits with given index */
1952
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1953
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1954
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1955
 
1956
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1957
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1958
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1959
 
1960
/* index corresponding to given bit */
1961
 
1962
#if defined(__GNUC__) && defined(i386)
1963
#define compute_bit2idx(X, I)\
1964
{\
1965
  unsigned int J;\
1966
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1967
  I = (bindex_t)J;\
1968
}
1969
 
1970
#else /* GNUC */
1971
#if  USE_BUILTIN_FFS
1972
#define compute_bit2idx(X, I) I = ffs(X)-1
1973
 
1974
#else /* USE_BUILTIN_FFS */
1975
#define compute_bit2idx(X, I)\
1976
{\
1977
  unsigned int Y = X - 1;\
1978
  unsigned int K = Y >> (16-4) & 16;\
1979
  unsigned int N = K;        Y >>= K;\
1980
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1981
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1982
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1983
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1984
  I = (bindex_t)(N + Y);\
1985
}
1986
#endif /* USE_BUILTIN_FFS */
1987
#endif /* GNUC */
1988
 
1989
/* isolate the least set bit of a bitmap */
1990
#define least_bit(x)         ((x) & -(x))
1991
 
1992
/* mask with all bits to left of least bit of x on */
1993
#define left_bits(x)         ((x<<1) | -(x<<1))
1994
 
1995
/* mask with all bits to left of or equal to least bit of x on */
1996
#define same_or_left_bits(x) ((x) | -(x))
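
/*
  Illustrative sketch (disabled): the branch-free bin search the comment
  above describes -- mask away bins below the starting index, isolate the
  lowest remaining bit, and convert it back to a bin index.  The function
  name example_find_nonempty_smallbin is hypothetical.
*/
#if 0
static int example_find_nonempty_smallbin(mstate m, bindex_t start,
                                          bindex_t* result) {
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(start));
  if (candidates == 0)
    return 0;                              /* no non-empty bin at or above start */
  {
    binmap_t b = least_bit(candidates);    /* bit of the lowest qualifying bin */
    bindex_t i;
    compute_bit2idx(b, i);                 /* bit -> bin index */
    *result = i;
  }
  return 1;
}
#endif /* 0 */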
1997
 
1998
 
1999
/* ----------------------- Runtime Check Support ------------------------- */
2000
 
2001
/*
2002
  For security, the main invariant is that malloc/free/etc never
2003
  writes to a static address other than malloc_state, unless static
2004
  malloc_state itself has been corrupted, which cannot occur via
2005
  malloc (because of these checks). In essence this means that we
2006
  believe all pointers, sizes, maps etc held in malloc_state, but
2007
  check all of those linked or offsetted from other embedded data
2008
  structures.  These checks are interspersed with main code in a way
2009
  that tends to minimize their run-time cost.
2010
 
2011
  When FOOTERS is defined, in addition to range checking, we also
2012
  verify footer fields of inuse chunks, which can be used to guarantee
2013
  that the mstate controlling malloc/free is intact.  This is a
2014
  streamlined version of the approach described by William Robertson
2015
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2016
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2017
  of an inuse chunk holds the xor of its mstate and a random seed,
2018
  that is checked upon calls to free() and realloc().  This is
2019
  (probabilistically) unguessable from outside the program, but can be
2020
  computed by any code successfully malloc'ing any chunk, so does not
2021
  itself provide protection against code that has already broken
2022
  security through some other means.  Unlike Robertson et al, we
2023
  always dynamically check addresses of all offset chunks (previous,
2024
  next, etc). This turns out to be cheaper than relying on hashes.
2025
*/
2026
 
2027
#if !INSECURE
2028
/* Check if address a is at least as high as any from MORECORE or MMAP */
2029
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2030
/* Check if address of next chunk n is higher than base chunk p */
2031
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
2032
/* Check if p has its cinuse bit on */
2033
#define ok_cinuse(p)     cinuse(p)
2034
/* Check if p has its pinuse bit on */
2035
#define ok_pinuse(p)     pinuse(p)
2036
 
2037
#else /* !INSECURE */
2038
#define ok_address(M, a) (1)
2039
#define ok_next(b, n)    (1)
2040
#define ok_cinuse(p)     (1)
2041
#define ok_pinuse(p)     (1)
2042
#endif /* !INSECURE */
2043
 
2044
#if (FOOTERS && !INSECURE)
2045
/* Check if (alleged) mstate m has expected magic field */
2046
#define ok_magic(M)      ((M)->magic == mparams.magic)
2047
#else  /* (FOOTERS && !INSECURE) */
2048
#define ok_magic(M)      (1)
2049
#endif /* (FOOTERS && !INSECURE) */
2050
 
2051
 
2052
/* In gcc, use __builtin_expect to minimize impact of checks */
2053
#if !INSECURE
2054
#if defined(__GNUC__) && __GNUC__ >= 3
2055
#define RTCHECK(e)  __builtin_expect(e, 1)
2056
#else /* GNUC */
2057
#define RTCHECK(e)  (e)
2058
#endif /* GNUC */
2059
#else /* !INSECURE */
2060
#define RTCHECK(e)  (1)
2061
#endif /* !INSECURE */
2062
 
2063
/* macros to set up inuse chunks with or without footers */
2064
 
2065
#if !FOOTERS
2066
 
2067
#define mark_inuse_foot(M,p,s)
2068
 
2069
/* Set cinuse bit and pinuse bit of next chunk */
2070
#define set_inuse(M,p,s)\
2071
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2072
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2073
 
2074
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2075
#define set_inuse_and_pinuse(M,p,s)\
2076
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2077
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2078
 
2079
/* Set size, cinuse and pinuse bit of this chunk */
2080
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2081
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2082
 
2083
#else /* FOOTERS */
2084
 
2085
/* Set foot of inuse chunk to be xor of mstate and seed */
2086
#define mark_inuse_foot(M,p,s)\
2087
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2088
 
2089
#define get_mstate_for(p)\
2090
  ((mstate)(((mchunkptr)((char*)(p) +\
2091
    (chunksize(p))))->prev_foot ^ mparams.magic))
2092
 
2093
#define set_inuse(M,p,s)\
2094
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2095
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2096
  mark_inuse_foot(M,p,s))
2097
 
2098
#define set_inuse_and_pinuse(M,p,s)\
2099
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2100
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2101
 mark_inuse_foot(M,p,s))
2102
 
2103
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2104
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2105
  mark_inuse_foot(M, p, s))
2106
 
2107
#endif /* !FOOTERS */
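
/*
  Illustrative sketch (disabled): with FOOTERS defined, free/realloc can
  recover the owning mstate from the footer written by mark_inuse_foot
  and verify it with ok_magic, as described under "Runtime Check Support"
  above.  example_check_owner is a hypothetical name.
*/
#if 0 && FOOTERS
static int example_check_owner(void* mem) {
  mchunkptr p  = mem2chunk(mem);
  mstate    fm = get_mstate_for(p);    /* footer xor magic -> owning mstate */
  return ok_magic(fm);                 /* 0 if the footer is corrupt/foreign */
}
#endif /* 0 && FOOTERS */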
2108
 
2109
/* ---------------------------- setting mparams -------------------------- */
2110
 
2111
/* Initialize mparams */
2112
static int init_mparams(void) {
2113
  if (mparams.page_size == 0) {
2114
    size_t s;
2115
 
2116
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2117
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2118
#if MORECORE_CONTIGUOUS
2119
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2120
#else  /* MORECORE_CONTIGUOUS */
2121
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2122
#endif /* MORECORE_CONTIGUOUS */
2123
 
2124
#if (FOOTERS && !INSECURE)
2125
    {
2126
#if USE_DEV_RANDOM
2127
      int fd;
2128
      unsigned char buf[sizeof(size_t)];
2129
      /* Try to use /dev/urandom, else fall back on using time */
2130
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2131
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2132
        s = *((size_t *) buf);
2133
        close(fd);
2134
      }
2135
      else
2136
#endif /* USE_DEV_RANDOM */
2137
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
2138
 
2139
      s |= (size_t)8U;    /* ensure nonzero */
2140
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2141
 
2142
    }
2143
#else /* (FOOTERS && !INSECURE) */
2144
    s = (size_t)0x58585858U;
2145
#endif /* (FOOTERS && !INSECURE) */
2146
    ACQUIRE_MAGIC_INIT_LOCK();
2147
    if (mparams.magic == 0) {
2148
      mparams.magic = s;
2149
      /* Set up lock for main malloc area */
2150
      INITIAL_LOCK(&gm->mutex);
2151
      gm->mflags = mparams.default_mflags;
2152
    }
2153
    RELEASE_MAGIC_INIT_LOCK();
2154
 
2155
    mparams.page_size = 4096;
2156
    mparams.granularity = 16384;
2157
 
2158
    /* Sanity-check configuration:
2159
       size_t must be unsigned and as wide as pointer type.
2160
       ints must be at least 4 bytes.
2161
       alignment must be at least 8.
2162
       Alignment, min chunk size, and page size must all be powers of 2.
2163
    */
2164
    if ((sizeof(size_t) != sizeof(char*)) ||
2165
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2166
        (sizeof(int) < 4)  ||
2167
        (MALLOC_ALIGNMENT < (size_t)8U) ||
2168
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2169
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2170
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2171
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2172
      ABORT;
2173
  }
2174
  return 0;
2175
}
2176
 
2177
/* support for mallopt */
2178
static int change_mparam(int param_number, int value) {
2179
  size_t val = (size_t)value;
2180
  init_mparams();
2181
  switch(param_number) {
2182
  case M_TRIM_THRESHOLD:
2183
    mparams.trim_threshold = val;
2184
    return 1;
2185
  case M_GRANULARITY:
2186
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2187
      mparams.granularity = val;
2188
      return 1;
2189
    }
2190
    else
2191
      return 0;
2192
  case M_MMAP_THRESHOLD:
2193
    mparams.mmap_threshold = val;
2194
    return 1;
2195
  default:
2196
    return 0;
2197
  }
2198
}
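
/*
  Illustrative sketch (disabled): mallopt-style tuning goes through
  change_mparam; a granularity change is accepted only when the value is
  a power of two no smaller than the page size.  The function name
  example_tune_granularity is hypothetical.
*/
#if 0
static int example_tune_granularity(void) {
  /* returns 1 on success, 0 if the value is rejected */
  return change_mparam(M_GRANULARITY, 64 * 1024);
}
#endif /* 0 */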
2199
 
2200
#if DEBUG
2201
#endif /* DEBUG */
2202
 
2203
/* ----------------------------- statistics ------------------------------ */
2204
 
2205
#if !NO_MALLINFO
2206
#endif /* !NO_MALLINFO */
2207
 
2208
/* ----------------------- Operations on smallbins ----------------------- */
2209
 
2210
/*
2211
  Various forms of linking and unlinking are defined as macros.  Even
2212
  the ones for trees, which are very long but have very short typical
2213
  paths.  This is ugly but reduces reliance on inlining support of
2214
  compilers.
2215
*/
2216
 
2217
/* Link a free chunk into a smallbin  */
2218
#define insert_small_chunk(M, P, S) {\
2219
  bindex_t I  = small_index(S);\
2220
  mchunkptr B = smallbin_at(M, I);\
2221
  mchunkptr F = B;\
2222
  assert(S >= MIN_CHUNK_SIZE);\
2223
  if (!smallmap_is_marked(M, I))\
2224
    mark_smallmap(M, I);\
2225
  else if (RTCHECK(ok_address(M, B->fd)))\
2226
    F = B->fd;\
2227
  else {\
2228
    CORRUPTION_ERROR_ACTION(M);\
2229
  }\
2230
  B->fd = P;\
2231
  F->bk = P;\
2232
  P->fd = F;\
2233
  P->bk = B;\
2234
}
2235
 
2236
/* Unlink a chunk from a smallbin  */
2237
#define unlink_small_chunk(M, P, S) {\
2238
  mchunkptr F = P->fd;\
2239
  mchunkptr B = P->bk;\
2240
  bindex_t I = small_index(S);\
2241
  assert(P != B);\
2242
  assert(P != F);\
2243
  assert(chunksize(P) == small_index2size(I));\
2244
  if (F == B)\
2245
    clear_smallmap(M, I);\
2246
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2247
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2248
    F->bk = B;\
2249
    B->fd = F;\
2250
  }\
2251
  else {\
2252
    CORRUPTION_ERROR_ACTION(M);\
2253
  }\
2254
}
2255
 
2256
/* Unlink the first chunk from a smallbin */
2257
#define unlink_first_small_chunk(M, B, P, I) {\
2258
  mchunkptr F = P->fd;\
2259
  assert(P != B);\
2260
  assert(P != F);\
2261
  assert(chunksize(P) == small_index2size(I));\
2262
  if (B == F)\
2263
    clear_smallmap(M, I);\
2264
  else if (RTCHECK(ok_address(M, F))) {\
2265
    B->fd = F;\
2266
    F->bk = B;\
2267
  }\
2268
  else {\
2269
    CORRUPTION_ERROR_ACTION(M);\
2270
  }\
2271
}
2272
 
2273
/* Replace dv node, binning the old one */
2274
/* Used only when dvsize known to be small */
2275
#define replace_dv(M, P, S) {\
2276
  size_t DVS = M->dvsize;\
2277
  if (DVS != 0) {\
2278
    mchunkptr DV = M->dv;\
2279
    assert(is_small(DVS));\
2280
    insert_small_chunk(M, DV, DVS);\
2281
  }\
2282
  M->dvsize = S;\
2283
  M->dv = P;\
2284
}
2285
 
2286
/* ------------------------- Operations on trees ------------------------- */
2287
 
2288
/* Insert chunk into tree */
2289
#define insert_large_chunk(M, X, S) {\
2290
  tbinptr* H;\
2291
  bindex_t I;\
2292
  compute_tree_index(S, I);\
2293
  H = treebin_at(M, I);\
2294
  X->index = I;\
2295
  X->child[0] = X->child[1] = 0;\
2296
  if (!treemap_is_marked(M, I)) {\
2297
    mark_treemap(M, I);\
2298
    *H = X;\
2299
    X->parent = (tchunkptr)H;\
2300
    X->fd = X->bk = X;\
2301
  }\
2302
  else {\
2303
    tchunkptr T = *H;\
2304
    size_t K = S << leftshift_for_tree_index(I);\
2305
    for (;;) {\
2306
      if (chunksize(T) != S) {\
2307
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2308
        K <<= 1;\
2309
        if (*C != 0)\
2310
          T = *C;\
2311
        else if (RTCHECK(ok_address(M, C))) {\
2312
          *C = X;\
2313
          X->parent = T;\
2314
          X->fd = X->bk = X;\
2315
          break;\
2316
        }\
2317
        else {\
2318
          CORRUPTION_ERROR_ACTION(M);\
2319
          break;\
2320
        }\
2321
      }\
2322
      else {\
2323
        tchunkptr F = T->fd;\
2324
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2325
          T->fd = F->bk = X;\
2326
          X->fd = F;\
2327
          X->bk = T;\
2328
          X->parent = 0;\
2329
          break;\
2330
        }\
2331
        else {\
2332
          CORRUPTION_ERROR_ACTION(M);\
2333
          break;\
2334
        }\
2335
      }\
2336
    }\
2337
  }\
2338
}
2339
 
2340
/*
2341
  Unlink steps:
2342
 
2343
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2344
     and choose its bk node as its replacement.
2345
  2. If x was the last node of its size, but not a leaf node, it must
2346
     be replaced with a leaf node (not merely one with an open left or
2347
     right), to make sure that lefts and rights of descendents
2348
     correspond properly to bit masks.  We use the rightmost descendent
2349
     of x.  We could use any other leaf, but this is easy to locate and
2350
     tends to counteract removal of leftmosts elsewhere, and so keeps
2351
     paths shorter than minimally guaranteed.  This doesn't loop much
2352
     because on average a node in a tree is near the bottom.
2353
  3. If x is the base of a chain (i.e., has parent links) relink
2354
     x's parent and children to x's replacement (or null if none).
2355
*/
2356
 
2357
#define unlink_large_chunk(M, X) {\
2358
  tchunkptr XP = X->parent;\
2359
  tchunkptr R;\
2360
  if (X->bk != X) {\
2361
    tchunkptr F = X->fd;\
2362
    R = X->bk;\
2363
    if (RTCHECK(ok_address(M, F))) {\
2364
      F->bk = R;\
2365
      R->fd = F;\
2366
    }\
2367
    else {\
2368
      CORRUPTION_ERROR_ACTION(M);\
2369
    }\
2370
  }\
2371
  else {\
2372
    tchunkptr* RP;\
2373
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2374
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2375
      tchunkptr* CP;\
2376
      while ((*(CP = &(R->child[1])) != 0) ||\
2377
             (*(CP = &(R->child[0])) != 0)) {\
2378
        R = *(RP = CP);\
2379
      }\
2380
      if (RTCHECK(ok_address(M, RP)))\
2381
        *RP = 0;\
2382
      else {\
2383
        CORRUPTION_ERROR_ACTION(M);\
2384
      }\
2385
    }\
2386
  }\
2387
  if (XP != 0) {\
2388
    tbinptr* H = treebin_at(M, X->index);\
2389
    if (X == *H) {\
2390
      if ((*H = R) == 0) \
2391
        clear_treemap(M, X->index);\
2392
    }\
2393
    else if (RTCHECK(ok_address(M, XP))) {\
2394
      if (XP->child[0] == X) \
2395
        XP->child[0] = R;\
2396
      else \
2397
        XP->child[1] = R;\
2398
    }\
2399
    else\
2400
      CORRUPTION_ERROR_ACTION(M);\
2401
    if (R != 0) {\
2402
      if (RTCHECK(ok_address(M, R))) {\
2403
        tchunkptr C0, C1;\
2404
        R->parent = XP;\
2405
        if ((C0 = X->child[0]) != 0) {\
2406
          if (RTCHECK(ok_address(M, C0))) {\
2407
            R->child[0] = C0;\
2408
            C0->parent = R;\
2409
          }\
2410
          else\
2411
            CORRUPTION_ERROR_ACTION(M);\
2412
        }\
2413
        if ((C1 = X->child[1]) != 0) {\
2414
          if (RTCHECK(ok_address(M, C1))) {\
2415
            R->child[1] = C1;\
2416
            C1->parent = R;\
2417
          }\
2418
          else\
2419
            CORRUPTION_ERROR_ACTION(M);\
2420
        }\
2421
      }\
2422
      else\
2423
        CORRUPTION_ERROR_ACTION(M);\
2424
    }\
2425
  }\
2426
}
2427
 
2428
/* Relays to large vs small bin operations */
2429
 
2430
#define insert_chunk(M, P, S)\
2431
  if (is_small(S)) insert_small_chunk(M, P, S)\
2432
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2433
 
2434
#define unlink_chunk(M, P, S)\
2435
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2436
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
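
/*
  Illustrative sketch (disabled): the relays above dispatch on chunk size,
  so a caller can bin a free chunk without caring whether it lands in a
  smallbin list or a treebin.  example_bin_free_chunk is a hypothetical
  name; real callers (free, realloc, sys_alloc) also fix up the head and
  foot fields before inserting.
*/
#if 0
static void example_bin_free_chunk(mstate m, mchunkptr p) {
  size_t psize = chunksize(p);
  insert_chunk(m, p, psize);    /* smallbin or treebin, chosen by is_small */
}
#endif /* 0 */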
2437
 
2438
 
2439
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2440
 
2441
#if ONLY_MSPACES
2442
#define internal_malloc(m, b) mspace_malloc(m, b)
2443
#define internal_free(m, mem) mspace_free(m,mem);
2444
#else /* ONLY_MSPACES */
2445
#if MSPACES
2446
#define internal_malloc(m, b)\
2447
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2448
#define internal_free(m, mem)\
2449
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2450
#else /* MSPACES */
2451
#define internal_malloc(m, b) dlmalloc(b)
2452
#define internal_free(m, mem) dlfree(mem)
2453
#endif /* MSPACES */
2454
#endif /* ONLY_MSPACES */
2455
 
2456
/* -----------------------  Direct-mmapping chunks ----------------------- */
2457
 
2458
/*
2459
  Directly mmapped chunks are set up with an offset to the start of
2460
  the mmapped region stored in the prev_foot field of the chunk. This
2461
  allows reconstruction of the required argument to MUNMAP when freed,
2462
  and also allows adjustment of the returned chunk to meet alignment
2463
  requirements (especially in memalign).  There is also enough space
2464
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2465
  the PINUSE bit so frees can be checked.
2466
*/
2467
 
2468
/* Malloc using mmap */
2469
static void* mmap_alloc(mstate m, size_t nb) {
2470
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2471
  if (mmsize > nb) {     /* Check for wrap around 0 */
2472
    char* mm = (char*)(DIRECT_MMAP(mmsize));
2473
    if (mm != CMFAIL) {
2474
      size_t offset = align_offset(chunk2mem(mm));
2475
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2476
      mchunkptr p = (mchunkptr)(mm + offset);
2477
      p->prev_foot = offset | IS_MMAPPED_BIT;
2478
      (p)->head = (psize|CINUSE_BIT);
2479
      mark_inuse_foot(m, p, psize);
2480
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2481
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2482
 
2483
      if (mm < m->least_addr)
2484
        m->least_addr = mm;
2485
      if ((m->footprint += mmsize) > m->max_footprint)
2486
        m->max_footprint = m->footprint;
2487
      assert(is_aligned(chunk2mem(p)));
2488
      check_mmapped_chunk(m, p);
2489
      return chunk2mem(p);
2490
    }
2491
  }
2492
  return 0;
2493
}
2494
 
2495
/* Realloc using mmap */
2496
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2497
  size_t oldsize = chunksize(oldp);
2498
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2499
    return 0;
2500
  /* Keep old chunk if big enough but not too big */
2501
  if (oldsize >= nb + SIZE_T_SIZE &&
2502
      (oldsize - nb) <= (mparams.granularity << 1))
2503
    return oldp;
2504
  else {
2505
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2506
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2507
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2508
                                         CHUNK_ALIGN_MASK);
2509
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2510
                                  oldmmsize, newmmsize, 1);
2511
    if (cp != CMFAIL) {
2512
      mchunkptr newp = (mchunkptr)(cp + offset);
2513
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2514
      newp->head = (psize|CINUSE_BIT);
2515
      mark_inuse_foot(m, newp, psize);
2516
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2517
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2518
 
2519
      if (cp < m->least_addr)
2520
        m->least_addr = cp;
2521
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2522
        m->max_footprint = m->footprint;
2523
      check_mmapped_chunk(m, newp);
2524
      return newp;
2525
    }
2526
  }
2527
  return 0;
2528
}
2529
 
2530
/* -------------------------- mspace management -------------------------- */
2531
 
2532
/* Initialize top chunk and its size */
2533
static void init_top(mstate m, mchunkptr p, size_t psize) {
2534
  /* Ensure alignment */
2535
  size_t offset = align_offset(chunk2mem(p));
2536
  p = (mchunkptr)((char*)p + offset);
2537
  psize -= offset;
2538
 
2539
  m->top = p;
2540
  m->topsize = psize;
2541
  p->head = psize | PINUSE_BIT;
2542
  /* set size of fake trailing chunk holding overhead space only once */
2543
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2544
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2545
}
2546
 
2547
/* Initialize bins for a new mstate that is otherwise zeroed out */
2548
static void init_bins(mstate m) {
2549
  /* Establish circular links for smallbins */
2550
  bindex_t i;
2551
  for (i = 0; i < NSMALLBINS; ++i) {
2552
    sbinptr bin = smallbin_at(m,i);
2553
    bin->fd = bin->bk = bin;
2554
  }
2555
}
2556
 
2557
#if PROCEED_ON_ERROR
2558
 
2559
/* default corruption action */
2560
static void reset_on_error(mstate m) {
2561
  int i;
2562
  ++malloc_corruption_error_count;
2563
  /* Reinitialize fields to forget about all memory */
2564
  m->smallbins = m->treebins = 0;
2565
  m->dvsize = m->topsize = 0;
2566
  m->seg.base = 0;
2567
  m->seg.size = 0;
2568
  m->seg.next = 0;
2569
  m->top = m->dv = 0;
2570
  for (i = 0; i < NTREEBINS; ++i)
2571
    *treebin_at(m, i) = 0;
2572
  init_bins(m);
2573
}
2574
#endif /* PROCEED_ON_ERROR */
2575
 
2576
/* Allocate chunk and prepend remainder with chunk in successor base. */
2577
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2578
                           size_t nb) {
2579
  mchunkptr p = align_as_chunk(newbase);
2580
  mchunkptr oldfirst = align_as_chunk(oldbase);
2581
  size_t psize = (char*)oldfirst - (char*)p;
2582
  mchunkptr q = chunk_plus_offset(p, nb);
2583
  size_t qsize = psize - nb;
2584
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2585
 
2586
  assert((char*)oldfirst > (char*)q);
2587
  assert(pinuse(oldfirst));
2588
  assert(qsize >= MIN_CHUNK_SIZE);
2589
 
2590
  /* consolidate remainder with first chunk of old base */
2591
  if (oldfirst == m->top) {
2592
    size_t tsize = m->topsize += qsize;
2593
    m->top = q;
2594
    q->head = tsize | PINUSE_BIT;
2595
    check_top_chunk(m, q);
2596
  }
2597
  else if (oldfirst == m->dv) {
2598
    size_t dsize = m->dvsize += qsize;
2599
    m->dv = q;
2600
    set_size_and_pinuse_of_free_chunk(q, dsize);
2601
  }
2602
  else {
2603
    if (!cinuse(oldfirst)) {
2604
      size_t nsize = chunksize(oldfirst);
2605
      unlink_chunk(m, oldfirst, nsize);
2606
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2607
      qsize += nsize;
2608
    }
2609
    set_free_with_pinuse(q, qsize, oldfirst);
2610
    insert_chunk(m, q, qsize);
2611
    check_free_chunk(m, q);
2612
  }
2613
 
2614
  check_malloced_chunk(m, chunk2mem(p), nb);
2615
  return chunk2mem(p);
2616
}
2617
 
2618
 
2619
/* Add a segment to hold a new noncontiguous region */
2620
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2621
  /* Determine locations and sizes of segment, fenceposts, old top */
2622
  char* old_top = (char*)m->top;
2623
  msegmentptr oldsp = segment_holding(m, old_top);
2624
  char* old_end = oldsp->base + oldsp->size;
2625
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2626
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2627
  size_t offset = align_offset(chunk2mem(rawsp));
2628
  char* asp = rawsp + offset;
2629
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2630
  mchunkptr sp = (mchunkptr)csp;
2631
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2632
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2633
  mchunkptr p = tnext;
2634
  int nfences = 0;
2635
 
2636
  /* reset top to new space */
2637
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2638
 
2639
  /* Set up segment record */
2640
  assert(is_aligned(ss));
2641
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2642
  *ss = m->seg; /* Push current record */
2643
  m->seg.base = tbase;
2644
  m->seg.size = tsize;
2645
  m->seg.sflags = mmapped;
2646
  m->seg.next = ss;
2647
 
2648
  /* Insert trailing fenceposts */
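  /*
    Fenceposts are minimal pseudo-chunks whose heads carry FENCEPOST_HEAD
    (an in-use marking), so traversal and forward coalescing within the
    old segment stop at old_end instead of running into the new,
    noncontiguous space.
  */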
2649
  for (;;) {
2650
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2651
    p->head = FENCEPOST_HEAD;
2652
    ++nfences;
2653
    if ((char*)(&(nextp->head)) < old_end)
2654
      p = nextp;
2655
    else
2656
      break;
2657
  }
2658
  assert(nfences >= 2);
2659
 
2660
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2661
  if (csp != old_top) {
2662
    mchunkptr q = (mchunkptr)old_top;
2663
    size_t psize = csp - old_top;
2664
    mchunkptr tn = chunk_plus_offset(q, psize);
2665
    set_free_with_pinuse(q, psize, tn);
2666
    insert_chunk(m, q, psize);
2667
  }
2668
 
2669
  check_top_chunk(m, m->top);
2670
}
2671
 
2672
/* -------------------------- System allocation -------------------------- */
2673
 
2674
/* Get memory from system using MORECORE or MMAP */
2675
static void* sys_alloc(mstate m, size_t nb) {
2676
  char* tbase = CMFAIL;
2677
  size_t tsize = 0;
2678
  flag_t mmap_flag = 0;
2679
 
2680
  init_mparams();
2681
 
2682
  /* Directly map large chunks */
2683
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2684
    void* mem = mmap_alloc(m, nb);
2685
    if (mem != 0)
2686
      return mem;
2687
  }
2688
 
2689
  /*
2690
    Try getting memory in any of three ways (in most-preferred to
2691
    least-preferred order):
2692
    1. A call to MORECORE that can normally contiguously extend memory.
2693
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2694
       main space is mmapped or a previous contiguous call failed)
2695
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2696
       Note that under the default settings, if MORECORE is unable to
2697
       fulfill a request, and HAVE_MMAP is true, then mmap is
2698
       used as a noncontiguous system allocator. This is a useful backup
2699
       strategy for systems with holes in address spaces -- in this case
2700
       sbrk cannot contiguously expand the heap, but mmap may be able to
2701
       find space.
2702
    3. A call to MORECORE that cannot usually contiguously extend memory.
2703
       (disabled if not HAVE_MORECORE)
2704
  */
2705
 
2706
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2707
    char* br = CMFAIL;
2708
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2709
    size_t asize = 0;
2710
    ACQUIRE_MORECORE_LOCK();
2711
 
2712
    if (ss == 0) {  /* First time through or recovery */
2713
      char* base = (char*)CALL_MORECORE(0);
2714
      if (base != CMFAIL) {
2715
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2716
        /* Adjust to end on a page boundary */
2717
        if (!is_page_aligned(base))
2718
          asize += (page_align((size_t)base) - (size_t)base);
2719
        /* Can't call MORECORE if size is negative when treated as signed */
2720
        if (asize < HALF_MAX_SIZE_T &&
2721
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2722
          tbase = base;
2723
          tsize = asize;
2724
        }
2725
      }
2726
    }
2727
    else {
2728
      /* Subtract out existing available top space from MORECORE request. */
2729
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2730
      /* Use mem here only if it did continuously extend old space */
2731
      if (asize < HALF_MAX_SIZE_T &&
2732
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2733
        tbase = br;
2734
        tsize = asize;
2735
      }
2736
    }
2737
 
2738
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2739
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2740
        if (asize < HALF_MAX_SIZE_T &&
2741
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2742
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2743
          if (esize < HALF_MAX_SIZE_T) {
2744
            char* end = (char*)CALL_MORECORE(esize);
2745
            if (end != CMFAIL)
2746
              asize += esize;
2747
            else {            /* Can't use; try to release */
2748
              CALL_MORECORE(-asize);
2749
              br = CMFAIL;
2750
            }
2751
          }
2752
        }
2753
      }
2754
      if (br != CMFAIL) {    /* Use the space we did get */
2755
        tbase = br;
2756
        tsize = asize;
2757
      }
2758
      else
2759
        disable_contiguous(m); /* Don't try contiguous path in the future */
2760
    }
2761
 
2762
    RELEASE_MORECORE_LOCK();
2763
  }
2764
 
2765
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2766
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2767
    size_t rsize = granularity_align(req);
2768
    if (rsize > nb) { /* Fail if wraps around zero */
2769
      char* mp = (char*)(CALL_MMAP(rsize));
2770
      if (mp != CMFAIL) {
2771
        tbase = mp;
2772
        tsize = rsize;
2773
        mmap_flag = IS_MMAPPED_BIT;
2774
      }
2775
    }
2776
  }
2777
 
2778
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2779
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2780
    if (asize < HALF_MAX_SIZE_T) {
2781
      char* br = CMFAIL;
2782
      char* end = CMFAIL;
2783
      ACQUIRE_MORECORE_LOCK();
2784
      br = (char*)(CALL_MORECORE(asize));
2785
      end = (char*)(CALL_MORECORE(0));
2786
      RELEASE_MORECORE_LOCK();
2787
      if (br != CMFAIL && end != CMFAIL && br < end) {
2788
        size_t ssize = end - br;
2789
        if (ssize > nb + TOP_FOOT_SIZE) {
2790
          tbase = br;
2791
          tsize = ssize;
2792
        }
2793
      }
2794
    }
2795
  }
2796
 
2797
  if (tbase != CMFAIL) {
2798
 
2799
    if ((m->footprint += tsize) > m->max_footprint)
2800
      m->max_footprint = m->footprint;
2801
 
2802
    if (!is_initialized(m)) { /* first-time initialization */
2803
      m->seg.base = m->least_addr = tbase;
2804
      m->seg.size = tsize;
2805
      m->seg.sflags = mmap_flag;
2806
      m->magic = mparams.magic;
2807
      init_bins(m);
2808
      if (is_global(m))
2809
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2810
      else {
2811
        /* Offset top by embedded malloc_state */
2812
        mchunkptr mn = next_chunk(mem2chunk(m));
2813
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2814
      }
2815
    }
2816
 
2817
    else {
2818
      /* Try to merge with an existing segment */
2819
      msegmentptr sp = &m->seg;
2820
      while (sp != 0 && tbase != sp->base + sp->size)
2821
        sp = sp->next;
2822
      if (sp != 0 &&
2823
          !is_extern_segment(sp) &&
2824
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2825
          segment_holds(sp, m->top)) { /* append */
2826
        sp->size += tsize;
2827
        init_top(m, m->top, m->topsize + tsize);
2828
      }
2829
      else {
2830
        if (tbase < m->least_addr)
2831
          m->least_addr = tbase;
2832
        sp = &m->seg;
2833
        while (sp != 0 && sp->base != tbase + tsize)
2834
          sp = sp->next;
2835
        if (sp != 0 &&
2836
            !is_extern_segment(sp) &&
2837
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2838
          char* oldbase = sp->base;
2839
          sp->base = tbase;
2840
          sp->size += tsize;
2841
          return prepend_alloc(m, tbase, oldbase, nb);
2842
        }
2843
        else
2844
          add_segment(m, tbase, tsize, mmap_flag);
2845
      }
2846
    }
2847
 
2848
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2849
      size_t rsize = m->topsize -= nb;
2850
      mchunkptr p = m->top;
2851
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2852
      r->head = rsize | PINUSE_BIT;
2853
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2854
      check_top_chunk(m, m->top);
2855
      check_malloced_chunk(m, chunk2mem(p), nb);
2856
      return chunk2mem(p);
2857
    }
2858
  }
2859
 
2860
  MALLOC_FAILURE_ACTION;
2861
  return 0;
2862
}
2863
 
2864
/* -----------------------  system deallocation -------------------------- */
2865
 
2866
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2867
static size_t release_unused_segments(mstate m) {
2868
  size_t released = 0;
2869
  msegmentptr pred = &m->seg;
2870
  msegmentptr sp = pred->next;
2871
  while (sp != 0) {
2872
    char* base = sp->base;
2873
    size_t size = sp->size;
2874
    msegmentptr next = sp->next;
2875
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2876
      mchunkptr p = align_as_chunk(base);
2877
      size_t psize = chunksize(p);
2878
      /* Can unmap if first chunk holds entire segment and not pinned */
2879
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2880
        tchunkptr tp = (tchunkptr)p;
2881
        assert(segment_holds(sp, (char*)sp));
2882
        if (p == m->dv) {
2883
          m->dv = 0;
2884
          m->dvsize = 0;
2885
        }
2886
        else {
2887
          unlink_large_chunk(m, tp);
2888
        }
2889
        if (CALL_MUNMAP(base, size) == 0) {
2890
          released += size;
2891
          m->footprint -= size;
2892
          /* unlink obsoleted record */
2893
          sp = pred;
2894
          sp->next = next;
2895
        }
2896
        else { /* back out if cannot unmap */
2897
          insert_large_chunk(m, tp, psize);
2898
        }
2899
      }
2900
    }
2901
    pred = sp;
2902
    sp = next;
2903
  }
2904
  return released;
2905
}
2906
 
2907
static int sys_trim(mstate m, size_t pad) {
2908
  size_t released = 0;
2909
  if (pad < MAX_REQUEST && is_initialized(m)) {
2910
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2911
 
2912
    if (m->topsize > pad) {
2913
      /* Shrink top space in granularity-size units, keeping at least one */
2914
      size_t unit = mparams.granularity;
2915
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2916
                      SIZE_T_ONE) * unit;
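      /*
        For example (granularity assumed to be 64K): with topsize - pad of
        200K, the rounded-up count is 4 units, and releasing one fewer
        gives extra == 192K, leaving 8K plus pad in the top chunk.
      */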
2917
      msegmentptr sp = segment_holding(m, (char*)m->top);
2918
 
2919
      if (!is_extern_segment(sp)) {
2920
        if (is_mmapped_segment(sp)) {
2921
          if (HAVE_MMAP &&
2922
              sp->size >= extra &&
2923
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2924
            size_t newsize = sp->size - extra;
2925
            /* Prefer mremap, fall back to munmap */
2926
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2927
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
2928
              released = extra;
2929
            }
2930
          }
2931
        }
2932
        else if (HAVE_MORECORE) {
2933
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2934
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2935
          ACQUIRE_MORECORE_LOCK();
2936
          {
2937
            /* Make sure end of memory is where we last set it. */
2938
            char* old_br = (char*)(CALL_MORECORE(0));
2939
            if (old_br == sp->base + sp->size) {
2940
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2941
              char* new_br = (char*)(CALL_MORECORE(0));
2942
              if (rel_br != CMFAIL && new_br < old_br)
2943
                released = old_br - new_br;
2944
            }
2945
          }
2946
          RELEASE_MORECORE_LOCK();
2947
        }
2948
      }
2949
 
2950
      if (released != 0) {
2951
        sp->size -= released;
2952
        m->footprint -= released;
2953
        init_top(m, m->top, m->topsize - released);
2954
        check_top_chunk(m, m->top);
2955
      }
2956
    }
2957
 
2958
    /* Unmap any unused mmapped segments */
2959
    if (HAVE_MMAP)
2960
      released += release_unused_segments(m);
2961
 
2962
    /* On failure, disable autotrim to avoid repeated failed future calls */
2963
    if (released == 0)
2964
      m->trim_check = MAX_SIZE_T;
2965
  }
2966
 
2967
  return (released != 0)? 1 : 0;
2968
}
2969
 
2970
/* ---------------------------- malloc support --------------------------- */
2971
 
2972
/* allocate a large request from the best fitting chunk in a treebin */
2973
static void* tmalloc_large(mstate m, size_t nb) {
2974
  tchunkptr v = 0;
2975
  size_t rsize = -nb; /* Unsigned negation */
2976
  tchunkptr t;
2977
  bindex_t idx;
2978
  compute_tree_index(nb, idx);
2979
 
2980
  if ((t = *treebin_at(m, idx)) != 0) {
2981
    /* Traverse tree for this bin looking for node with size == nb */
2982
    size_t sizebits = nb << leftshift_for_tree_index(idx);
2983
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
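    /*
      sizebits holds the request bits relevant to this trie, most
      significant bit first; at each level the current top bit selects
      which child to descend, and any right child bypassed on the way is
      remembered in rst, the least subtree known to hold only sizes > nb.
    */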
2984
    for (;;) {
2985
      tchunkptr rt;
2986
      size_t trem = chunksize(t) - nb;
2987
      if (trem < rsize) {
2988
        v = t;
2989
        if ((rsize = trem) == 0)
2990
          break;
2991
      }
2992
      rt = t->child[1];
2993
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2994
      if (rt != 0 && rt != t)
2995
        rst = rt;
2996
      if (t == 0) {
2997
        t = rst; /* set t to least subtree holding sizes > nb */
2998
        break;
2999
      }
3000
      sizebits <<= 1;
3001
    }
3002
  }
3003
 
3004
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3005
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3006
    if (leftbits != 0) {
3007
      bindex_t i;
3008
      binmap_t leastbit = least_bit(leftbits);
3009
      compute_bit2idx(leastbit, i);
3010
      t = *treebin_at(m, i);
3011
    }
3012
  }
3013
 
3014
  while (t != 0) { /* find smallest of tree or subtree */
3015
    size_t trem = chunksize(t) - nb;
3016
    if (trem < rsize) {
3017
      rsize = trem;
3018
      v = t;
3019
    }
3020
    t = leftmost_child(t);
3021
  }
3022
 
3023
  /*  If dv is a better fit, return 0 so malloc will use it */
3024
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3025
    if (RTCHECK(ok_address(m, v))) { /* split */
3026
      mchunkptr r = chunk_plus_offset(v, nb);
3027
      assert(chunksize(v) == rsize + nb);
3028
      if (RTCHECK(ok_next(v, r))) {
3029
        unlink_large_chunk(m, v);
3030
        if (rsize < MIN_CHUNK_SIZE)
3031
          set_inuse_and_pinuse(m, v, (rsize + nb));
3032
        else {
3033
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3034
          set_size_and_pinuse_of_free_chunk(r, rsize);
3035
          insert_chunk(m, r, rsize);
3036
        }
3037
        return chunk2mem(v);
3038
      }
3039
    }
3040
    CORRUPTION_ERROR_ACTION(m);
3041
  }
3042
  return 0;
3043
}
3044
 
3045
/* allocate a small request from the best fitting chunk in a treebin */
3046
static void* tmalloc_small(mstate m, size_t nb) {
3047
  tchunkptr t, v;
3048
  size_t rsize;
3049
  bindex_t i;
3050
  binmap_t leastbit = least_bit(m->treemap);
3051
  compute_bit2idx(leastbit, i);
3052
 
3053
  v = t = *treebin_at(m, i);
3054
  rsize = chunksize(t) - nb;
3055
 
3056
  while ((t = leftmost_child(t)) != 0) {
3057
    size_t trem = chunksize(t) - nb;
3058
    if (trem < rsize) {
3059
      rsize = trem;
3060
      v = t;
3061
    }
3062
  }
3063
 
3064
  if (RTCHECK(ok_address(m, v))) {
3065
    mchunkptr r = chunk_plus_offset(v, nb);
3066
    assert(chunksize(v) == rsize + nb);
3067
    if (RTCHECK(ok_next(v, r))) {
3068
      unlink_large_chunk(m, v);
3069
      if (rsize < MIN_CHUNK_SIZE)
3070
        set_inuse_and_pinuse(m, v, (rsize + nb));
3071
      else {
3072
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3073
        set_size_and_pinuse_of_free_chunk(r, rsize);
3074
        replace_dv(m, r, rsize);
3075
      }
3076
      return chunk2mem(v);
3077
    }
3078
  }
3079
 
3080
  CORRUPTION_ERROR_ACTION(m);
3081
  return 0;
3082
}
3083
 
3084
/* --------------------------- realloc support --------------------------- */
3085
 
3086
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3087
  if (bytes >= MAX_REQUEST) {
3088
    MALLOC_FAILURE_ACTION;
3089
    return 0;
3090
  }
3091
  if (!PREACTION(m)) {
3092
    mchunkptr oldp = mem2chunk(oldmem);
3093
    size_t oldsize = chunksize(oldp);
3094
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3095
    mchunkptr newp = 0;
3096
    void* extra = 0;
3097
 
3098
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3099
 
3100
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3101
                ok_next(oldp, next) && ok_pinuse(next))) {
3102
      size_t nb = request2size(bytes);
3103
      if (is_mmapped(oldp))
3104
        newp = mmap_resize(m, oldp, nb);
3105
      else if (oldsize >= nb) { /* already big enough */
3106
        size_t rsize = oldsize - nb;
3107
        newp = oldp;
3108
        if (rsize >= MIN_CHUNK_SIZE) {
3109
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3110
          set_inuse(m, newp, nb);
3111
          set_inuse(m, remainder, rsize);
3112
          extra = chunk2mem(remainder);
3113
        }
3114
      }
3115
      else if (next == m->top && oldsize + m->topsize > nb) {
3116
        /* Expand into top */
3117
        size_t newsize = oldsize + m->topsize;
3118
        size_t newtopsize = newsize - nb;
3119
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3120
        set_inuse(m, oldp, nb);
3121
        newtop->head = newtopsize |PINUSE_BIT;
3122
        m->top = newtop;
3123
        m->topsize = newtopsize;
3124
        newp = oldp;
3125
      }
3126
    }
3127
    else {
3128
      USAGE_ERROR_ACTION(m, oldmem);
3129
      POSTACTION(m);
3130
      return 0;
3131
    }
3132
 
3133
    POSTACTION(m);
3134
 
3135
    if (newp != 0) {
3136
      if (extra != 0) {
3137
        internal_free(m, extra);
3138
      }
3139
      check_inuse_chunk(m, newp);
3140
      return chunk2mem(newp);
3141
    }
3142
    else {
3143
      void* newmem = internal_malloc(m, bytes);
3144
      if (newmem != 0) {
3145
        size_t oc = oldsize - overhead_for(oldp);
3146
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3147
        internal_free(m, oldmem);
3148
      }
3149
      return newmem;
3150
    }
3151
  }
3152
  return 0;
3153
}
3154
 
3155
/* --------------------------- memalign support -------------------------- */
3156
 
3157
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3158
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3159
    return internal_malloc(m, bytes);
3160
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3161
    alignment = MIN_CHUNK_SIZE;
3162
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3163
    size_t a = MALLOC_ALIGNMENT << 1;
3164
    while (a < alignment) a <<= 1;
3165
    alignment = a;
3166
  }
3167
 
3168
  if (bytes >= MAX_REQUEST - alignment) {
3169
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3170
      MALLOC_FAILURE_ACTION;
3171
    }
3172
  }
3173
  else {
3174
    size_t nb = request2size(bytes);
3175
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3176
    char* mem = (char*)internal_malloc(m, req);
3177
    if (mem != 0) {
3178
      void* leader = 0;
3179
      void* trailer = 0;
3180
      mchunkptr p = mem2chunk(mem);
3181
 
3182
      if (PREACTION(m)) return 0;
3183
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3184
        /*
3185
          Find an aligned spot inside chunk.  Since we need to give
3186
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3187
          the first calculation places us at a spot with less than
3188
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3189
          We've allocated enough total room so that this is always
3190
          possible.
3191
        */
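        /*
          Worked example (assuming 4-byte size_t, 8-byte alignment and a
          16-byte MIN_CHUNK_SIZE): for alignment == 64 and mem == 0x20008,
          the aligned spot is (0x20008 + 63) & -64 == 0x20040, so
          br == mem2chunk(0x20040) == 0x20038, the leader is
          br - (char*)p == 0x38 bytes >= MIN_CHUNK_SIZE, and pos == br.
        */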
3192
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3193
                                                       alignment -
3194
                                                       SIZE_T_ONE)) &
3195
                                             -alignment));
3196
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3197
          br : br+alignment;
3198
        mchunkptr newp = (mchunkptr)pos;
3199
        size_t leadsize = pos - (char*)(p);
3200
        size_t newsize = chunksize(p) - leadsize;
3201
 
3202
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3203
          newp->prev_foot = p->prev_foot + leadsize;
3204
          newp->head = (newsize|CINUSE_BIT);
3205
        }
3206
        else { /* Otherwise, give back leader, use the rest */
3207
          set_inuse(m, newp, newsize);
3208
          set_inuse(m, p, leadsize);
3209
          leader = chunk2mem(p);
3210
        }
3211
        p = newp;
3212
      }
3213
 
3214
      /* Give back spare room at the end */
3215
      if (!is_mmapped(p)) {
3216
        size_t size = chunksize(p);
3217
        if (size > nb + MIN_CHUNK_SIZE) {
3218
          size_t remainder_size = size - nb;
3219
          mchunkptr remainder = chunk_plus_offset(p, nb);
3220
          set_inuse(m, p, nb);
3221
          set_inuse(m, remainder, remainder_size);
3222
          trailer = chunk2mem(remainder);
3223
        }
3224
      }
3225
 
3226
      assert (chunksize(p) >= nb);
3227
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3228
      check_inuse_chunk(m, p);
3229
      POSTACTION(m);
3230
      if (leader != 0) {
3231
        internal_free(m, leader);
3232
      }
3233
      if (trailer != 0) {
3234
        internal_free(m, trailer);
3235
      }
3236
      return chunk2mem(p);
3237
    }
3238
  }
3239
  return 0;
3240
}
3241
 
3242
/* ------------------------ comalloc/coalloc support --------------------- */
3243
 
3244
static void** ialloc(mstate m,
3245
                     size_t n_elements,
3246
                     size_t* sizes,
3247
                     int opts,
3248
                     void* chunks[]) {
3249
  /*
3250
    This provides common support for independent_X routines, handling
3251
    all of the combinations that can result.
3252
 
3253
    The opts arg has:
3254
    bit 0 set if all elements are same size (using sizes[0])
3255
    bit 1 set if elements should be zeroed
3256
  */
3257
 
3258
  size_t    element_size;   /* chunksize of each element, if all same */
3259
  size_t    contents_size;  /* total size of elements */
3260
  size_t    array_size;     /* request size of pointer array */
3261
  void*     mem;            /* malloced aggregate space */
3262
  mchunkptr p;              /* corresponding chunk */
3263
  size_t    remainder_size; /* remaining bytes while splitting */
3264
  void**    marray;         /* either "chunks" or malloced ptr array */
3265
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3266
  flag_t    was_enabled;    /* to disable mmap */
3267
  size_t    size;
3268
  size_t    i;
3269
 
3270
  /* compute array length, if needed */
3271
  if (chunks != 0) {
3272
    if (n_elements == 0)
3273
      return chunks; /* nothing to do */
3274
    marray = chunks;
3275
    array_size = 0;
3276
  }
3277
  else {
3278
    /* if empty req, must still return chunk representing empty array */
3279
    if (n_elements == 0)
3280
      return (void**)internal_malloc(m, 0);
3281
    marray = 0;
3282
    array_size = request2size(n_elements * (sizeof(void*)));
3283
  }
3284
 
3285
  /* compute total element size */
3286
  if (opts & 0x1) { /* all-same-size */
3287
    element_size = request2size(*sizes);
3288
    contents_size = n_elements * element_size;
3289
  }
3290
  else { /* add up all the sizes */
3291
    element_size = 0;
3292
    contents_size = 0;
3293
    for (i = 0; i != n_elements; ++i)
3294
      contents_size += request2size(sizes[i]);
3295
  }
3296
 
3297
  size = contents_size + array_size;
3298
 
3299
  /*
3300
     Allocate the aggregate chunk.  First disable direct-mmapping so
3301
     malloc won't use it, since we would not be able to later
3302
     free/realloc space internal to a segregated mmap region.
3303
  */
3304
  was_enabled = use_mmap(m);
3305
  disable_mmap(m);
3306
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3307
  if (was_enabled)
3308
    enable_mmap(m);
3309
  if (mem == 0)
3310
    return 0;
3311
 
3312
  if (PREACTION(m)) return 0;
3313
  p = mem2chunk(mem);
3314
  remainder_size = chunksize(p);
3315
 
3316
  assert(!is_mmapped(p));
3317
 
3318
  if (opts & 0x2) {       /* optionally clear the elements */
3319
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3320
  }
3321
 
3322
  /* If not provided, allocate the pointer array as final part of chunk */
3323
  if (marray == 0) {
3324
    size_t  array_chunk_size;
3325
    array_chunk = chunk_plus_offset(p, contents_size);
3326
    array_chunk_size = remainder_size - contents_size;
3327
    marray = (void**) (chunk2mem(array_chunk));
3328
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3329
    remainder_size = contents_size;
3330
  }
3331
 
3332
  /* split out elements */
3333
  for (i = 0; ; ++i) {
3334
    marray[i] = chunk2mem(p);
3335
    if (i != n_elements-1) {
3336
      if (element_size != 0)
3337
        size = element_size;
3338
      else
3339
        size = request2size(sizes[i]);
3340
      remainder_size -= size;
3341
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3342
      p = chunk_plus_offset(p, size);
3343
    }
3344
    else { /* the final element absorbs any overallocation slop */
3345
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3346
      break;
3347
    }
3348
  }
3349
 
3350
#if DEBUG
3351
  if (marray != chunks) {
3352
    /* final element must have exactly exhausted chunk */
3353
    if (element_size != 0) {
3354
      assert(remainder_size == element_size);
3355
    }
3356
    else {
3357
      assert(remainder_size == request2size(sizes[i]));
3358
    }
3359
    check_inuse_chunk(m, mem2chunk(marray));
3360
  }
3361
  for (i = 0; i != n_elements; ++i)
3362
    check_inuse_chunk(m, mem2chunk(marray[i]));
3363
 
3364
#endif /* DEBUG */
3365
 
3366
  POSTACTION(m);
3367
  return marray;
3368
}
3369
 
3370
 
3371
/* -------------------------- public routines ---------------------------- */
3372
 
3373
#if !ONLY_MSPACES
3374
 
3375
void* dlmalloc(size_t bytes) {
3376
  /*
3377
     Basic algorithm:
3378
     If a small request (< 256 bytes minus per-chunk overhead):
3379
       1. If one exists, use a remainderless chunk in associated smallbin.
3380
          (Remainderless means that there are too few excess bytes to
3381
          represent as a chunk.)
3382
       2. If it is big enough, use the dv chunk, which is normally the
3383
          chunk adjacent to the one used for the most recent small request.
3384
       3. If one exists, split the smallest available chunk in a bin,
3385
          saving remainder in dv.
3386
       4. If it is big enough, use the top chunk.
3387
       5. If available, get memory from system and use it
3388
     Otherwise, for a large request:
3389
       1. Find the smallest available binned chunk that fits, and use it
3390
          if it is better fitting than dv chunk, splitting if necessary.
3391
       2. If better fitting than any binned chunk, use the dv chunk.
3392
       3. If it is big enough, use the top chunk.
3393
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3394
       5. If available, get memory from system and use it
3395
 
3396
     The ugly goto's here ensure that postaction occurs along all paths.
3397
  */
3398
 
3399
  if (!PREACTION(gm)) {
3400
    void* mem;
3401
    size_t nb;
3402
    if (bytes <= MAX_SMALL_REQUEST) {
3403
      bindex_t idx;
3404
      binmap_t smallbits;
3405
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3406
      idx = small_index(nb);
3407
      smallbits = gm->smallmap >> idx;
3408
 
3409
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3410
        mchunkptr b, p;
3411
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3412
        b = smallbin_at(gm, idx);
3413
        p = b->fd;
3414
        assert(chunksize(p) == small_index2size(idx));
3415
        unlink_first_small_chunk(gm, b, p, idx);
3416
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3417
        mem = chunk2mem(p);
3418
        check_malloced_chunk(gm, mem, nb);
3419
        goto postaction;
3420
      }
3421
 
3422
      else if (nb > gm->dvsize) {
3423
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3424
          mchunkptr b, p, r;
3425
          size_t rsize;
3426
          bindex_t i;
3427
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3428
          binmap_t leastbit = least_bit(leftbits);
3429
          compute_bit2idx(leastbit, i);
3430
          b = smallbin_at(gm, i);
3431
          p = b->fd;
3432
          assert(chunksize(p) == small_index2size(i));
3433
          unlink_first_small_chunk(gm, b, p, i);
3434
          rsize = small_index2size(i) - nb;
3435
          /* Fit here cannot be remainderless if 4byte sizes */
3436
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3437
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3438
          else {
3439
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3440
            r = chunk_plus_offset(p, nb);
3441
            set_size_and_pinuse_of_free_chunk(r, rsize);
3442
            replace_dv(gm, r, rsize);
3443
          }
3444
          mem = chunk2mem(p);
3445
          check_malloced_chunk(gm, mem, nb);
3446
          goto postaction;
3447
        }
3448
 
3449
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3450
          check_malloced_chunk(gm, mem, nb);
3451
          goto postaction;
3452
        }
3453
      }
3454
    }
3455
    else if (bytes >= MAX_REQUEST)
3456
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3457
    else {
3458
      nb = pad_request(bytes);
3459
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3460
        check_malloced_chunk(gm, mem, nb);
3461
        goto postaction;
3462
      }
3463
    }
3464
 
3465
    if (nb <= gm->dvsize) {
3466
      size_t rsize = gm->dvsize - nb;
3467
      mchunkptr p = gm->dv;
3468
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3469
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3470
        gm->dvsize = rsize;
3471
        set_size_and_pinuse_of_free_chunk(r, rsize);
3472
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3473
      }
3474
      else { /* exhaust dv */
3475
        size_t dvs = gm->dvsize;
3476
        gm->dvsize = 0;
3477
        gm->dv = 0;
3478
        set_inuse_and_pinuse(gm, p, dvs);
3479
      }
3480
      mem = chunk2mem(p);
3481
      check_malloced_chunk(gm, mem, nb);
3482
      goto postaction;
3483
    }
3484
 
3485
    else if (nb < gm->topsize) { /* Split top */
3486
      size_t rsize = gm->topsize -= nb;
3487
      mchunkptr p = gm->top;
3488
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3489
      r->head = rsize | PINUSE_BIT;
3490
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3491
      mem = chunk2mem(p);
3492
      check_top_chunk(gm, gm->top);
3493
      check_malloced_chunk(gm, mem, nb);
3494
      goto postaction;
3495
    }
3496
 
3497
    mem = sys_alloc(gm, nb);
3498
 
3499
  postaction:
3500
    POSTACTION(gm);
3501
    return mem;
3502
  }
3503
 
3504
  return 0;
3505
}
3506
 
3507
void dlfree(void* mem) {
3508
  /*
3509
     Consolidate freed chunks with preceding or succeeding bordering
3510
     free chunks, if they exist, and then place in a bin.  Intermixed
3511
     with special cases for top, dv, mmapped chunks, and usage errors.
3512
  */
3513
 
3514
  if (mem != 0) {
3515
    mchunkptr p  = mem2chunk(mem);
3516
#if FOOTERS
3517
    mstate fm = get_mstate_for(p);
3518
    if (!ok_magic(fm)) {
3519
      USAGE_ERROR_ACTION(fm, p);
3520
      return;
3521
    }
3522
#else /* FOOTERS */
3523
#define fm gm
3524
#endif /* FOOTERS */
3525
    if (!PREACTION(fm)) {
3526
      check_inuse_chunk(fm, p);
3527
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3528
        size_t psize = chunksize(p);
3529
        mchunkptr next = chunk_plus_offset(p, psize);
3530
        if (!pinuse(p)) {
3531
          size_t prevsize = p->prev_foot;
3532
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
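            /*
              pinuse is never set for directly mmapped chunks, so a clear
              pinuse bit together with IS_MMAPPED_BIT in prev_foot marks a
              chunk that owns its own mmapped region; the whole region is
              returned to the system here rather than placed in a bin.
            */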
3533
            prevsize &= ~IS_MMAPPED_BIT;
3534
            psize += prevsize + MMAP_FOOT_PAD;
3535
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
3536
              fm->footprint -= psize;
3537
            goto postaction;
3538
          }
3539
          else {
3540
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3541
            psize += prevsize;
3542
            p = prev;
3543
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3544
              if (p != fm->dv) {
3545
                unlink_chunk(fm, p, prevsize);
3546
              }
3547
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3548
                fm->dvsize = psize;
3549
                set_free_with_pinuse(p, psize, next);
3550
                goto postaction;
3551
              }
3552
            }
3553
            else
3554
              goto erroraction;
3555
          }
3556
        }
3557
 
3558
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3559
          if (!cinuse(next)) {  /* consolidate forward */
3560
            if (next == fm->top) {
3561
              size_t tsize = fm->topsize += psize;
3562
              fm->top = p;
3563
              p->head = tsize | PINUSE_BIT;
3564
              if (p == fm->dv) {
3565
                fm->dv = 0;
3566
                fm->dvsize = 0;
3567
              }
3568
              if (should_trim(fm, tsize))
3569
                sys_trim(fm, 0);
3570
              goto postaction;
3571
            }
3572
            else if (next == fm->dv) {
3573
              size_t dsize = fm->dvsize += psize;
3574
              fm->dv = p;
3575
              set_size_and_pinuse_of_free_chunk(p, dsize);
3576
              goto postaction;
3577
            }
3578
            else {
3579
              size_t nsize = chunksize(next);
3580
              psize += nsize;
3581
              unlink_chunk(fm, next, nsize);
3582
              set_size_and_pinuse_of_free_chunk(p, psize);
3583
              if (p == fm->dv) {
3584
                fm->dvsize = psize;
3585
                goto postaction;
3586
              }
3587
            }
3588
          }
3589
          else
3590
            set_free_with_pinuse(p, psize, next);
3591
          insert_chunk(fm, p, psize);
3592
          check_free_chunk(fm, p);
3593
          goto postaction;
3594
        }
3595
      }
3596
    erroraction:
3597
      USAGE_ERROR_ACTION(fm, p);
3598
    postaction:
3599
      POSTACTION(fm);
3600
    }
3601
  }
3602
#if !FOOTERS
3603
#undef fm
3604
#endif /* FOOTERS */
3605
}
3606
 
3607
void* dlcalloc(size_t n_elements, size_t elem_size) {
3608
  void* mem;
3609
  size_t req = 0;
3610
  if (n_elements != 0) {
3611
    req = n_elements * elem_size;
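    /*
      The mask test below skips the division on the common path: when both
      n_elements and elem_size fit in 16 bits, their product cannot
      overflow a size_t of 32 or more bits, so only larger operands pay
      for the req / n_elements != elem_size overflow check.
    */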
3612
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3613
        (req / n_elements != elem_size))
3614
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3615
  }
3616
  mem = dlmalloc(req);
3617
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3618
    memset(mem, 0, req);
3619
  return mem;
3620
}
3621
 
3622
void* dlrealloc(void* oldmem, size_t bytes) {
3623
  if (oldmem == 0)
3624
    return dlmalloc(bytes);
3625
#ifdef REALLOC_ZERO_BYTES_FREES
3626
  if (bytes == 0) {
3627
    dlfree(oldmem);
3628
    return 0;
3629
  }
3630
#endif /* REALLOC_ZERO_BYTES_FREES */
3631
  else {
3632
#if ! FOOTERS
3633
    mstate m = gm;
3634
#else /* FOOTERS */
3635
    mstate m = get_mstate_for(mem2chunk(oldmem));
3636
    if (!ok_magic(m)) {
3637
      USAGE_ERROR_ACTION(m, oldmem);
3638
      return 0;
3639
    }
3640
#endif /* FOOTERS */
3641
    return internal_realloc(m, oldmem, bytes);
3642
  }
3643
}
3644
 
3645
void* dlmemalign(size_t alignment, size_t bytes) {
3646
  return internal_memalign(gm, alignment, bytes);
3647
}
3648
 
3649
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3650
                                 void* chunks[]) {
3651
  size_t sz = elem_size; /* serves as 1-element array */
3652
  return ialloc(gm, n_elements, &sz, 3, chunks);
3653
}
3654
 
3655
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3656
                                   void* chunks[]) {
3657
  return ialloc(gm, n_elements, sizes, 0, chunks);
3658
}
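
/*
  Illustrative use of dlindependent_comalloc -- a sketch only; the element
  types, counts and the build_tables name below are hypothetical, not part
  of this library.  Three arrays that live and die together are obtained
  from one contiguous request instead of three separate mallocs:

  void build_tables(size_t n) {
    size_t sizes[3];
    void*  parts[3];
    sizes[0] = n * sizeof(int);     // flags
    sizes[1] = n * sizeof(double);  // weights
    sizes[2] = n * sizeof(char);    // tags
    if (dlindependent_comalloc(3, sizes, parts) != 0) {
      int*    flags   = (int*)    parts[0];
      double* weights = (double*) parts[1];
      char*   tags    = (char*)   parts[2];
      // ... use the arrays; each pointer may later be passed to dlfree()
      dlfree(flags);
      dlfree(weights);
      dlfree(tags);
    }
  }
*/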
3659
 
3660
void* dlvalloc(size_t bytes) {
3661
  size_t pagesz;
3662
  init_mparams();
3663
  pagesz = mparams.page_size;
3664
  return dlmemalign(pagesz, bytes);
3665
}
3666
 
3667
void* dlpvalloc(size_t bytes) {
3668
  size_t pagesz;
3669
  init_mparams();
3670
  pagesz = mparams.page_size;
3671
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3672
}
3673
 
3674
int dlmalloc_trim(size_t pad) {
3675
  int result = 0;
3676
  if (!PREACTION(gm)) {
3677
    result = sys_trim(gm, pad);
3678
    POSTACTION(gm);
3679
  }
3680
  return result;
3681
}
3682
 
3683
size_t dlmalloc_footprint(void) {
3684
  return gm->footprint;
3685
}
3686
 
3687
size_t dlmalloc_max_footprint(void) {
3688
  return gm->max_footprint;
3689
}
3690
 
3691
#if !NO_MALLINFO
3692
struct mallinfo dlmallinfo(void) {
3693
  return internal_mallinfo(gm);
3694
}
3695
#endif /* NO_MALLINFO */
3696
 
3697
//void dlmalloc_stats() {
3698
//  internal_malloc_stats(gm);
3699
//}
3700
 
3701
size_t dlmalloc_usable_size(void* mem) {
3702
  if (mem != 0) {
3703
    mchunkptr p = mem2chunk(mem);
3704
    if (cinuse(p))
3705
      return chunksize(p) - overhead_for(p);
3706
  }
3707
  return 0;
3708
}
3709
 
3710
int dlmallopt(int param_number, int value) {
3711
  return change_mparam(param_number, value);
3712
}
3713
 
3714
#endif /* !ONLY_MSPACES */
3715
 
3716
/* ----------------------------- user mspaces ---------------------------- */
3717
 
3718
#if MSPACES
3719
#endif /* MSPACES */
3720
 
3721
/* -------------------- Alternative MORECORE functions ------------------- */
3722
 
3723
/*
3724
  Guidelines for creating a custom version of MORECORE:
3725
 
3726
  * For best performance, MORECORE should allocate in multiples of pagesize.
3727
  * MORECORE may allocate more memory than requested. (Or even less,
3728
      but this will usually result in a malloc failure.)
3729
  * MORECORE must not allocate memory when given argument zero, but
3730
      instead return one past the end address of memory from previous
3731
      nonzero call.
3732
  * For best performance, consecutive calls to MORECORE with positive
3733
      arguments should return increasing addresses, indicating that
3734
      space has been contiguously extended.
3735
  * Even though consecutive calls to MORECORE need not return contiguous
3736
      addresses, it must be OK for malloc'ed chunks to span multiple
3737
      regions in those cases where they do happen to be contiguous.
3738
  * MORECORE need not handle negative arguments -- it may instead
3739
      just return MFAIL when given negative arguments.
3740
      Negative arguments are always multiples of pagesize. MORECORE
3741
      must not misinterpret negative args as large positive unsigned
3742
      args. You can suppress all such calls from even occurring by defining
3743
      MORECORE_CANNOT_TRIM.
3744
 
3745
  As an example alternative MORECORE, here is a custom allocator
3746
  kindly contributed for pre-OSX macOS.  It uses virtually but not
3747
  necessarily physically contiguous non-paged memory (locked in,
3748
  present and won't get swapped out).  You can use it by uncommenting
3749
  this section, adding some #includes, and setting up the appropriate
3750
  defines above:
3751
 
3752
      #define MORECORE osMoreCore
3753
 
3754
  There is also a shutdown routine that should somehow be called for
3755
  cleanup upon program exit.
3756
 
3757
  #define MAX_POOL_ENTRIES 100
3758
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
3759
  static int next_os_pool;
3760
  void *our_os_pools[MAX_POOL_ENTRIES];
3761
 
3762
  void *osMoreCore(int size)
3763
  {
3764
    void *ptr = 0;
3765
    static void *sbrk_top = 0;
3766
 
3767
    if (size > 0)
3768
    {
3769
      if (size < MINIMUM_MORECORE_SIZE)
3770
         size = MINIMUM_MORECORE_SIZE;
3771
      if (CurrentExecutionLevel() == kTaskLevel)
3772
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
3773
      if (ptr == 0)
3774
      {
3775
        return (void *) MFAIL;
3776
      }
3777
      // save ptrs so they can be freed during cleanup
3778
      our_os_pools[next_os_pool] = ptr;
3779
      next_os_pool++;
3780
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
3781
      sbrk_top = (char *) ptr + size;
3782
      return ptr;
3783
    }
3784
    else if (size < 0)
3785
    {
3786
      // we don't currently support shrink behavior
3787
      return (void *) MFAIL;
3788
    }
3789
    else
3790
    {
3791
      return sbrk_top;
3792
    }
3793
  }
3794
 
3795
  // cleanup any allocated memory pools
3796
  // called as last thing before shutting down driver
3797
 
3798
  void osCleanupMem(void)
3799
  {
3800
    void **ptr;
3801
 
3802
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
3803
      if (*ptr)
3804
      {
3805
         PoolDeallocate(*ptr);
3806
         *ptr = 0;
3807
      }
3808
  }
3809
 
3810
*/
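
/*
  A further minimal sketch of a custom MORECORE over a fixed static arena,
  illustrating the contract described above (a zero argument returns the
  current break, positive arguments extend it, negative arguments may
  simply fail).  The arena size and the name static_morecore are
  hypothetical; to try it, define MORECORE as static_morecore and define
  MORECORE_CANNOT_TRIM:

  static char  static_arena[1024 * 1024];
  static char* static_brk = static_arena;

  void* static_morecore(int increment)
  {
    char* old_brk = static_brk;
    if (increment < 0)
      return (void*) MFAIL;          // trimming not supported
    if (increment == 0)
      return old_brk;                // report current break
    if ((size_t) increment > sizeof(static_arena) -
                             (size_t)(static_brk - static_arena))
      return (void*) MFAIL;          // arena exhausted
    static_brk += increment;
    return old_brk;                  // start of the newly extended region
  }
*/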
3811
 
3812
 
3813
/* -----------------------------------------------------------------------
3814
History:
3815
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
3816
      * Add max_footprint functions
3817
      * Ensure all appropriate literals are size_t
3818
      * Fix conditional compilation problem for some #define settings
3819
      * Avoid concatenating segments with the one provided
3820
        in create_mspace_with_base
3821
      * Rename some variables to avoid compiler shadowing warnings
3822
      * Use explicit lock initialization.
3823
      * Better handling of sbrk interference.
3824
      * Simplify and fix segment insertion, trimming and mspace_destroy
3825
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
3826
      * Thanks especially to Dennis Flanagan for help on these.
3827
 
3828
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
3829
      * Fix memalign brace error.
3830
 
3831
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
3832
      * Fix improper #endif nesting in C++
3833
      * Add explicit casts needed for C++
3834
 
3835
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
3836
      * Use trees for large bins
3837
      * Support mspaces
3838
      * Use segments to unify sbrk-based and mmap-based system allocation,
3839
        removing need for emulation on most platforms without sbrk.
3840
      * Default safety checks
3841
      * Optional footer checks. Thanks to William Robertson for the idea.
3842
      * Internal code refactoring
3843
      * Incorporate suggestions and platform-specific changes.
3844
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
3845
        Aaron Bachmann,  Emery Berger, and others.
3846
      * Speed up non-fastbin processing enough to remove fastbins.
3847
      * Remove useless cfree() to avoid conflicts with other apps.
3848
      * Remove internal memcpy, memset. Compilers handle builtins better.
3849
      * Remove some options that no one ever used and rename others.
3850
 
3851
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
3852
      * Fix malloc_state bitmap array misdeclaration
3853
 
3854
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
3855
      * Allow tuning of FIRST_SORTED_BIN_SIZE
3856
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
3857
      * Better detection and support for non-contiguousness of MORECORE.
3858
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
3859
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
3860
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
3861
      * Raised default trim and map thresholds to 256K.
3862
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
3863
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
3864
      * Branch-free bin calculation
3865
      * Default trim and mmap thresholds now 256K.
3866
 
3867
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
3868
      * Introduce independent_comalloc and independent_calloc.
3869
        Thanks to Michael Pachos for motivation and help.
3870
      * Make optional .h file available
3871
      * Allow > 2GB requests on 32bit systems.
3872
      * new WIN32 sbrk, mmap, munmap, lock code from .
3873
        Thanks also to Andreas Mueller ,
3874
        and Anonymous.
3875
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
3876
        helping test this.)
3877
      * memalign: check alignment arg
3878
      * realloc: don't try to shift chunks backwards, since this
3879
        leads to  more fragmentation in some programs and doesn't
3880
        seem to help in any others.
3881
      * Collect all cases in malloc requiring system memory into sysmalloc
3882
      * Use mmap as backup to sbrk
3883
      * Place all internal state in malloc_state
3884
      * Introduce fastbins (although similar to 2.5.1)
3885
      * Many minor tunings and cosmetic improvements
3886
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
3887
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
3888
        Thanks to Tony E. Bennett  and others.
3889
      * Include errno.h to support default failure action.
3890
 
3891
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
3892
      * return null for negative arguments
3893
      * Added Several WIN32 cleanups from Martin C. Fong 
3894
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3895
          (e.g. WIN32 platforms)
3896
         * Cleanup header file inclusion for WIN32 platforms
3897
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3898
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3899
           memory allocation routines
3900
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3901
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3902
           usage of 'assert' in non-WIN32 code
3903
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3904
           avoid infinite loop
3905
      * Always call 'fREe()' rather than 'free()'
3906
 
3907
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
3908
      * Fixed ordering problem with boundary-stamping
3909
 
3910
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
3911
      * Added pvalloc, as recommended by H.J. Liu
3912
      * Added 64bit pointer support mainly from Wolfram Gloger
3913
      * Added anonymously donated WIN32 sbrk emulation
3914
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3915
      * malloc_extend_top: fix mask error that caused wastage after
3916
        foreign sbrks
3917
      * Add linux mremap support code from HJ Liu
3918
 
3919
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
3920
      * Integrated most documentation with the code.
3921
      * Add support for mmap, with help from
3922
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3923
      * Use last_remainder in more cases.
3924
      * Pack bins using idea from  colin@nyx10.cs.du.edu
3925
      * Use ordered bins instead of best-fit threshold
3926
      * Eliminate block-local decls to simplify tracing and debugging.
3927
      * Support another case of realloc via move into top
3928
      * Fix error occurring when initial sbrk_base not word-aligned.
3929
      * Rely on page size for units instead of SBRK_UNIT to
3930
        avoid surprises about sbrk alignment conventions.
3931
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3932
        (raymond@es.ele.tue.nl) for the suggestion.
3933
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3934
      * More precautions for cases where other routines call sbrk,
3935
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3936
      * Added macros etc., allowing use in linux libc from
3937
        H.J. Lu (hjl@gnu.ai.mit.edu)
3938
      * Inverted this history list
3939
 
3940
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
3941
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3942
      * Removed all preallocation code since under current scheme
3943
        the work required to undo bad preallocations exceeds
3944
        the work saved in good cases for most test programs.
3945
      * No longer use return list or unconsolidated bins since
3946
        no scheme using them consistently outperforms those that don't
3947
        given above changes.
3948
      * Use best fit for very large chunks to prevent some worst-cases.
3949
      * Added some support for debugging
3950
 
3951
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
3952
      * Removed footers when chunks are in use. Thanks to
3953
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3954
 
3955
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
3956
      * Added malloc_trim, with help from Wolfram Gloger
3957
        (wmglo@Dent.MED.Uni-Muenchen.DE).
3958
 
3959
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
3960
 
3961
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
3962
      * realloc: try to expand in both directions
3963
      * malloc: swap order of clean-bin strategy;
3964
      * realloc: only conditionally expand backwards
3965
      * Try not to scavenge used bins
3966
      * Use bin counts as a guide to preallocation
3967
      * Occasionally bin return list chunks in first scan
3968
      * Add a few optimizations from colin@nyx10.cs.du.edu
3969
 
3970
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
3971
      * faster bin computation & slightly different binning
3972
      * merged all consolidations to one part of malloc proper
3973
         (eliminating old malloc_find_space & malloc_clean_bin)
3974
      * Scan 2 returns chunks (not just 1)
3975
      * Propagate failure in realloc if malloc returns 0
3976
      * Add stuff to allow compilation on non-ANSI compilers
3977
          from kpv@research.att.com
3978
 
3979
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
3980
      * removed potential for odd address access in prev_chunk
3981
      * removed dependency on getpagesize.h
3982
      * misc cosmetics and a bit more internal documentation
3983
      * anticosmetics: mangled names in macros to evade debugger strangeness
3984
      * tested on sparc, hp-700, dec-mips, rs6000
3985
          with gcc & native cc (hp, dec only) allowing
3986
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3987
 
3988
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
3989
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3990
         structure of old version,  but most details differ.)
3991
 
3992
*/