Subversion Repositories Kolibri OS


Diff of malloc.c: Rev 1408 → Rev 1616
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

  Rev 1408: * Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
  Rev 1616: * Version 2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at

  ...

  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
  Rev 1408:    ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  Rev 1616:    ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.4.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about

  ...

  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.
Rev 1408 only (this block was removed in Rev 1616):

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the checks preventing
       writes to statics that are always on.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
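For example (an illustrative sketch, not from this file): on an LP64
platform where int is 32 bits, an unsuffixed literal product overflows
as an int before it is ever widened, while casting each operand keeps
the whole computation in size_t precision:

    /* Hypothetical constant: 16GB expressed as a size_t value.  The
       first form multiplies 32-bit ints and overflows before the
       result is converted; the second stays in size_t throughout. */
    #define BAD_SIXTEEN_GB  (16 * 1024 * 1024 * 1024)
    #define SIXTEEN_GB      ((size_t)16 * (size_t)1024 * (size_t)1024 * (size_t)1024)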
WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
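A minimal usage sketch (assuming the file was built with
-DUSE_DL_PREFIX; the surrounding application code is hypothetical):

    void* p = dlmalloc(100);   /* served by this malloc */
    void* q = malloc(100);     /* served by the system malloc */
    dlfree(p);
    free(q);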
ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to be able to
  return memory because there is none available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.
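As an illustration only (not the guidelines referred to above), a
minimal MORECORE-style emulation over a fixed static arena might look
like the sketch below; the arena size and all names are hypothetical:

    static char   my_arena[1024 * 1024];   /* hypothetical fixed arena */
    static size_t my_top = 0;

    void* my_morecore(int increment) {
      if (increment >= 0) {
        if (my_top + (size_t)increment > sizeof(my_arena))
          return (void*)(~(size_t)0);      /* the MFAIL convention */
        char* prev = my_arena + my_top;    /* like sbrk, return old break */
        my_top += (size_t)increment;
        return prev;
      }
      /* Negative increment: release space from the top (trimming);
         callers mainly check the result against MFAIL. */
      my_top -= (size_t)(-increment);
      return my_arena + my_top;
    }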
MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves the time
  and possibly wasted space it would take to discover this though.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set it to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

Common to both revisions:

*/

Rev 1408:

#ifdef KOLIBRI

#define IMPORT __attribute__ ((stdcall)) __attribute__ ((dllimport))

void*  IMPORT KernelAlloc(unsigned size)__asm__("KernelAlloc");
void   IMPORT KernelFree(void *mem)__asm__("KernelFree");

#else

#define IMPORT __attribute__ ((dllimport))

void* __fastcall IMPORT  mem_alloc(unsigned size, unsigned flags)__asm__("MemAlloc");
void  __fastcall IMPORT  mem_free(void *mem)__asm__("MemFree");

#endif

#define MALLOC_ALIGNMENT ((size_t)8U)
#define DEFAULT_MMAP_THRESHOLD  ((size_t)32U * (size_t)1024U)
#define NO_MALLINFO  1
#define MORECORE_CANNOT_TRIM
#define FOOTERS 0

Rev 1616:

#include 
#include 
#include 

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
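To illustrate how the boundary-tag fields are used (an explanatory
sketch only, not code from either revision; the real file derives
these values with its own macros and flag definitions):

    /* The low bits of `head` hold the status flags; masking them off
       yields the chunk size, and the next chunk starts that many
       bytes past the current one: */
    #define SKETCH_FLAG_MASK  ((size_t)7)
    #define SKETCH_SIZE(p)    ((p)->head & ~SKETCH_FLAG_MASK)
    #define SKETCH_NEXT(p)    ((mchunkptr)((char*)(p) + SKETCH_SIZE(p)))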

Rev 1408:

#define ABORT

#undef  WIN32
#undef  _WIN32

typedef unsigned int size_t;

#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

Rev 1616:

typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */

Rev 1408 only:

#ifndef LACKS_SYS_TYPES_H
#include   /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif  /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif  /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */


/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)
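For instance (a sketch; the chosen value is arbitrary), a program can
raise the mmap threshold at startup and check that the value was
accepted:

    if (mallopt(M_MMAP_THRESHOLD, 1024 * 1024) != 1) {
      /* the value was not accepted; the default remains in effect */
    }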

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
#endif /* NO_MALLINFO */

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);
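A typical call pattern, shown as a sketch (the request size is
hypothetical):

    size_t n = 1024;
    void* p = malloc(n);
    if (p == 0) {
      /* no space available; on ANSI C systems errno is ENOMEM */
    }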

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/

void* dlrealloc(void*, size_t);
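Because a failed realloc leaves p allocated and intact, callers should
keep the old pointer until the call is known to have succeeded; a
common sketch:

    void* np = realloc(p, n);
    if (np == 0) {
      /* p is still valid and still allocated; recover or free(p) */
    } else {
      p = np;                  /* the old value of p must not be reused */
    }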

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);
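For example (a sketch), to obtain a 64-byte-aligned block:

    void* p = memalign(64, 1000);       /* 64 is a power of two */
    if (p != 0) {
      /* ((size_t)p & 63) == 0 here */
      free(p);
    }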

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters.  The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
size_t dlmalloc_footprint(void);
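A sketch of using it as a cheap monitor:

    size_t before = malloc_footprint();
    /* ... perform a batch of allocations ... */
    size_t delta = malloc_footprint() - before;  /* bytes newly obtained
                                                    from the system */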

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
size_t dlmalloc_max_footprint(void);

#if !NO_MALLINFO
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use regular calloc and assign pointers into this
  space to represent elements.  (In this case though, you cannot
  independently free elements.)

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int n = read_number_of_nodes_needed();
    int i;
    if (n <= 0) return 0;
    pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.    It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be individually freed when it is no longer
  needed. If you'd like to instead be able to free all at once, you
  should instead use a single regular malloc, and assign pointers at
  particular offsets in the aggregate space. (In this case though, you
  cannot independently free elements.)

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... }
  struct Foot { ... }

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);


/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
 */
void*  dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  Malloc_trim returns 1 if it actually released any memory, else 0.
*/
int  dlmalloc_trim(size_t);
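For example (a sketch; big_buffer is a hypothetical large allocation),
after releasing a large buffer a program might return memory to the
system while keeping some trailing headroom:

    free(big_buffer);
    if (malloc_trim(64 * 1024)) {
      /* some memory was actually returned to the system; 64KB of
         trailing space was kept to service future requests */
    }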

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

  p = malloc(n);
  assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(void*);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
void  dlmalloc_stats(void);

#endif /* ONLY_MSPACES */

#if MSPACES
#endif /* MSPACES */

#ifdef __cplusplus
};  /* end of extern "C" */
#endif /* __cplusplus */

/*
  ========================================================================
  To make a fully customizable malloc.h header file, cut everything
  above this line, put into file malloc.h, edit to suit, and #include it
  on the next line, as well as in programs that use this malloc.
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef WIN32
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* WIN32 */

//#include        /* for printing in malloc_stats */

#if 0

#ifndef LACKS_ERRNO_H
#include        /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#if FOOTERS
#include         /* for magic initialization */
#endif /* FOOTERS */
#ifndef LACKS_STDLIB_H
#include       /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include 
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#define assert(x)
#endif /* DEBUG */
#ifndef LACKS_STRING_H
#include       /* for memset etc */
#endif  /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include      /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
#include     /* for mmap */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include 
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#if HAVE_MORECORE
#endif /* HAVE_MORECORE */

#endif

Rev 1408:

#define assert(x)

#ifndef WIN32
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)

Rev 1616:

typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- size_t and alignment properties -------------------- */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE             (sizeof(size_t))
#define SIZE_T_BITSIZE          (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO             ((size_t)0)
#define SIZE_T_ONE              ((size_t)1)
#define SIZE_T_TWO              ((size_t)2)
#define SIZE_T_FOUR             ((size_t)4)
#define TWO_SIZE_T_SIZES        (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES       (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES        (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T         (MAX_SIZE_T / 2U)

#define USE_LOCK_BIT            (2U)
#define USE_MMAP_BIT            (SIZE_T_ONE)
#define USE_NONCONTIGUOUS_BIT   (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT              (8U)

#define HAVE_MMAP               1
#define CALL_MMAP(s)            MMAP_DEFAULT(s)
#define CALL_MUNMAP(a, s)       MUNMAP_DEFAULT((a), (s))
#define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
#define MAX_RELEASE_CHECK_RATE  4095

Common to both revisions:

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
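A worked instance (illustrative comment only), assuming MALLOC_ALIGNMENT
is 8 and CHUNK_ALIGN_MASK is 7:

    /* align_offset(0x1003): 0x1003 & 7 == 3, so the offset is
       (8 - 3) & 7 == 5, and 0x1003 + 5 == 0x1008 is 8-byte aligned.
       For an already-aligned address such as 0x1008 the macro
       returns 0. */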

Rev 1408:

/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so compiler optimizer can delete code rather than
   using so many "#if"s.
*/

/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if !HAVE_MMAP
#else /* HAVE_MMAP */
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
#define USE_MMAP_BIT         (SIZE_T_ONE)


#ifdef KOLIBRI

/* Win32 MMAP via VirtualAlloc */
static void* win32mmap(size_t size) {
  void* ptr = KernelAlloc(size);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static void* win32direct_mmap(size_t size) {
  void* ptr = KernelAlloc(size);
  return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static int win32munmap(void* ptr, size_t size) {
  KernelFree(ptr);
  return 0;
}

#else

static void* win32mmap(size_t size) {
    void* ptr = mem_alloc(size, 3);
    return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static void* win32direct_mmap(size_t size) {
    void* ptr = mem_alloc(size, 3);
    return (ptr != 0)? ptr: MFAIL;
}

/* This function supports releasing coalesced segments */
static int win32munmap(void* ptr, size_t size) {
    mem_free(ptr);
    return 0;
}

#endif

#define CALL_MMAP(s)         win32mmap(s)
#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
#define DIRECT_MMAP(s)       win32direct_mmap(s)
#endif /* HAVE_MMAP */

#if HAVE_MMAP && HAVE_MREMAP
#else  /* HAVE_MMAP && HAVE_MREMAP */
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

#if HAVE_MORECORE
#else  /* HAVE_MORECORE */
#define CALL_MORECORE(S)     MFAIL
#endif /* HAVE_MORECORE */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

#if USE_LOCKS
#else  /* USE_LOCKS */
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)
#endif /* USE_LOCKS */

#if USE_LOCKS && HAVE_MORECORE
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
#else /* USE_LOCKS && HAVE_MORECORE */
#define ACQUIRE_MORECORE_LOCK()
#define RELEASE_MORECORE_LOCK()
#endif /* USE_LOCKS && HAVE_MORECORE */

#if USE_LOCKS
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
#else  /* USE_LOCKS */
#define ACQUIRE_MAGIC_INIT_LOCK()
#define RELEASE_MAGIC_INIT_LOCK()
#endif /* USE_LOCKS */

Rev 1616:

#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 0)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have both cinuse and pinuse
        bits cleared in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.
*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
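
/*
  Illustrative sketch (not part of the original source): the struct is only
  a "view" -- any suitably aligned address inside an arena can be read
  through it.  The helper name is hypothetical.
*/
#if 0   /* example only -- not compiled into the allocator */
static size_t example_read_head(void* base) {
  mchunkptr p = (mchunkptr)base;   /* overlay the view onto raw memory */
  return p->head;                  /* chunk size plus PINUSE/CINUSE bits */
}
#endif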

typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)

/* For sys_alloc, enough padding to ensure can malloc request on success */
#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
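
/*
  Illustrative sketch (not part of the original source): hand-worked values
  assuming a 32-bit build with 8-byte alignment and CHUNK_OVERHEAD == 4
  (pad_request adds the overhead and rounds up to an alignment boundary).
*/
#if 0   /* example only -- not compiled into the allocator */
static void example_request2size(void) {
  size_t a = request2size(1);    /* below MIN_REQUEST -> MIN_CHUNK_SIZE (16) */
  size_t b = request2size(13);   /* (13 + 4 + 7) & ~7  -> 24 */
  (void)a; (void)b;
}
#endif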

/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
  use, unless mmapped, in which case both bits are cleared.

  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define FLAG4_BIT           (SIZE_T_FOUR)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
#define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)

#define chunksize(p)        ((p)->head & ~(FLAG_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */
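
/*
  Illustrative sketch (not part of the original source): reading boundary
  tags with the macros above.  mem2chunk() is defined in an elided part of
  this file; the helper below is hypothetical and only shows how the head
  bits are typically consulted.
*/
#if 0   /* example only -- not compiled into the allocator */
static void example_walk_neighbors(void* mem) {
  mchunkptr p   = mem2chunk(mem);    /* header sits just below the payload */
  size_t    sz  = chunksize(p);      /* size with flag bits masked off */
  mchunkptr nxt = next_chunk(p);     /* physical successor at (char*)p + sz */
  if (!pinuse(p)) {
    /* previous chunk is free, so prev_foot mirrors its size */
    mchunkptr prv = prev_chunk(p);
    (void)prv;
  }
  (void)sz; (void)nxt;
}
#endif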

/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small"  chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top nose) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.
*/

typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
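
/*
  Illustrative sketch (not part of the original source): the leftmost-leaf
  walk described above.  struct malloc_tree_chunk and tchunkptr are declared
  in an elided part of this file; the helper name is hypothetical.
*/
#if 0   /* example only -- not compiled into the allocator */
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  /* the smallest size lies somewhere along the leftmost-leaf path,
     so track the minimum while walking it */
  tchunkptr best = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;
  }
  return best;
}
#endif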

/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from the system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If USE_MMAP_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/*
  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Trim support
    Fields holding the amount of unused topmost memory that should trigger
    trimming, and a counter to force periodic scanning to release unused
    non-topmost segments.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.

  Extension support
    A void* pointer and a size_t field that can be used to help implement
    extensions to this malloc.
*/

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     release_checks;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
  struct mutex  lock;     /* locate lock among fields that rarely change */
  msegment   seg;
  void*      extp;      /* Unused but available for extensions */
  size_t     exts;
};

typedef struct malloc_state*    mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams. Note that the non-zeroness of "magic"
  also serves as an initialization flag.
*/

struct malloc_params
{
    volatile size_t  magic;
    size_t  page_size;
    size_t  granularity;
    size_t  mmap_threshold;
    size_t  trim_threshold;
    flag_t  default_mflags;
};

static struct malloc_params mparams;
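
/*
  Illustrative sketch (not part of the original source): mparams.magic
  doubles as the "already initialized" flag, so public entry points can use
  ensure_initialization() (defined just below) before reading any field.
  The helper name is hypothetical.
*/
#if 0   /* example only -- not compiled into the allocator */
static size_t example_query_page_size(void) {
  ensure_initialization();      /* runs init_mparams() at most once */
  return mparams.page_size;
}
#endif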

#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)

#define is_initialized(M)  ((M)->top != 0)

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))

/* assign tree index for size S to variable I */
#define compute_tree_index(S, I)\
{\
  unsigned int X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl\t%1, %0\n\t" : "=r" (K) : "g"  (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))

/* index corresponding to given bit. Use x86 asm if possible */
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "g" (X));\
  I = (bindex_t)J;\
}
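
/*
  Illustrative sketch (not part of the original source): hand-worked uses of
  the two asm-backed macros above, assuming TREEBIN_SHIFT == 8 (defined in
  an elided part of this file).  bsrl finds the highest set bit and bsfl the
  lowest; the helper name is hypothetical.
*/
#if 0   /* example only -- not compiled into the allocator */
static void example_bin_indexing(void) {
  bindex_t ti, bi;
  /* 0x300 >> 8 == 3; bsrl gives K == 1; the extra low bit selects the
     upper half of the power-of-two range, so ti == (1 << 1) + 1 == 3 */
  compute_tree_index(((size_t)0x300), ti);
  /* map 0x28 == 0b101000: least_bit isolates 0b1000, bsfl returns 3 */
  compute_bit2idx(least_bit((binmap_t)0x28), bi);
  (void)ti; (void)bi;
}
#endif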

/* Macros for setting head/foot of non-mmapped chunks */

#if !FOOTERS

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
 mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
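
/*
  Illustrative sketch (not part of the original source): with FOOTERS
  enabled, the footer of an in-use chunk stores (mstate ^ mparams.magic),
  letting free() recover and validate the owning mspace.  The helper is
  hypothetical and mirrors what the internal free path does.
*/
#if 0   /* example only -- not compiled into the allocator */
static int example_owner_is(mchunkptr p, mstate m) {
  return get_mstate_for(p) == m;   /* footer xor magic recovers the owner */
}
#endif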
-
 
2109
/* ---------------------------- setting mparams -------------------------- */
-
 
2110
 
-
 
2111
/* Initialize mparams */
-
 
2112
static int init_mparams(void) {
-
 
2113
  if (mparams.page_size == 0) {
-
 
2114
    size_t s;
-
 
2115
 
-
 
2116
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
-
 
2117
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
-
 
2118
#if MORECORE_CONTIGUOUS
-
 
2119
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
-
 
2120
#else  /* MORECORE_CONTIGUOUS */
-
 
2121
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
-
 
2122
#endif /* MORECORE_CONTIGUOUS */
-
 
2123
 
-
 
2124
#if (FOOTERS && !INSECURE)
-
 
2125
    {
-
 
2126
#if USE_DEV_RANDOM
-
 
2127
      int fd;
-
 
2128
      unsigned char buf[sizeof(size_t)];
-
 
2129
      /* Try to use /dev/urandom, else fall back on using time */
-
 
2130
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
-
 
2131
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
-
 
2132
        s = *((size_t *) buf);
-
 
2133
        close(fd);
-
 
2134
      }
-
 
2135
      else
-
 
Line 2136... Line -...
2136
#endif /* USE_DEV_RANDOM */
-
 
2137
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
-
 
2138
 
-
 
2139
      s |= (size_t)8U;    /* ensure nonzero */
-
 
2140
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
-
 
2141
 
493
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2142
    }
-
 
2143
#else /* (FOOTERS && !INSECURE) */
-
 
2144
    s = (size_t)0x58585858U;
-
 
2145
#endif /* (FOOTERS && !INSECURE) */
-
 
2146
    ACQUIRE_MAGIC_INIT_LOCK();
494
 
2147
    if (mparams.magic == 0) {
-
 
2148
      mparams.magic = s;
-
 
2149
      /* Set up lock for main malloc area */
-
 
2150
      INITIAL_LOCK(&gm->mutex);
-
 
2151
      gm->mflags = mparams.default_mflags;
-
 
2152
    }
-
 
2153
    RELEASE_MAGIC_INIT_LOCK();
-
 
Line 2154... Line 495...
2154
 
495
 
2155
    mparams.page_size = 4096;
-
 
2156
    mparams.granularity = 16384;
496
#define assert(x)
2157
 
497
#define RTCHECK(e)  __builtin_expect(e, 1)
2158
    /* Sanity-check configuration:
-
 
2159
       size_t must be unsigned and as wide as pointer type.
-
 
2160
       ints must be at least 4 bytes.
498
 
2161
       alignment must be at least 8.
499
#define check_free_chunk(M,P)
2162
       Alignment, min chunk size, and page size must all be powers of 2.
500
#define check_inuse_chunk(M,P)
2163
    */
-
 
2164
    if ((sizeof(size_t) != sizeof(char*)) ||
-
 
2165
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
-
 
2166
        (sizeof(int) < 4)  ||
-
 
2167
        (MALLOC_ALIGNMENT < (size_t)8U) ||
-
 
2168
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
-
 
2169
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
-
 
2170
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
-
 
2171
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
-
 
Line 2172... Line -...
2172
      ABORT;
-
 
2173
  }
501
#define check_malloced_chunk(M,P,N)
2174
  return 0;
502
#define check_mmapped_chunk(M,P)
2175
}
-
 
2176
 
-
 
2177
/* support for mallopt */
-
 
2178
static int change_mparam(int param_number, int value) {
503
#define check_malloc_state(M)
2179
  size_t val = (size_t)value;
-
 
2180
  init_mparams();
-
 
2181
  switch(param_number) {
504
#define check_top_chunk(M,P)
2182
  case M_TRIM_THRESHOLD:
505
 
2183
    mparams.trim_threshold = val;

/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offsetted from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  that is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc). This turns out to be cheaper than relying on hashes.

  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has inuse status */
#define ok_inuse(p)      is_inuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#define CORRUPTION_ERROR_ACTION(m)                             \
        do {                                                   \
            printf("%s malloc heap corrupted\n",__FUNCTION__); \
            while(1)                                           \
            {                                                  \
                delay(100);                                    \
            }                                                  \
        }while(0)

#define USAGE_ERROR_ACTION(m, p)                               \
        do {                                                   \
            printf("%s malloc heap corrupted\n",__FUNCTION__); \
            while(1)                                           \
            {                                                  \
                delay(100);                                    \
            }                                                  \
        }while(0)
2281
  }\
607
  }\
2282
  M->dvsize = S;\
608
  M->dvsize = S;\
2283
  M->dv = P;\
609
  M->dv = P;\
2284
}
610
}
Line -... Line 611...
-
 
611
 
2285
 
612
 
Line 2286... Line 613...
2286
/* ------------------------- Operations on trees ------------------------- */
613
/* ------------------------- Operations on trees ------------------------- */
2287
 
614
 
2288
/* Insert chunk into tree */
615
/* Insert chunk into tree */
Line 2434... Line 761...
2434
#define unlink_chunk(M, P, S)\
761
#define unlink_chunk(M, P, S)\
2435
  if (is_small(S)) unlink_small_chunk(M, P, S)\
762
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2436
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
763
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
Line 2437... Line 764...
2437
 
764
 
-
 
765
 
-
 
766
/* -------------------------- system alloc setup ------------------------- */
-
 
767
 
-
 
768
/* Operations on mflags */
-
 
769
 
-
 
770
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
-
 
771
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
-
 
772
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
-
 
773
 
-
 
774
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
-
 
775
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
-
 
776
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
-
 
777
 
Line -... Line 778...
-
 
778
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
-
 
779
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
-
 
780
 
-
 
781
#define set_lock(M,L)\
-
 
782
 ((M)->mflags = (L)?\
-
 
783
  ((M)->mflags | USE_LOCK_BIT) :\
2438
 
784
  ((M)->mflags & ~USE_LOCK_BIT))
-
 
785
 
-
 
786
/* page-align a size */
-
 
787
#define page_align(S)\
-
 
788
 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
-
 
789
 
-
 
790
/* granularity-align a size */
-
 
791
#define granularity_align(S)\
-
 
792
  (((S) + (mparams.granularity - SIZE_T_ONE))\
-
 
793
   & ~(mparams.granularity - SIZE_T_ONE))

/* For mmap, use granularity alignment  */
#define mmap_align(S) granularity_align(S)

/* For sys_alloc, enough padding to ensure can malloc request on success */
#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)

/*  True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr)
{
    msegmentptr sp = &m->seg;
    for (;;) {
        if (addr >= sp->base && addr < sp->base + sp->size)
            return sp;
        if ((sp = sp->next) == 0)
            return 0;
    }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss)
{
    msegmentptr sp = &m->seg;
    for (;;) {
        if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
            return 1;
        if ((sp = sp->next) == 0)
            return 0;
    }
}

static inline void* os_mmap(size_t size)
{
  void* ptr = KernelAlloc(size);
  return (ptr != 0)? ptr: MFAIL;
}

static inline int os_munmap(void* ptr, size_t size)
{
    return (KernelFree(ptr) != 0) ? 0 : -1;
}
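
/* Note: KernelAlloc/KernelFree are the KolibriOS kernel's allocation
   primitives; os_mmap/os_munmap adapt them to the mmap-style contract
   the rest of this file expects (MFAIL on failure, 0 on successful
   unmap, -1 otherwise). */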

#define should_trim(M,s)  ((s) > (M)->trim_check)


#define MMAP_DEFAULT(s)        os_mmap(s)
#define MUNMAP_DEFAULT(a, s)   os_munmap((a), (s))
#define DIRECT_MMAP_DEFAULT(s) os_mmap(s)

#define internal_malloc(m, b) malloc(b)
#define internal_free(m, mem) free(mem)

/* -----------------------  Direct-mmapping chunks ----------------------- */

/*
  Directly mmapped chunks are set up with an offset to the start of
  the mmapped region stored in the prev_foot field of the chunk. This
  allows reconstruction of the required argument to MUNMAP when freed,
  and also allows adjustment of the returned chunk to meet alignment
  requirements (especially in memalign).
*/

/* Malloc using mmap */
static void* mmap_alloc(mstate m, size_t nb)
{
    size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
    if (mmsize > nb)       /* Check for wrap around 0 */
    {
        char* mm = (char*)(os_mmap(mmsize));
        if (mm != CMFAIL)
        {
            size_t offset = align_offset(chunk2mem(mm));
            size_t psize = mmsize - offset - MMAP_FOOT_PAD;
            mchunkptr p = (mchunkptr)(mm + offset);
            p->prev_foot = offset;
            p->head = psize;
            mark_inuse_foot(m, p, psize);
            chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
            chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
Line 2491... Line 892...
        }
    }
    return 0;
}
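
/* Layout sketch for a directly mmapped chunk (illustrative):

      mm                 p = mm + offset               p + psize
      |<---- offset ---->|<----------- psize --------->|
      |  alignment pad   | head=psize | user payload   | FENCEPOST_HEAD | 0 |

   prev_foot remembers offset, so free() can recover the original base
   address mm and total length for the final os_munmap call. */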

/* Realloc using mmap */
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb)
{
    size_t oldsize = chunksize(oldp);
    if (is_small(nb)) /* Can't shrink mmap regions below small size */
        return 0;
  /* Keep old chunk if big enough but not too big */
    if (oldsize >= nb + SIZE_T_SIZE &&
      (oldsize - nb) <= (mparams.granularity << 1))
        return oldp;
    else
    {
        size_t offset = oldp->prev_foot;
        size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
        size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
        char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
                                  oldmmsize, newmmsize, 1);
        if (cp != CMFAIL)
        {
            mchunkptr newp = (mchunkptr)(cp + offset);
            size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
            newp->head = psize;
            mark_inuse_foot(m, newp, psize);
            chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
            chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
Line 2525... Line 928...
        }
    }
    return 0;
}

/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {

    ACQUIRE_MALLOC_GLOBAL_LOCK();

    if (mparams.magic == 0)
    {
        size_t magic;
        size_t psize;
        size_t gsize;

        psize = 4096;
        gsize = DEFAULT_GRANULARITY;

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
        mparams.granularity = gsize;
        mparams.page_size = psize;
        mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
        mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
        mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;

    /* Set up lock for main malloc area */
        gm->mflags = mparams.default_mflags;
        MutexInit(&gm->lock);

        magic = (size_t)(GetTimerTicks() ^ (size_t)0x55555555U);
        magic |= (size_t)8U;    /* ensure nonzero */
        magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
        mparams.magic = magic;
    }

    RELEASE_MALLOC_GLOBAL_LOCK();
    return 1;
}

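/* Worked example of the magic computation (illustrative tick value):
       0x0001E240 ^ 0x55555555  == 0x5554B715
       0x5554B715 | (size_t)8U  == 0x5554B71D
       0x5554B71D & ~(size_t)7U == 0x5554B718
   The result always has bit 3 set (so it is nonzero) and bits 0-2 clear,
   making accidental matches by corrupted memory less likely. */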
/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize)
{
  /* Ensure alignment */
    size_t offset = align_offset(chunk2mem(p));
Line 2543... Line 990...
    chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
    m->trim_check = mparams.trim_threshold; /* reset on each update */
}

/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m)
{
  /* Establish circular links for smallbins */
    bindex_t i;
    for (i = 0; i < NSMALLBINS; ++i) {
        sbinptr bin = smallbin_at(m,i);
        bin->fd = bin->bk = bin;
    }
}
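
/* Illustration (not part of the allocator): after init_bins, each smallbin
   header is a one-element circular list, so
       smallbin_at(m, i)->fd == smallbin_at(m, i)   means bin i is empty,
   and the first chunk freed into bin i splices itself between the header
   and its fd. */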

/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb)
{
    mchunkptr p = align_as_chunk(newbase);
    mchunkptr oldfirst = align_as_chunk(oldbase);
    size_t psize = (char*)oldfirst - (char*)p;
Line 2598... Line 1028...
        size_t dsize = m->dvsize += qsize;
        m->dv = q;
        set_size_and_pinuse_of_free_chunk(q, dsize);
    }
    else {
        if (!is_inuse(oldfirst)) {
            size_t nsize = chunksize(oldfirst);
            unlink_chunk(m, oldfirst, nsize);
            oldfirst = chunk_plus_offset(oldfirst, nsize);
            qsize += nsize;
        }
Line 2613... Line 1043...

    check_malloced_chunk(m, chunk2mem(p), nb);
    return chunk2mem(p);
}

/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped)
{
  /* Determine locations and sizes of segment, fenceposts, old top */
    char* old_top = (char*)m->top;
    msegmentptr oldsp = segment_holding(m, old_top);
Line 2670... Line 1100...
}

/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb)
{
    char* tbase = CMFAIL;
    size_t tsize = 0;
    flag_t mmap_flag = 0;

    ensure_initialization();

  /* Directly map large chunks, but only if already initialized */
    if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0)
    {
Line 2699... Line 1131...
       strategy for systems with holes in address spaces -- in this case
       sbrk cannot contiguously expand the heap, but mmap may be able to
       find space.
    3. A call to MORECORE that cannot usually contiguously extend memory.
       (disabled if not HAVE_MORECORE)

   In all cases, we need to request enough bytes from system to ensure
   we can malloc nb bytes upon success, so pad with enough space for
   top_foot, plus alignment-pad to make sure we don't lose bytes if
   not on boundary, and round this up to a granularity unit.
  */

    if (HAVE_MMAP && tbase == CMFAIL)   /* Try MMAP */
    {
        size_t rsize = granularity_align(nb + SYS_ALLOC_PADDING);
        if (rsize > nb)  /* Fail if wraps around zero */
        {
            char* mp = (char*)(CALL_MMAP(rsize));
            if (mp != CMFAIL)
            {
                tbase = mp;
                tsize = rsize;
                mmap_flag = USE_MMAP_BIT;
            }
        }
    }

    if (tbase != CMFAIL)
    {
        if ((m->footprint += tsize) > m->max_footprint)
            m->max_footprint = m->footprint;

        if (!is_initialized(m))  /* first-time initialization */
        {
            if (m->least_addr == 0 || tbase < m->least_addr)
                m->least_addr = tbase;
            m->seg.base = tbase;
            m->seg.size = tsize;
            m->seg.sflags = mmap_flag;
            m->magic = mparams.magic;
            m->release_checks = MAX_RELEASE_CHECK_RATE;
            init_bins(m);

            if (is_global(m))
                init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
            else
            {
        /* Offset top by embedded malloc_state */
                mchunkptr mn = next_chunk(mem2chunk(m));
                init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
            }
        }
        else
        {
      /* Try to merge with an existing segment */
            msegmentptr sp = &m->seg;
      /* Only consider most recent segment if traversal suppressed */
            while (sp != 0 && tbase != sp->base + sp->size)
                sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
            if (sp != 0 && !is_extern_segment(sp) &&
               (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
                segment_holds(sp, m->top))  /* append */
            {
                sp->size += tsize;
                init_top(m, m->top, m->topsize + tsize);
            }
            else
            {
                if (tbase < m->least_addr)
                    m->least_addr = tbase;
                sp = &m->seg;
                while (sp != 0 && sp->base != tbase + tsize)
                    sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
                if (sp != 0 &&
                    !is_extern_segment(sp) &&
                    (sp->sflags & USE_MMAP_BIT) == mmap_flag)
                {
                    char* oldbase = sp->base;
                    sp->base = tbase;
                    sp->size += tsize;
                    return prepend_alloc(m, tbase, oldbase, nb);
                }
                else
                    add_segment(m, tbase, tsize, mmap_flag);
            }
        }

Line 2855... Line 1228...
            check_malloced_chunk(m, chunk2mem(p), nb);
            return chunk2mem(p);
        }
    }

//    MALLOC_FAILURE_ACTION;
    return 0;
}

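/* Worked example (illustrative): with a 64KiB granularity, a request of
   nb = 100000 bytes that is not served by the direct-mmap path asks the
   system for granularity_align(100000 + SYS_ALLOC_PADDING) == 131072
   bytes; the surplus beyond nb + TOP_FOOT_SIZE becomes (or extends) the
   top chunk. */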

/* -----------------------  system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m)
{
    size_t released = 0;
    int nsegs = 0;
    msegmentptr pred = &m->seg;
    msegmentptr sp = pred->next;
    while (sp != 0)
    {
        char* base = sp->base;
        size_t size = sp->size;
        msegmentptr next = sp->next;
        ++nsegs;
        if (is_mmapped_segment(sp) && !is_extern_segment(sp))
        {
            mchunkptr p = align_as_chunk(base);
            size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
            if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE)
            {
                tchunkptr tp = (tchunkptr)p;
                assert(segment_holds(sp, (char*)sp));
                if (p == m->dv) {
                    m->dv = 0;
                    m->dvsize = 0;
                }
                else {
                    unlink_large_chunk(m, tp);
                }
                if (CALL_MUNMAP(base, size) == 0)
                {
                    released += size;
                    m->footprint -= size;
Line 2896... Line 1277...
                else { /* back out if cannot unmap */
                    insert_large_chunk(m, tp, psize);
                }
            }
        }
        if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
            break;
        pred = sp;
        sp = next;
    }
  /* Reset check counter */
    m->release_checks = ((nsegs > MAX_RELEASE_CHECK_RATE)?
                       nsegs : MAX_RELEASE_CHECK_RATE);
    return released;
}
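
/* Note (illustrative): m->release_checks throttles this scan; free()
   only walks the segment list roughly once per MAX_RELEASE_CHECK_RATE
   frees (or once per nsegs frees when there are more segments than
   that), so segment traversal stays cheap. */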

static int sys_trim(mstate m, size_t pad)
{
    size_t released = 0;
    ensure_initialization();
    if (pad < MAX_REQUEST && is_initialized(m))
    {
        pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

        if (m->topsize > pad)
        {
      /* Shrink top space in granularity-size units, keeping at least one */
            size_t unit = mparams.granularity;
            size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                             SIZE_T_ONE) * unit;
            msegmentptr sp = segment_holding(m, (char*)m->top);

            if (!is_extern_segment(sp))
            {
                if (is_mmapped_segment(sp))
                {
                    if (HAVE_MMAP &&
                        sp->size >= extra &&
                        !has_segment_link(m, sp))  /* can't shrink if pinned */
                    {
                        size_t newsize = sp->size - extra;
            /* Prefer mremap, fall back to munmap */
                        if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                            (CALL_MUNMAP(sp->base + newsize, extra) == 0))
                        {
                            released = extra;
                        }
                    }
                }
            }

            if (released != 0)
            {
                sp->size -= released;
Line 2958... Line 1337...
    /* Unmap any unused mmapped segments */
        if (HAVE_MMAP)
            released += release_unused_segments(m);

    /* On failure, disable autotrim to avoid repeated failed future calls */
        if (released == 0 && m->topsize > m->trim_check)
            m->trim_check = MAX_SIZE_T;
    }

    return (released != 0)? 1 : 0;
}

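/* Worked example (illustrative): with granularity = 64KiB, topsize =
   300KiB and pad = 0, extra = ((300KiB + 64KiB - 1)/64KiB - 1) * 64KiB
   = 256KiB, so four whole granularity units are returned to the system
   and about 44KiB (plus TOP_FOOT_SIZE) stays in top. */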

/* ---------------------------- malloc support --------------------------- */

/* allocate a large request from the best fitting chunk in a treebin */
static void* tmalloc_large(mstate m, size_t nb) {
  tchunkptr v = 0;
  size_t rsize = -nb; /* Unsigned negation */
  tchunkptr t;
  bindex_t idx;
  compute_tree_index(nb, idx);
  if ((t = *treebin_at(m, idx)) != 0) {
Line 2998... Line 1378...
        break;
      }
      sizebits <<= 1;
    }
  }
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
    if (leftbits != 0) {
      bindex_t i;
      binmap_t leastbit = least_bit(leftbits);
Line 3041... Line 1420...
  }
  return 0;
}

/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb)
{
    tchunkptr t, v;
    size_t rsize;
    bindex_t i;
    binmap_t leastbit = least_bit(m->treemap);
    compute_bit2idx(leastbit, i);
    v = t = *treebin_at(m, i);
    rsize = chunksize(t) - nb;

Line 3079... Line 1458...

    CORRUPTION_ERROR_ACTION(m);
    return 0;
}

/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes)
{
    if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
        return internal_malloc(m, bytes);
Line 3165... Line 1474...
        alignment = a;
    }

    if (bytes >= MAX_REQUEST - alignment) {
        if (m != 0)  { /* Test isn't needed but avoids compiler warning */
//            MALLOC_FAILURE_ACTION;
        }
    }
    else
    {
        size_t nb = request2size(bytes);
        size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
        char* mem = (char*)internal_malloc(m, req);
        if (mem != 0)
        {
            void* leader = 0;
            void* trailer = 0;
            mchunkptr p = mem2chunk(mem);

            PREACTION(m);

            if ((((size_t)(mem)) % alignment) != 0)  /* misaligned */
            {
        /*
          Find an aligned spot inside chunk.  Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
Line 3199... Line 1512...
                size_t leadsize = pos - (char*)(p);
                size_t newsize = chunksize(p) - leadsize;

                if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
                    newp->prev_foot = p->prev_foot + leadsize;
                    newp->head = newsize;
                }
                else { /* Otherwise, give back leader, use the rest */
                    set_inuse(m, newp, newsize);
                    set_inuse(m, p, leadsize);
                    leader = chunk2mem(p);
                }
                p = newp;
            }

      /* Give back spare room at the end */
            if (!is_mmapped(p))
            {
                size_t size = chunksize(p);
                if (size > nb + MIN_CHUNK_SIZE)
                {
                    size_t remainder_size = size - nb;
                    mchunkptr remainder = chunk_plus_offset(p, nb);
                    set_inuse(m, p, nb);
Line 3237... Line 1552...
        }
    }
    return 0;
}

void* memalign(size_t alignment, size_t bytes)
{
    return internal_memalign(gm, alignment, bytes);
}
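
/* Usage sketch (illustrative; memalign_demo is a hypothetical helper,
   not part of this file's API): */
#if 0   /* example only, not compiled */
static void memalign_demo(void)
{
    void* p = memalign(64, 1000);       /* request 1000 bytes, 64-aligned */
    if (p != 0) {
        assert(((size_t)p & 63) == 0);  /* payload address is 64-aligned */
        free(p);
    }
}
#endif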

/* -------------------------- public routines ---------------------------- */

void* malloc(size_t bytes)
{
  /*
     Basic algorithm:
Line 3394... Line 1583...
       5. If available, get memory from system and use it

     The ugly goto's here ensure that postaction occurs along all paths.
  */

    ensure_initialization(); /* initialize in sys_alloc if not using locks */

    PREACTION(gm);
    {
        void* mem;
        size_t nb;

        if (bytes <= MAX_SMALL_REQUEST)
        {
            bindex_t idx;
            binmap_t smallbits;
            nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
            idx = small_index(nb);
            smallbits = gm->smallmap >> idx;

            if ((smallbits & 0x3U) != 0) /* Remainderless fit to a smallbin. */
            {
                mchunkptr b, p;
                idx += ~smallbits & 1;       /* Uses next bin if idx empty */
Line 3416... Line 1611...
                set_inuse_and_pinuse(gm, p, small_index2size(idx));
                mem = chunk2mem(p);
                check_malloced_chunk(gm, mem, nb);
                goto postaction;
            }
            else if (nb > gm->dvsize)
            {
                if (smallbits != 0) /* Use chunk in next nonempty smallbin */
                {
                    mchunkptr b, p, r;
                    size_t rsize;
                    bindex_t i;
                    binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
                    binmap_t leastbit = least_bit(leftbits);
Line 3433... Line 1629...
                    unlink_first_small_chunk(gm, b, p, i);
                    rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
                    if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
                        set_inuse_and_pinuse(gm, p, small_index2size(i));
                    else
                    {
                        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
                        r = chunk_plus_offset(p, nb);
                        set_size_and_pinuse_of_free_chunk(r, rsize);
                        replace_dv(gm, r, rsize);
                    }
                    mem = chunk2mem(p);
                    check_malloced_chunk(gm, mem, nb);
                    goto postaction;
                }
                else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0)
                {
                    check_malloced_chunk(gm, mem, nb);
                    goto postaction;
                }
            }
        }
        else if (bytes >= MAX_REQUEST)
            nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
        else
        {
            nb = pad_request(bytes);
            if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0)
            {
                check_malloced_chunk(gm, mem, nb);
                goto postaction;
            }
        }
Line 3479... Line 1678...
            }
            mem = chunk2mem(p);
            check_malloced_chunk(gm, mem, nb);
            goto postaction;
        }
        else if (nb < gm->topsize) { /* Split top */
            size_t rsize = gm->topsize -= nb;
            mchunkptr p = gm->top;
            mchunkptr r = gm->top = chunk_plus_offset(p, nb);
            r->head = rsize | PINUSE_BIT;
Line 3502... Line 1700...
    }

    return 0;
}

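/* Worked example (illustrative; assumes a 32-bit build without chunk
   footers): malloc(20) pads the request to nb = 24 (20 bytes + 4 bytes
   of chunk overhead, rounded up to an 8-byte multiple), so
   idx = small_index(24) == 3, and the (smallbits & 0x3U) test serves the
   request remainderlessly from smallbin 3 or 4 when either is populated. */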
1706
void free(void* mem)
3507
void dlfree(void* mem) {
1707
{
3508
  /*
1708
  /*
3509
     Consolidate freed chunks with preceeding or succeeding bordering
1709
     Consolidate freed chunks with preceeding or succeeding bordering
3510
     free chunks, if they exist, and then place in a bin.  Intermixed
1710
     free chunks, if they exist, and then place in a bin.  Intermixed
Line 3511... Line 1711...
3511
     with special cases for top, dv, mmapped chunks, and usage errors.
1711
     with special cases for top, dv, mmapped chunks, and usage errors.
-
 
1712
  */
3512
  */
1713
 
3513
 
-
 
3514
  if (mem != 0) {
-
 
3515
    mchunkptr p  = mem2chunk(mem);
-
 
3516
#if FOOTERS
-
 
3517
    mstate fm = get_mstate_for(p);
-
 
3518
    if (!ok_magic(fm)) {
1714
    if (mem != 0)
3519
      USAGE_ERROR_ACTION(fm, p);
-
 
3520
      return;
1715
    {
-
 
1716
        mchunkptr p  = mem2chunk(mem);
3521
    }
1717
 
3522
#else /* FOOTERS */
1718
#define fm gm
3523
#define fm gm
1719
 
3524
#endif /* FOOTERS */
1720
        PREACTION(fm);
-
 
1721
        {
3525
    if (!PREACTION(fm)) {
            check_inuse_chunk(fm, p);
            if (RTCHECK(ok_address(fm, p) && ok_inuse(p)))
            {
                size_t psize = chunksize(p);
                mchunkptr next = chunk_plus_offset(p, psize);
                if (!pinuse(p))
                {
                    size_t prevsize = p->prev_foot;
                    if (is_mmapped(p))
                    {
                        psize += prevsize + MMAP_FOOT_PAD;
                        if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
                            fm->footprint -= psize;
                        goto postaction;
                    }
                    else
                    {
                        mchunkptr prev = chunk_minus_offset(p, prevsize);
                        psize += prevsize;
                        p = prev;
                        if (RTCHECK(ok_address(fm, prev)))  /* consolidate backward */
                        {
                            if (p != fm->dv)
                            {
                                unlink_chunk(fm, p, prevsize);
                            }
                            else if ((next->head & INUSE_BITS) == INUSE_BITS)
                            {
                                fm->dvsize = psize;
                                set_free_with_pinuse(p, psize, next);
                                goto postaction;
                            }
                        }
                        else
                            goto erroraction;
                    }
                }

                if (RTCHECK(ok_next(p, next) && ok_pinuse(next)))
                {
                    if (!cinuse(next))    /* consolidate forward */
                    {
                        if (next == fm->top)
                        {
                            size_t tsize = fm->topsize += psize;
                            fm->top = p;
                            p->head = tsize | PINUSE_BIT;
                            if (p == fm->dv)
                            {
                                fm->dv = 0;
                                fm->dvsize = 0;
                            }
                            if (should_trim(fm, tsize))
                                sys_trim(fm, 0);
                            goto postaction;
                        }
                        else if (next == fm->dv)
                        {
                            size_t dsize = fm->dvsize += psize;
                            fm->dv = p;
                            set_size_and_pinuse_of_free_chunk(p, dsize);
                            goto postaction;
                        }
                        else
                        {
                            size_t nsize = chunksize(next);
                            psize += nsize;
                            unlink_chunk(fm, next, nsize);
                            set_size_and_pinuse_of_free_chunk(p, psize);
                            if (p == fm->dv)
                            {
                                fm->dvsize = psize;
                                goto postaction;
                            }
                        }
                    }
                    else
                        set_free_with_pinuse(p, psize, next);

                    if (is_small(psize))
                    {
                        insert_small_chunk(fm, p, psize);
                        check_free_chunk(fm, p);
                    }
                    else
                    {
                        tchunkptr tp = (tchunkptr)p;
                        insert_large_chunk(fm, tp, psize);
                        check_free_chunk(fm, p);
                        if (--fm->release_checks == 0)
                            release_unused_segments(fm);
                    }
                    goto postaction;
                }
            }
    erroraction:
        USAGE_ERROR_ACTION(fm, p);
    postaction:
        POSTACTION(fm);
        }
    }
#undef fm
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
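
/*
  A minimal sketch (not part of dlmalloc) isolating the overflow guard
  used above: when either operand has bits above 0xffff the product may
  have wrapped, so it is re-checked by division.  The function name is
  hypothetical.

  #include <stddef.h>

  static int mul_would_overflow(size_t n, size_t size)
  {
      size_t req = n * size;
      // Cheap filter: two operands both below 2^16 cannot overflow a
      // 32-bit (or wider) size_t, so the division is skipped in the
      // common case of small requests.
      if (((n | size) & ~(size_t)0xffff) && n != 0 && req / n != size)
          return 1;   // n * size wrapped around
      return 0;
  }
*/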
void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}
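
/*
  Usage sketch (not part of dlmalloc): the safe-realloc idiom.  On
  failure dlrealloc returns 0 and leaves the old block valid, so the
  result must not be assigned over the only copy of the old pointer.

  void* grow(void* buf, size_t newsize)
  {
      void* p = dlrealloc(buf, newsize);
      if (p == 0) {
          // buf is still valid here; the caller decides whether to
          // keep using it or free it
          return 0;
      }
      return p;
  }
*/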
void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}
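
/*
  Usage sketch (not part of dlmalloc): request 64-byte-aligned storage.
  The alignment must be a power of two, and the block is released with
  the normal free routine.

  void* p = dlmemalign(64, 1024);   // 1 KiB, aligned to 64 bytes
  if (p != 0) {
      // ... use p ...
      dlfree(p);
  }
*/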
void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}
void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}
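
/*
  Usage sketch (not part of dlmalloc): dlindependent_comalloc allocates
  several chunks of given sizes in one shot, which tends to place them
  near each other and cuts per-call overhead.  The record layout below
  is hypothetical.

  struct Header { int n; };

  void make_record(void* out[2])
  {
      size_t sizes[2];
      sizes[0] = sizeof(struct Header);
      sizes[1] = 16 * sizeof(double);      // payload array
      if (dlindependent_comalloc(2, sizes, out) == 0)
          return;                           // allocation failed
      // out[0] and out[1] are independently freeable chunks
  }
*/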
void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}
void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
}
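
/*
  Worked example (not part of dlmalloc) of the page rounding above,
  assuming a 4096-byte page: (5000 + 4095) & ~4095 = 8192, so
  dlpvalloc(5000) requests two full pages, whereas dlvalloc(5000)
  requests exactly 5000 bytes aligned to a page boundary.
*/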
int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}
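
/*
  Usage sketch (not part of dlmalloc): after releasing a large working
  set, ask the allocator to return unused top-of-heap memory to the
  system, keeping a cushion for near-term reuse.  The 1 MiB pad is an
  arbitrary illustration.

  void release_phase_memory(void)
  {
      // ... dlfree() many large blocks ...
      dlmalloc_trim(1024 * 1024);   // keep ~1 MiB of slack in the top chunk
  }
*/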
size_t dlmalloc_footprint(void) {
  return gm->footprint;
}
size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

//void dlmalloc_stats() {
//  internal_malloc_stats(gm);
//}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}
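
/*
  Usage sketch (not part of dlmalloc): the usable size of a block may
  exceed what was requested, because requests are padded and rounded up
  to a whole chunk.

  void* p = dlmalloc(13);
  if (p != 0) {
      size_t n = dlmalloc_usable_size(p);   // n >= 13; the exact value
                                            // depends on alignment and
                                            // chunk overhead
      dlfree(p);
  }
*/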
int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES
#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */
/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.  A minimal MORECORE following these rules is
      sketched below.
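  A minimal sbrk-backed MORECORE (a sketch, not part of the original
  text; assumes a POSIX-style sbrk is available):

      #include <unistd.h>

      void* my_morecore(int size)
      {
          void* p = sbrk(size);    // with size == 0 this returns the
                                   // current break, i.e. one past the
                                   // end of the previous allocation
          if (p == (void*)-1)
              return MFAIL;        // dlmalloc's failure sentinel
          return p;
      }

      #define MORECORE my_morecore
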
  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from .
        Thanks also to Andreas Mueller ,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Use last_remainder in more cases.
      * Pack bins using idea from colin@nyx10.cs.du.edu
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        (raymond@es.ele.tue.nl) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu (hjl@gnu.ai.mit.edu)
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        (wmglo@Dent.MED.Uni-Muenchen.DE).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from colin@nyx10.cs.du.edu

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from kpv@research.att.com

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)
*/