/*
2
  This is a version (aka dlmalloc) of malloc/free/realloc written by
3
  Doug Lea and released to the public domain, as explained at
4
  http://creativecommons.org/licenses/publicdomain.  Send questions,
5
  comments, complaints, performance data, etc to dl@cs.oswego.edu
6
 
7
* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
8
 
9
   Note: There may be an updated version of this malloc obtainable at
10
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11
         Check before installing!
12
 
13
* Quickstart
14
 
15
  This library is all in one file to simplify the most common usage:
16
  ftp it, compile it (-O3), and link it into another program. All of
17
  the compile-time options default to reasonable values for use on
18
  most platforms.  You might later want to step through various
19
  compile-time and dynamic tuning options.
20
 
21
  For convenience, an include file for code using this malloc is at:
22
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23
  You don't really need this .h file unless you call functions not
24
  defined in your system include files.  The .h file contains only the
25
  excerpts from this file needed for using this malloc on ANSI C/C++
26
  systems, so long as you haven't changed compile-time options about
27
  naming and tuning parameters.  If you do, then you can create your
28
  own malloc.h that does include all settings by cutting at the point
29
  indicated below. Note that you may already by default be using a C
30
  library containing a malloc that is based on some version of this
31
  malloc (for example in linux). You might still want to use the one
32
  in this file to customize settings or to avoid overheads associated
33
  with library versions.
34
 
35
* Vital statistics:
36
 
37
  Supported pointer/size_t representation:       4 or 8 bytes
38
       size_t MUST be an unsigned type of the same width as
39
       pointers. (If you are using an ancient system that declares
40
       size_t as a signed type, or need it to be a different width
41
       than pointers, you can use a previous release of this malloc
42
       (e.g. 2.7.2) supporting these.)
43
 
44
  Alignment:                                     8 bytes (default)
45
       This suffices for nearly all current machines and C compilers.
46
       However, you can define MALLOC_ALIGNMENT to be wider than this
47
       if necessary (up to 128bytes), at the expense of using more space.
48
 
49
  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
50
                                          8 or 16 bytes (if 8byte sizes)
51
       Each malloced chunk has a hidden word of overhead holding size
52
       and status information, and additional cross-check word
53
       if FOOTERS is defined.
54
 
55
  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
56
                          8-byte ptrs:  32 bytes    (including overhead)
57
 
58
       Even a request for zero bytes (i.e., malloc(0)) returns a
59
       pointer to something of the minimum allocatable size.
60
       The maximum overhead wastage (i.e., number of extra bytes
61
       allocated than were requested in malloc) is less than or equal
62
       to the minimum size, except for requests >= mmap_threshold that
63
       are serviced via mmap(), where the worst case wastage is about
64
       32 bytes plus the remainder from a system page (the minimal
65
       mmap unit); typically 4096 or 8192 bytes.
66
 
67
  Security: static-safe; optionally more or less
68
       The "security" of malloc refers to the ability of malicious
69
       code to accentuate the effects of errors (for example, freeing
70
       space that is not currently malloc'ed or overwriting past the
71
       ends of chunks) in code that calls malloc.  This malloc
72
       guarantees not to modify any memory locations below the base of
73
       heap, i.e., static variables, even in the presence of usage
74
       errors.  The routines additionally detect most improper frees
75
       and reallocs.  All this holds as long as the static bookkeeping
76
       for malloc itself is not corrupted by some other means.  This
77
       is only one aspect of security -- these checks do not, and
78
       cannot, detect all possible programming errors.
79
 
80
       If FOOTERS is defined nonzero, then each allocated chunk
81
       carries an additional check word to verify that it was malloced
82
       from its space.  These check words are the same within each
83
       execution of a program using malloc, but differ across
84
       executions, so externally crafted fake chunks cannot be
85
       freed. This improves security by rejecting frees/reallocs that
86
       could corrupt heap memory, in addition to the checks preventing
87
       writes to statics that are always on.  This may further improve
88
       security at the expense of time and space overhead.  (Note that
89
       FOOTERS may also be worth using with MSPACES.)
90
 
91
       By default detected errors cause the program to abort (calling
92
       "abort()"). You can override this to instead proceed past
93
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
94
       has no effect, and a malloc that encounters a bad address
95
       caused by user overwrites will ignore the bad address by
96
       dropping pointers and indices to all known memory. This may
97
       be appropriate for programs that should continue if at all
98
       possible in the face of programming errors, although they may
99
       run out of memory because dropped memory is never reclaimed.
100
 
101
       If you don't like either of these options, you can define
102
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103
  else. And if you are sure that your program using malloc has
104
       no errors or vulnerabilities, you can define INSECURE to 1,
105
       which might (or might not) provide a small performance improvement.
106
 
107
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
108
       When USE_LOCKS is defined, each public call to malloc, free,
109
       etc is surrounded with either a pthread mutex or a win32
110
       spinlock (depending on WIN32). This is not especially fast, and
111
       can be a major bottleneck.  It is designed only to provide
112
       minimal protection in concurrent environments, and to provide a
113
       basis for extensions.  If you are using malloc in a concurrent
114
       program, consider instead using ptmalloc, which is derived from
115
       a version of this malloc. (See http://www.malloc.de).
116
 
117
  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118
       This malloc can use unix sbrk or any emulation (invoked using
119
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121
       memory.  On most unix systems, it tends to work best if both
122
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
123
       based on VirtualAlloc. It also uses common C library functions
124
       like memset.
125
 
126
  Compliance: I believe it is compliant with the Single Unix Specification
127
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
128
       others as well.
129
 
130
* Overview of algorithms
131
 
132
  This is not the fastest, most space-conserving, most portable, or
133
  most tunable malloc ever written. However it is among the fastest
134
  while also being among the most space-conserving, portable and
135
  tunable.  Consistent balance across these factors results in a good
136
  general-purpose allocator for malloc-intensive programs.
137
 
138
  In most ways, this malloc is a best-fit allocator. Generally, it
139
  chooses the best-fitting existing chunk for a request, with ties
140
  broken in approximately least-recently-used order. (This strategy
141
  normally maintains low fragmentation.) However, for requests less
142
  than 256bytes, it deviates from best-fit when there is not an
143
  exactly fitting available chunk by preferring to use space adjacent
144
  to that used for the previous small request, as well as by breaking
145
  ties in approximately most-recently-used order. (These enhance
146
  locality of series of small allocations.)  And for very large requests
147
  (>= 256Kb by default), it relies on system memory mapping
148
  facilities, if supported.  (This helps avoid carrying around and
149
  possibly fragmenting memory used only for large chunks.)
150
 
151
  All operations (except malloc_stats and mallinfo) have execution
152
  times that are bounded by a constant factor of the number of bits in
153
  a size_t, not counting any clearing in calloc or copying in realloc,
154
  or actions surrounding MORECORE and MMAP that have times
155
  proportional to the number of non-contiguous regions returned by
156
  system allocation routines, which is often just 1.
157
 
158
  The implementation is not very modular and seriously overuses
159
  macros. Perhaps someday all C compilers will do as good a job
160
  inlining modular code as can now be done by brute-force expansion,
161
  but now, enough of them seem not to.
162
 
163
  Some compilers issue a lot of warnings about code that is
164
  dead/unreachable only on some platforms, and also about intentional
165
  uses of negation on unsigned types. All known cases of each can be
166
  ignored.
167
 
168
  For a longer but out of date high-level description, see
169
     http://gee.cs.oswego.edu/dl/html/malloc.html
170
 
171
* MSPACES
172
  If MSPACES is defined, then in addition to malloc, free, etc.,
173
  this file also defines mspace_malloc, mspace_free, etc. These
174
  are versions of malloc routines that take an "mspace" argument
175
  obtained using create_mspace, to control all internal bookkeeping.
176
  If ONLY_MSPACES is defined, only these versions are compiled.
177
  So if you would like to use this allocator for only some allocations,
178
  and your system malloc for others, you can compile with
179
  ONLY_MSPACES and then do something like...
180
    static mspace mymspace = create_mspace(0,0); // for example
181
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
182
 
183
  (Note: If you only need one instance of an mspace, you can instead
184
  use "USE_DL_PREFIX" to relabel the global malloc.)
185
 
186
  You can similarly create thread-local allocators by storing
187
  mspaces as thread-locals. For example:
188
    static __thread mspace tlms = 0;
189
    void*  tlmalloc(size_t bytes) {
190
      if (tlms == 0) tlms = create_mspace(0, 0);
191
      return mspace_malloc(tlms, bytes);
192
    }
193
    void  tlfree(void* mem) { mspace_free(tlms, mem); }
194
 
195
  Unless FOOTERS is defined, each mspace is completely independent.
196
  You cannot allocate from one and free to another (although
197
  conformance is only weakly checked, so usage errors are not always
198
  caught). If FOOTERS is defined, then each chunk carries around a tag
199
  indicating its originating mspace, and frees are directed to their
200
  originating spaces.
201
 
202
 -------------------------  Compile-time options ---------------------------
203
 
204
Be careful in setting #define values for numerical constants of type
205
size_t. On some systems, literal values are not automatically extended
206
to size_t precision unless they are explicitly cast.
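For example, the defaults later in this file spell out 2MB as:
    #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)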
207
 
208
WIN32                    default: defined if _WIN32 defined
209
  Defining WIN32 sets up defaults for MS environment and compilers.
210
  Otherwise defaults are for unix.
211
 
212
MALLOC_ALIGNMENT         default: (size_t)8
213
  Controls the minimum alignment for malloc'ed chunks.  It must be a
214
  power of two and at least 8, even on machines for which smaller
215
  alignments would suffice. It may be defined as larger than this
216
  though. Note however that code and data structures are optimized for
217
  the case of 8-byte alignment.
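  For example (a sketch, only worthwhile if your platform actually needs it),
  16-byte alignment could be requested with:
    #define MALLOC_ALIGNMENT ((size_t)16U)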
218
 
219
MSPACES                  default: 0 (false)
220
  If true, compile in support for independent allocation spaces.
221
  This is only supported if HAVE_MMAP is true.
222
 
223
ONLY_MSPACES             default: 0 (false)
224
  If true, only compile in mspace versions, not regular versions.
225
 
226
USE_LOCKS                default: 0 (false)
227
  Causes each call to each public routine to be surrounded with
228
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
229
  overridden on a per-mspace basis for mspace versions.)
230
 
231
FOOTERS                  default: 0
232
  If true, provide extra checking and dispatching by placing
233
  information in the footers of allocated chunks. This adds
234
  space and time overhead.
235
 
236
INSECURE                 default: 0
237
  If true, omit checks for usage errors and heap space overwrites.
238
 
239
USE_DL_PREFIX            default: NOT defined
240
  Causes compiler to prefix all public routines with the string 'dl'.
241
  This can be useful when you only want to use this malloc in one part
242
  of a program, using your regular system malloc elsewhere.
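  For example (a sketch), with USE_DL_PREFIX defined you call the renamed
  entry points directly:
    void* p = dlmalloc(100);
    p = dlrealloc(p, 200);
    dlfree(p);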
243
 
244
ABORT                    default: defined as abort()
245
  Defines how to abort on failed checks.  On most systems, a failed
246
  check cannot die with an "assert" or even print an informative
247
  message, because the underlying print routines in turn call malloc,
248
  which will fail again.  Generally, the best policy is to simply call
249
  abort(). It's not very useful to do more than this because many
250
  errors due to overwriting will show up as address faults (null, odd
251
  addresses etc) rather than malloc-triggered checks, so will also
252
  abort.  Also, most compilers know that abort() does not return, so
253
  can better optimize code conditionally calling it.
254
 
255
PROCEED_ON_ERROR           default: defined as 0 (false)
256
  Controls whether detected bad addresses cause them to be bypassed
257
  rather than aborting. If set, detected bad arguments to free and
258
  realloc are ignored. And all bookkeeping information is zeroed out
259
  upon a detected overwrite of freed heap space, thus losing the
260
  ability to ever return it from malloc again, but enabling the
261
  application to proceed. If PROCEED_ON_ERROR is defined, the
262
  static variable malloc_corruption_error_count is compiled in
263
  and can be examined to see if errors have occurred. This option
264
  generates slower code than the default abort policy.
265
 
266
DEBUG                    default: NOT defined
267
  The DEBUG setting is mainly intended for people trying to modify
268
  this code or diagnose problems when porting to new platforms.
269
  However, it may also be able to better isolate user errors than just
270
  using runtime checks.  The assertions in the check routines spell
271
  out in more detail the assumptions and invariants underlying the
272
  algorithms.  The checking is fairly extensive, and will slow down
273
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274
  set will attempt to check every non-mmapped allocated and free chunk
275
  in the course of computing the summaries.
276
 
277
ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
278
  Debugging assertion failures can be nearly impossible if your
279
  version of the assert macro causes malloc to be called, which will
280
  lead to a cascade of further failures, blowing the runtime stack.
281
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282
  which will usually make debugging easier.
283
 
284
MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
285
  The action to take before "return 0" when malloc fails to be able to
286
  return memory because there is none available.
287
 
288
HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
289
  True if this system supports sbrk or an emulation of it.
290
 
291
MORECORE                  default: sbrk
292
  The name of the sbrk-style system routine to call to obtain more
293
  memory.  See below for guidance on writing custom MORECORE
294
  functions. The type of the argument to sbrk/MORECORE varies across
295
  systems.  It cannot be size_t, because it supports negative
296
  arguments, so it is normally the signed type of the same width as
297
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
298
  though. Internally, we only call it with arguments less than half
299
  the max value of a size_t, which should work across all reasonable
300
  possibilities, although sometimes generating compiler warnings.  See
301
  near the end of this file for guidelines for creating a custom
302
  version of MORECORE.
303
 
304
MORECORE_CONTIGUOUS       default: 1 (true)
305
  If true, take advantage of fact that consecutive calls to MORECORE
306
  with positive arguments always return contiguous increasing
307
  addresses.  This is true of unix sbrk. It does not hurt too much to
308
  set it true anyway, since malloc copes with non-contiguities.
309
  Setting it false when definitely non-contiguous saves time
310
  and possibly wasted space it would take to discover this though.
311
 
312
MORECORE_CANNOT_TRIM      default: NOT defined
313
  True if MORECORE cannot release space back to the system when given
314
  negative arguments. This is generally necessary only if you are
315
  using a hand-crafted MORECORE function that cannot handle negative
316
  arguments.
317
 
318
HAVE_MMAP                 default: 1 (true)
319
  True if this system supports mmap or an emulation of it.  If so, and
320
  HAVE_MORECORE is not true, MMAP is used for all system
321
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
322
  primarily used to directly allocate very large blocks. It is also
323
  used as a backup strategy in cases where MORECORE fails to provide
324
  space from system. Note: A single call to MUNMAP is assumed to be
325
  able to unmap memory that may have been allocated using multiple calls
326
  to MMAP, so long as they are adjacent.
327
 
328
HAVE_MREMAP               default: 1 on linux, else 0
329
  If true realloc() uses mremap() to re-allocate large blocks and
330
  extend or shrink allocation spaces.
331
 
332
MMAP_CLEARS               default: 1 on unix
333
  True if mmap clears memory so calloc doesn't need to. This is true
334
  for standard unix mmap using /dev/zero.
335
 
336
USE_BUILTIN_FFS            default: 0 (i.e., not used)
337
  Causes malloc to use the builtin ffs() function to compute indices.
338
  Some compilers may recognize and intrinsify ffs to be faster than the
339
  supplied C version. Also, the case of x86 using gcc is special-cased
340
  to an asm instruction, so is already as fast as it can be, and so
341
  this setting has no effect. (On most x86s, the asm version is only
342
  slightly faster than the C version.)
343
 
344
malloc_getpagesize         default: derive from system includes, or 4096.
345
  The system page size. To the extent possible, this malloc manages
346
  memory from the system in page-size units.  This may be (and
347
  usually is) a function rather than a constant. This is ignored
348
  if WIN32, where page size is determined using getSystemInfo during
349
  initialization.
350
 
351
USE_DEV_RANDOM             default: 0 (i.e., not used)
352
  Causes malloc to use /dev/random to initialize secure magic seed for
353
  stamping footers. Otherwise, the current time is used.
354
 
355
NO_MALLINFO                default: 0
356
  If defined, don't compile "mallinfo". This can be a simple way
357
  of dealing with mismatches between system declarations and
358
  those in this file.
359
 
360
MALLINFO_FIELD_TYPE        default: size_t
361
  The type of the fields in the mallinfo struct. This was originally
362
  defined as "int" in SVID etc, but is more usefully defined as
363
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
364
 
365
REALLOC_ZERO_BYTES_FREES    default: not defined
366
  This should be set if a call to realloc with zero bytes should
367
  be the same as a call to free. Some people think it should. Otherwise,
368
  since this malloc returns a unique pointer for malloc(0), so does
369
  realloc(p, 0).
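  A sketch of the two behaviors:
    void* p = malloc(16);
    p = realloc(p, 0);  /* frees p and yields 0 if REALLOC_ZERO_BYTES_FREES
                           is set; otherwise yields a minimum-sized chunk */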
370
 
371
LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
373
LACKS_STDLIB_H                default: NOT defined unless on WIN32
374
  Define these if your system does not have these header files.
375
  You might need to manually insert some of the declarations they provide.
376
 
377
DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
378
                                system_info.dwAllocationGranularity in WIN32,
379
                                otherwise 64K.
380
      Also settable using mallopt(M_GRANULARITY, x)
381
  The unit for allocating and deallocating memory from the system.  On
382
  most systems with contiguous MORECORE, there is no reason to
383
  make this more than a page. However, systems with MMAP tend to
384
  either require or encourage larger granularities.  You can increase
385
  this value to prevent system allocation functions from being called so
386
  often, especially if they are slow.  The value must be at least one
387
  page and must be a power of two.  Setting to 0 causes initialization
388
  to either page size or win32 region size.  (Note: In previous
389
  versions of malloc, the equivalent of this option was called
390
  "TOP_PAD")
391
 
392
DEFAULT_TRIM_THRESHOLD    default: 2MB
393
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
394
  The maximum amount of unused top-most memory to keep before
395
  releasing via malloc_trim in free().  Automatic trimming is mainly
396
  useful in long-lived programs using contiguous MORECORE.  Because
397
  trimming via sbrk can be slow on some systems, and can sometimes be
398
  wasteful (in cases where programs immediately afterward allocate
399
  more large chunks) the value should be high enough so that your
400
  overall system performance would improve by releasing this much
401
  memory.  As a rough guide, you might set to a value close to the
402
  average size of a process (program) running on your system.
403
  Releasing this much memory would allow such a process to run in
404
  memory.  Generally, it is worth tuning trim thresholds when a
405
  program undergoes phases where several large chunks are allocated
406
  and released in ways that can reuse each other's storage, perhaps
407
  mixed with phases where there are no such chunks at all. The trim
408
  value must be greater than page size to have any useful effect.  To
409
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410
  some people use of mallocing a huge space and then freeing it at
411
  program startup, in an attempt to reserve system memory, doesn't
412
  have the intended effect under automatic trimming, since that memory
413
  will immediately be returned to the system.
414
 
415
DEFAULT_MMAP_THRESHOLD       default: 256K
416
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
417
  The request size threshold for using MMAP to directly service a
418
  request. Requests of at least this size that cannot be allocated
419
  using already-existing space will be serviced via mmap.  (If enough
420
  normal freed space already exists it is used instead.)  Using mmap
421
  segregates relatively large chunks of memory so that they can be
422
  individually obtained and released from the host system. A request
423
  serviced through mmap is never reused by any other request (at least
424
  not directly; the system may just so happen to remap successive
425
  requests to the same locations).  Segregating space in this way has
426
  the benefits that: Mmapped space can always be individually released
427
  back to the system, which helps keep the system level memory demands
428
  of a long-lived program low.  Also, mapped memory doesn't become
429
  `locked' between other chunks, as can happen with normally allocated
430
  chunks, which means that even trimming via malloc_trim would not
431
  release them.  However, it has the disadvantage that the space
432
  cannot be reclaimed, consolidated, and then used to service later
433
  requests, as happens with normal chunks.  The advantages of mmap
434
  nearly always outweigh disadvantages for "large" chunks, but the
435
  value of "large" may vary across systems.  The default is an
436
  empirically derived value that works well in most systems. You can
437
  disable mmap by setting to MAX_SIZE_T.
438
 
439
*/
440
 
441
//typedef unsigned int size_t;
442
 
443
#define MALLOC_ALIGNMENT ((size_t)8U)
444
#define DEFAULT_MMAP_THRESHOLD  ((size_t)32U * (size_t)1024U)
445
#define NO_MALLINFO  1
446
#define HAVE_MMAP 1
447
#define MORECORE_CANNOT_TRIM
448
#define FOOTERS 0
449
#define ABORT
450
 
451
 
452
//#ifndef WIN32
453
//#ifdef _WIN32
454
//#define WIN32 1
455
//#endif  /* _WIN32 */
456
//#endif  /* WIN32 */
457
//#ifdef WIN32
458
//#define WIN32_LEAN_AND_MEAN
459
//#include <windows.h>
460
//#endif  /* WIN32 */
461
//default settings for compilation for KolibriOS
462
#define HAVE_MMAP 1
463
#define HAVE_MORECORE 0
464
#define LACKS_UNISTD_H
465
#define LACKS_SYS_PARAM_H
466
#define LACKS_SYS_MMAN_H
467
#define LACKS_STRING_H
468
#define LACKS_STRINGS_H
469
#define LACKS_SYS_TYPES_H
470
#define LACKS_STDLIB_H
471
#define LACKS_ERRNO_H
472
#define LACKS_FCNTL_H
473
#define MALLOC_FAILURE_ACTION
474
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
475
//#endif  /* WIN32 */
476
 
477
//#if defined(DARWIN) || defined(_DARWIN)
478
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
479
//#ifndef HAVE_MORECORE
480
//#define HAVE_MORECORE 0
481
//#define HAVE_MMAP 1
482
//#endif  /* HAVE_MORECORE */
483
//#endif  /* DARWIN */
484
 
485
#ifndef LACKS_SYS_TYPES_H
486
#include <sys/types.h>  /* For size_t */
487
#endif  /* LACKS_SYS_TYPES_H */
488
 
489
/* The maximum possible size_t value has all bits set */
490
#define MAX_SIZE_T           (~(size_t)0)
491
 
492
#ifndef ONLY_MSPACES
493
#define ONLY_MSPACES 0
494
#endif  /* ONLY_MSPACES */
495
#ifndef MSPACES
496
#if ONLY_MSPACES
497
#define MSPACES 1
498
#else   /* ONLY_MSPACES */
499
#define MSPACES 0
500
#endif  /* ONLY_MSPACES */
501
#endif  /* MSPACES */
502
#ifndef MALLOC_ALIGNMENT
503
#define MALLOC_ALIGNMENT ((size_t)8U)
504
#endif  /* MALLOC_ALIGNMENT */
505
#ifndef FOOTERS
506
#define FOOTERS 0
507
#endif  /* FOOTERS */
508
#ifndef ABORT
509
#define ABORT  abort()
510
#endif  /* ABORT */
511
#ifndef ABORT_ON_ASSERT_FAILURE
512
#define ABORT_ON_ASSERT_FAILURE 1
513
#endif  /* ABORT_ON_ASSERT_FAILURE */
514
#ifndef PROCEED_ON_ERROR
515
#define PROCEED_ON_ERROR 0
516
#endif  /* PROCEED_ON_ERROR */
517
#ifndef USE_LOCKS
518
#define USE_LOCKS 0
519
#endif  /* USE_LOCKS */
520
#ifndef INSECURE
521
#define INSECURE 0
522
#endif  /* INSECURE */
523
#ifndef HAVE_MMAP
524
#define HAVE_MMAP 1
525
#endif  /* HAVE_MMAP */
526
#ifndef MMAP_CLEARS
527
#define MMAP_CLEARS 1
528
#endif  /* MMAP_CLEARS */
529
 
530
//#ifndef HAVE_MREMAP
531
//#ifdef linux
532
//#define HAVE_MREMAP 1
533
//#else   /* linux */
534
//#define HAVE_MREMAP 0
535
//#endif  /* linux */
536
//#endif  /* HAVE_MREMAP */
537
 
538
#ifndef MALLOC_FAILURE_ACTION
539
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
540
#endif  /* MALLOC_FAILURE_ACTION */
541
#ifndef HAVE_MORECORE
542
#if ONLY_MSPACES
543
#define HAVE_MORECORE 0
544
#else   /* ONLY_MSPACES */
545
#define HAVE_MORECORE 1
546
#endif  /* ONLY_MSPACES */
547
#endif  /* HAVE_MORECORE */
548
#if !HAVE_MORECORE
549
#define MORECORE_CONTIGUOUS 0
550
#else   /* !HAVE_MORECORE */
551
#ifndef MORECORE
552
#define MORECORE sbrk
553
#endif  /* MORECORE */
554
#ifndef MORECORE_CONTIGUOUS
555
#define MORECORE_CONTIGUOUS 1
556
#endif  /* MORECORE_CONTIGUOUS */
557
#endif  /* HAVE_MORECORE */
558
#ifndef DEFAULT_GRANULARITY
559
#if MORECORE_CONTIGUOUS
560
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
561
#else   /* MORECORE_CONTIGUOUS */
562
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
563
#endif  /* MORECORE_CONTIGUOUS */
564
#endif  /* DEFAULT_GRANULARITY */
565
#ifndef DEFAULT_TRIM_THRESHOLD
566
#ifndef MORECORE_CANNOT_TRIM
567
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
568
#else   /* MORECORE_CANNOT_TRIM */
569
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
570
#endif  /* MORECORE_CANNOT_TRIM */
571
#endif  /* DEFAULT_TRIM_THRESHOLD */
572
#ifndef DEFAULT_MMAP_THRESHOLD
573
#if HAVE_MMAP
574
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
575
#else   /* HAVE_MMAP */
576
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
577
#endif  /* HAVE_MMAP */
578
#endif  /* DEFAULT_MMAP_THRESHOLD */
579
#ifndef USE_BUILTIN_FFS
580
#define USE_BUILTIN_FFS 0
581
#endif  /* USE_BUILTIN_FFS */
582
#ifndef USE_DEV_RANDOM
583
#define USE_DEV_RANDOM 0
584
#endif  /* USE_DEV_RANDOM */
585
#ifndef NO_MALLINFO
586
#define NO_MALLINFO 0
587
#endif  /* NO_MALLINFO */
588
#ifndef MALLINFO_FIELD_TYPE
589
#define MALLINFO_FIELD_TYPE size_t
590
#endif  /* MALLINFO_FIELD_TYPE */
591
 
592
 
593
/*
594
  mallopt tuning options.  SVID/XPG defines four standard parameter
595
  numbers for mallopt, normally defined in malloc.h.  None of these
596
  are used in this malloc, so setting them has no effect. But this
597
  malloc does support the following options.
598
*/
599
 
600
#define M_TRIM_THRESHOLD     (-1)
601
#define M_GRANULARITY        (-2)
602
#define M_MMAP_THRESHOLD     (-3)
603
 
604
/* ------------------------ Mallinfo declarations ------------------------ */
605
 
606
#if !NO_MALLINFO
607
#endif /* NO_MALLINFO */
608
 
609
#ifdef __cplusplus
610
extern "C" {
611
#endif /* __cplusplus */
612
 
613
#if !ONLY_MSPACES
614
 
615
/* ------------------- Declarations of public routines ------------------- */
616
 
617
#ifndef USE_DL_PREFIX
618
#define dlcalloc               calloc
619
//#define dlfree                 free
620
//#define dlmalloc               malloc
621
#define dlmemalign             memalign
622
#define dlrealloc              realloc
623
#define dlvalloc               valloc
624
#define dlpvalloc              pvalloc
625
#define dlmallinfo             mallinfo
626
#define dlmallopt              mallopt
627
#define dlmalloc_trim          malloc_trim
628
#define dlmalloc_stats         malloc_stats
629
#define dlmalloc_usable_size   malloc_usable_size
630
#define dlmalloc_footprint     malloc_footprint
631
#define dlmalloc_max_footprint malloc_max_footprint
632
#define dlindependent_calloc   independent_calloc
633
#define dlindependent_comalloc independent_comalloc
634
#endif /* USE_DL_PREFIX */
635
 
636
 
637
/*
638
  malloc(size_t n)
639
  Returns a pointer to a newly allocated chunk of at least n bytes, or
640
  null if no space is available, in which case errno is set to ENOMEM
641
  on ANSI C systems.
642
 
643
  If n is zero, malloc returns a minimum-sized chunk. (The minimum
644
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
645
  systems.)  Note that size_t is an unsigned type, so calls with
646
  arguments that would be negative if signed are interpreted as
647
  requests for huge amounts of space, which will often fail. The
648
  maximum supported value of n differs across systems, but is in all
649
  cases less than the maximum representable value of a size_t.
650
*/
651
static void* dlmalloc(size_t);
652
 
653
/*
654
  free(void* p)
655
  Releases the chunk of memory pointed to by p, that had been previously
656
  allocated using malloc or a related routine such as realloc.
657
  It has no effect if p is null. If p was not malloced or already
658
  freed, free(p) will by default cause the current program to abort.
659
*/
660
static void  dlfree(void*);
661
 
662
/*
663
  calloc(size_t n_elements, size_t element_size);
664
  Returns a pointer to n_elements * element_size bytes, with all locations
665
  set to zero.
666
*/
667
static void* dlcalloc(size_t, size_t);
668
 
669
/*
670
  realloc(void* p, size_t n)
671
  Returns a pointer to a chunk of size n that contains the same data
672
  as does chunk p up to the minimum of (n, p's size) bytes, or null
673
  if no space is available.
674
 
675
  The returned pointer may or may not be the same as p. The algorithm
676
  prefers extending p in most cases when possible, otherwise it
677
  employs the equivalent of a malloc-copy-free sequence.
678
 
679
  If p is null, realloc is equivalent to malloc.
680
 
681
  If space is not available, realloc returns null, errno is set (if on
682
  ANSI) and p is NOT freed.
683
 
684
  if n is for fewer bytes than already held by p, the newly unused
685
  space is lopped off and freed if possible.  realloc with a size
686
  argument of zero (re)allocates a minimum-sized chunk.
687
 
688
  The old unix realloc convention of allowing the last-free'd chunk
689
  to be used as an argument to realloc is not supported.
690
*/
691
 
692
static void* dlrealloc(void*, size_t);
693
 
694
/*
695
  memalign(size_t alignment, size_t n);
696
  Returns a pointer to a newly allocated chunk of n bytes, aligned
697
  in accord with the alignment argument.
698
 
699
  The alignment argument should be a power of two. If the argument is
700
  not a power of two, the nearest greater power is used.
701
  8-byte alignment is guaranteed by normal malloc calls, so don't
702
  bother calling memalign with an argument of 8 or less.
703
 
704
  Overreliance on memalign is a sure way to fragment space.
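  Example (a sketch): allocate 1000 bytes starting on a 64-byte boundary.
    void* p = memalign(64, 1000);
    ...
    free(p);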
705
*/
706
static void* dlmemalign(size_t, size_t);
707
 
708
/*
709
  valloc(size_t n);
710
  Equivalent to memalign(pagesize, n), where pagesize is the page
711
  size of the system. If the pagesize is unknown, 4096 is used.
712
*/
713
static void* dlvalloc(size_t);
714
 
715
/*
716
  mallopt(int parameter_number, int parameter_value)
717
  Sets tunable parameters. The format is to provide a
718
  (parameter-number, parameter-value) pair.  mallopt then sets the
719
  corresponding parameter to the argument value if it can (i.e., so
720
  long as the value is meaningful), and returns 1 if successful else
721
  0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
722
  normally defined in malloc.h.  None of these are used in this malloc,
723
  so setting them has no effect. But this malloc also supports other
724
  options in mallopt. See below for details.  Briefly, supported
725
  parameters are as follows (listed defaults are for "typical"
726
  configurations).
727
 
728
  Symbol            param #  default    allowed param values
729
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
730
  M_GRANULARITY        -2     page size   any power of 2 >= page size
731
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
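  For example (a sketch), to raise the mmap threshold to 1MB:
    mallopt(M_MMAP_THRESHOLD, 1024*1024);  /* returns 1 on success, else 0 */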
732
*/
733
static int dlmallopt(int, int);
734
 
735
/*
736
  malloc_footprint();
737
  Returns the number of bytes obtained from the system.  The total
738
  number of bytes allocated by malloc, realloc etc., is less than this
739
  value. Unlike mallinfo, this function returns only a precomputed
740
  result, so can be called frequently to monitor memory consumption.
741
  Even if locks are otherwise defined, this function does not use them,
742
  so results might not be up to date.
743
*/
744
static size_t dlmalloc_footprint(void);
745
 
746
/*
747
  malloc_max_footprint();
748
  Returns the maximum number of bytes obtained from the system. This
749
  value will be greater than current footprint if deallocated space
750
  has been reclaimed by the system. The peak number of bytes allocated
751
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
752
  this function returns only a precomputed result, so can be called
753
  frequently to monitor memory consumption.  Even if locks are
754
  otherwise defined, this function does not use them, so results might
755
  not be up to date.
756
*/
757
static size_t dlmalloc_max_footprint(void);
758
 
759
#if !NO_MALLINFO
760
#endif /* NO_MALLINFO */
761
 
762
/*
763
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
764
 
765
  independent_calloc is similar to calloc, but instead of returning a
766
  single cleared space, it returns an array of pointers to n_elements
767
  independent elements that can hold contents of size elem_size, each
768
  of which starts out cleared, and can be independently freed,
769
  realloc'ed etc. The elements are guaranteed to be adjacently
770
  allocated (this is not guaranteed to occur with multiple callocs or
771
  mallocs), which may also improve cache locality in some
772
  applications.
773
 
774
  The "chunks" argument is optional (i.e., may be null, which is
775
  probably the most typical usage). If it is null, the returned array
776
  is itself dynamically allocated and should also be freed when it is
777
  no longer needed. Otherwise, the chunks array must be of at least
778
  n_elements in length. It is filled in with the pointers to the
779
  chunks.
780
 
781
  In either case, independent_calloc returns this pointer array, or
782
  null if the allocation failed.  If n_elements is zero and "chunks"
783
  is null, it returns a chunk representing an array with zero elements
784
  (which should be freed if not wanted).
785
 
786
  Each element must be individually freed when it is no longer
787
  needed. If you'd like to instead be able to free all at once, you
788
  should instead use regular calloc and assign pointers into this
789
  space to represent elements.  (In this case though, you cannot
790
  independently free elements.)
791
 
792
  independent_calloc simplifies and speeds up implementations of many
793
  kinds of pools.  It may also be useful when constructing large data
794
  structures that initially have a fixed number of fixed-sized nodes,
795
  but the number is not known at compile time, and some of the nodes
796
  may later need to be freed. For example:
797
 
798
  struct Node { int item; struct Node* next; };
799
 
800
  struct Node* build_list() {
801
    struct Node** pool;
802
    int n = read_number_of_nodes_needed();
803
    if (n <= 0) return 0;
804
    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
805
    if (pool == 0) die();
806
    // organize into a linked list...
807
    struct Node* first = pool[0];
808
    for (int i = 0; i < n-1; ++i)
809
      pool[i]->next = pool[i+1];
810
    free(pool);     // Can now free the array (or not, if it is needed later)
811
    return first;
812
  }
813
*/
814
static void** dlindependent_calloc(size_t, size_t, void**);
815
 
816
/*
817
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
818
 
819
  independent_comalloc allocates, all at once, a set of n_elements
820
  chunks with sizes indicated in the "sizes" array.    It returns
821
  an array of pointers to these elements, each of which can be
822
  independently freed, realloc'ed etc. The elements are guaranteed to
823
  be adjacently allocated (this is not guaranteed to occur with
824
  multiple callocs or mallocs), which may also improve cache locality
825
  in some applications.
826
 
827
  The "chunks" argument is optional (i.e., may be null). If it is null
828
  the returned array is itself dynamically allocated and should also
829
  be freed when it is no longer needed. Otherwise, the chunks array
830
  must be of at least n_elements in length. It is filled in with the
831
  pointers to the chunks.
832
 
833
  In either case, independent_comalloc returns this pointer array, or
834
  null if the allocation failed.  If n_elements is zero and chunks is
835
  null, it returns a chunk representing an array with zero elements
836
  (which should be freed if not wanted).
837
 
838
  Each element must be individually freed when it is no longer
839
  needed. If you'd like to instead be able to free all at once, you
840
  should instead use a single regular malloc, and assign pointers at
841
  particular offsets in the aggregate space. (In this case though, you
842
  cannot independently free elements.)
843
 
844
  independent_comalloc differs from independent_calloc in that each
845
  element may have a different size, and also that it does not
846
  automatically clear elements.
847
 
848
  independent_comalloc can be used to speed up allocation in cases
849
  where several structs or objects must always be allocated at the
850
  same time.  For example:
851
 
852
  struct Head { ... }
853
  struct Foot { ... }
854
 
855
  void send_message(char* msg) {
856
    int msglen = strlen(msg);
857
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
858
    void* chunks[3];
859
    if (independent_comalloc(3, sizes, chunks) == 0)
860
      die();
861
    struct Head* head = (struct Head*)(chunks[0]);
862
    char*        body = (char*)(chunks[1]);
863
    struct Foot* foot = (struct Foot*)(chunks[2]);
864
    // ...
865
  }
866
 
867
  In general though, independent_comalloc is worth using only for
868
  larger values of n_elements. For small values, you probably won't
869
  detect enough difference from series of malloc calls to bother.
870
 
871
  Overuse of independent_comalloc can increase overall memory usage,
872
  since it cannot reuse existing noncontiguous small chunks that
873
  might be available for some of the elements.
874
*/
875
static void** dlindependent_comalloc(size_t, size_t*, void**);
876
 
877
 
878
/*
879
  pvalloc(size_t n);
880
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
881
  round up n to nearest pagesize.
882
 */
883
static void*  dlpvalloc(size_t);
884
 
885
/*
886
  malloc_trim(size_t pad);
887
 
888
  If possible, gives memory back to the system (via negative arguments
889
  to sbrk) if there is unused memory at the `high' end of the malloc
890
  pool or in unused MMAP segments. You can call this after freeing
891
  large blocks of memory to potentially reduce the system-level memory
892
  requirements of a program. However, it cannot guarantee to reduce
893
  memory. Under some allocation patterns, some large free blocks of
894
  memory will be locked between two used chunks, so they cannot be
895
  given back to the system.
896
 
897
  The `pad' argument to malloc_trim represents the amount of free
898
  trailing space to leave untrimmed. If this argument is zero, only
899
  the minimum amount of memory to maintain internal data structures
900
  will be left. Non-zero arguments can be supplied to maintain enough
901
  trailing space to service future expected allocations without having
902
  to re-obtain memory from the system.
903
 
904
  Malloc_trim returns 1 if it actually released any memory, else 0.
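  For example (a sketch), after freeing several large blocks:
    int released = malloc_trim(64*1024);  /* leave ~64K of trailing space */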
905
*/
906
static int  dlmalloc_trim(size_t);
907
 
908
/*
909
  malloc_usable_size(void* p);
910
 
911
  Returns the number of bytes you can actually use in
912
  an allocated chunk, which may be more than you requested (although
913
  often not) due to alignment and minimum size constraints.
914
  You can use this many bytes without worrying about
915
  overwriting other allocated objects. This is not a particularly great
916
  programming practice. malloc_usable_size can be more useful in
917
  debugging and assertions, for example:
918
 
919
  p = malloc(n);
920
  assert(malloc_usable_size(p) >= 256);
921
*/
922
static size_t dlmalloc_usable_size(void*);
923
 
924
/*
925
  malloc_stats();
926
  Prints on stderr the amount of space obtained from the system (both
927
  via sbrk and mmap), the maximum amount (which may be more than
928
  current if malloc_trim and/or munmap got called), and the current
929
  number of bytes allocated via malloc (or realloc, etc) but not yet
930
  freed. Note that this is the number of bytes allocated, not the
931
  number requested. It will be larger than the number requested
932
  because of alignment and bookkeeping overhead. Because it includes
933
  alignment wastage as being in use, this figure may be greater than
934
  zero even when no user-level chunks are allocated.
935
 
936
  The reported current and maximum system memory can be inaccurate if
937
  a program makes other calls to system memory allocation functions
938
  (normally sbrk) outside of malloc.
939
 
940
  malloc_stats prints only the most commonly interesting statistics.
941
  More information can be obtained by calling mallinfo.
942
*/
943
static void  dlmalloc_stats(void);
944
 
945
#endif /* ONLY_MSPACES */
946
 
947
//////////////////////////////////////////////////////////////////////////
948
//			definitions of malloc, free, realloc
949
//////////////////////////////////////////////////////////////////////////
950
void* malloc(size_t size)
951
{
952
	void	*mem;
953
 
954
	mem=dlmalloc(size);
955
	if (mem!=0) memset(mem,0,size); // clear before use
956
	return(mem);
957
}
958
 
959
void free(void *mem)
960
{
961
	dlfree(mem);
962
}
963
 
964
#if MSPACES
965
#endif /* MSPACES */
966
 
967
#ifdef __cplusplus
968
};  /* end of extern "C" */
969
#endif /* __cplusplus */
970
 
971
/*
972
  ========================================================================
973
  To make a fully customizable malloc.h header file, cut everything
974
  above this line, put into file malloc.h, edit to suit, and #include it
975
  on the next line, as well as in programs that use this malloc.
976
  ========================================================================
977
*/
978
 
979
/* #include "malloc.h" */
980
 
981
/*------------------------------ internal #includes ---------------------- */
982
 
983
#ifdef WIN32
984
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
985
#endif /* WIN32 */
986
 
987
//#include <stdio.h>      /* for printing in malloc_stats */
988
 
989
#ifndef LACKS_ERRNO_H
990
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
991
#endif /* LACKS_ERRNO_H */
992
#if FOOTERS
993
#include <time.h>        /* for magic initialization */
994
#endif /* FOOTERS */
995
#ifndef LACKS_STDLIB_H
996
#include <stdlib.h>      /* for abort() */
997
#endif /* LACKS_STDLIB_H */
998
 
999
//#ifdef DEBUG
1000
//#if ABORT_ON_ASSERT_FAILURE
1001
//#define assert(x) if(!(x)) ABORT
1002
//#else /* ABORT_ON_ASSERT_FAILURE */
1003
//#include <assert.h>
1004
//#endif /* ABORT_ON_ASSERT_FAILURE */
1005
//#else  /* DEBUG */
1006
#define assert(x)
1007
//#endif /* DEBUG */
1008
 
1009
#ifndef LACKS_STRING_H
1010
#include <string.h>      /* for memset etc */
1011
#endif  /* LACKS_STRING_H */
1012
#if USE_BUILTIN_FFS
1013
#ifndef LACKS_STRINGS_H
1014
#include <strings.h>     /* for ffs */
1015
#endif /* LACKS_STRINGS_H */
1016
#endif /* USE_BUILTIN_FFS */
1017
#if HAVE_MMAP
1018
#ifndef LACKS_SYS_MMAN_H
1019
#include <sys/mman.h>    /* for mmap */
1020
#endif /* LACKS_SYS_MMAN_H */
1021
#ifndef LACKS_FCNTL_H
1022
#include <fcntl.h>
1023
#endif /* LACKS_FCNTL_H */
1024
#endif /* HAVE_MMAP */
1025
#if HAVE_MORECORE
1026
#endif /* HAVE_MORECORE */
1027
 
1028
#ifndef WIN32
1029
#endif
1030
 
1031
/* ------------------- size_t and alignment properties -------------------- */
1032
 
1033
/* The byte and bit size of a size_t */
1034
#define SIZE_T_SIZE         (sizeof(size_t))
1035
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1036
 
1037
/* Some constants coerced to size_t */
1038
/* Annoying but necessary to avoid errors on some platforms */
1039
#define SIZE_T_ZERO         ((size_t)0)
1040
#define SIZE_T_ONE          ((size_t)1)
1041
#define SIZE_T_TWO          ((size_t)2)
1042
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1043
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1044
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1045
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1046
 
1047
/* The bit mask value corresponding to MALLOC_ALIGNMENT */
1048
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1049
 
1050
/* True if address a has acceptable alignment */
1051
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1052
 
1053
/* the number of bytes to offset an address to align it */
1054
#define align_offset(A)\
1055
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1056
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
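/*
  Worked example (illustrative only), assuming MALLOC_ALIGNMENT == 8 so that
  CHUNK_ALIGN_MASK == 7:
    align_offset((void*)0x1000) == 0    (already 8-byte aligned)
    align_offset((void*)0x1005) == 3    ((8 - (0x1005 & 7)) & 7 == 3)
*/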
1057
 
1058
/* -------------------------- MMAP preliminaries ------------------------- */
1059
 
1060
/*
1061
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1062
   checks to fail so compiler optimizer can delete code rather than
1063
   using so many "#if"s.
1064
*/
1065
 
1066
 
1067
/* MORECORE and MMAP must return MFAIL on failure */
1068
#define MFAIL                ((void*)(MAX_SIZE_T))
1069
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1070
 
1071
/* HAVE_MMAP */
1072
#define IS_MMAPPED_BIT       (SIZE_T_ONE)
1073
#define USE_MMAP_BIT         (SIZE_T_ONE)
1074
 
1075
 
1076
////////////////////////////////////////////////////////////////
1077
//					KolibriOS specific
1078
////////////////////////////////////////////////////////////////
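/*
  These wrappers call KolibriOS system function 68 via int 0x40:
  subfunction 12 allocates application memory, subfunction 13 frees it.
  In this port they presumably stand in for mmap/munmap-style allocation
  (see CALL_MMAP/CALL_MUNMAP above).
*/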
1079
void* gui_ksys_mem_alloc(size_t size)
1080
{
1081
	void *value;
1082
 
1083
  __asm__ __volatile__(
1084
      "int $0x40"
1085
	  :"=a"(value)
1086
      :"a"(68),"b"(12),"c"(size)
1087
      :"memory");
1088
 
1089
	return (value != 0)? value: MFAIL;
1090
}
1091
 
1092
int  gui_ksys_mem_free(void *mem,size_t size)
1093
{
1094
    __asm__ __volatile__(
1095
        "int $0x40"
1096
        :
1097
        :"a"(68),"b"(13),"c"(mem)
1098
        :"memory");
1099
 
1100
		return(0);
1101
}
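/*
  A sketch (assumption; the actual wiring is not shown in this excerpt) of how
  these wrappers would plug into dlmalloc's mmap hooks:

    #define CALL_MMAP(s)        gui_ksys_mem_alloc(s)
    #define CALL_MUNMAP(a, s)   gui_ksys_mem_free((a), (s))
*/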
1102
 
1103
//#define gui_ksys_mem_alloc(s)         win32mmap(s)
1104
//#define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1105
//#define DIRECT_MMAP(s)       win32direct_mmap(s)
1106
 
1107
 
1108
#if HAVE_MMAP && HAVE_MREMAP
1109
#else  /* HAVE_MMAP && HAVE_MREMAP */
1110
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1111
#endif /* HAVE_MMAP && HAVE_MREMAP */
1112
 
1113
#if HAVE_MORECORE
1114
#else  /* HAVE_MORECORE */
1115
#define CALL_MORECORE(S)     MFAIL
1116
#endif /* HAVE_MORECORE */
1117
 
1118
/* mstate bit set if contiguous morecore disabled or failed */
1119
#define USE_NONCONTIGUOUS_BIT (4U)
1120
 
1121
/* segment bit set in create_mspace_with_base */
1122
#define EXTERN_BIT            (8U)
1123
 
1124
 
1125
/* --------------------------- Lock preliminaries ------------------------ */
1126
 
1127
#if USE_LOCKS
1128
#else  /* USE_LOCKS */
1129
#define USE_LOCK_BIT               (0U)
1130
#define INITIAL_LOCK(l)
1131
#endif /* USE_LOCKS */
1132
 
1133
#if USE_LOCKS && HAVE_MORECORE
1134
#define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1135
#define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1136
#else /* USE_LOCKS && HAVE_MORECORE */
1137
#define ACQUIRE_MORECORE_LOCK()
1138
#define RELEASE_MORECORE_LOCK()
1139
#endif /* USE_LOCKS && HAVE_MORECORE */
1140
 
1141
#if USE_LOCKS
1142
#define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1143
#define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1144
#else  /* USE_LOCKS */
1145
#define ACQUIRE_MAGIC_INIT_LOCK()
1146
#define RELEASE_MAGIC_INIT_LOCK()
1147
#endif /* USE_LOCKS */
1148
 
1149
 
1150
/* -----------------------  Chunk representations ------------------------ */
1151
 
1152
/*
1153
  (The following includes lightly edited explanations by Colin Plumb.)
1154
 
1155
  The malloc_chunk declaration below is misleading (but accurate and
1156
  necessary).  It declares a "view" into memory allowing access to
1157
  necessary fields at known offsets from a given base.
1158
 
1159
  Chunks of memory are maintained using a `boundary tag' method as
1160
  originally described by Knuth.  (See the paper by Paul Wilson
1161
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1162
  techniques.)  Sizes of free chunks are stored both in the front of
1163
  each chunk and at the end.  This makes consolidating fragmented
1164
  chunks into bigger chunks fast.  The head fields also hold bits
1165
  representing whether chunks are free or in use.
1166
 
1167
  Here are some pictures to make it clearer.  They are "exploded" to
1168
  show that the state of a chunk can be thought of as extending from
1169
  the high 31 bits of the head field of its header through the
1170
  prev_foot and PINUSE_BIT bit of the following chunk header.
1171
 
1172
  A chunk that's in use looks like:
1173
 
1174
   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1175
           | Size of previous chunk (if P = 0)                             |
1176
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1177
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1178
         | Size of this chunk                                         1| +-+
1179
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1180
         |                                                               |
1181
         +-                                                             -+
1182
         |                                                               |
1183
         +-                                                             -+
1184
         |                                                               :
1185
         +-      size - sizeof(size_t) available payload bytes          -+
1186
         :                                                               |
1187
 chunk-> +-                                                             -+
1188
         |                                                               |
1189
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1190
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1191
       | Size of next chunk (may or may not be in use)               | +-+
1192
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1193
 
1194
    And if it's free, it looks like this:
1195
 
1196
   chunk-> +-                                                             -+
1197
           | User payload (must be in use, or we would have merged!)       |
1198
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1199
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1200
         | Size of this chunk                                         0| +-+
1201
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1202
         | Next pointer                                                  |
1203
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1204
         | Prev pointer                                                  |
1205
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1206
         |                                                               :
1207
         +-      size - sizeof(struct chunk) unused bytes               -+
1208
         :                                                               |
1209
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1210
         | Size of this chunk                                            |
1211
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1212
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1213
       | Size of next chunk (must be in use, or we would have merged)| +-+
1214
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1215
       |                                                               :
1216
       +- User payload                                                -+
1217
       :                                                               |
1218
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1219
                                                                     |0|
1220
                                                                     +-+
1221
  Note that since we always merge adjacent free chunks, the chunks
1222
  adjacent to a free chunk must be in use.
1223
 
1224
  Given a pointer to a chunk (which can be derived trivially from the
1225
  payload pointer) we can, in O(1) time, find out whether the adjacent
1226
  chunks are free, and if so, unlink them from the lists that they
1227
  are on and merge them with the current chunk.
1228
 
1229
  Chunks always begin on even word boundaries, so the mem portion
1230
  (which is returned to the user) is also on an even word boundary, and
1231
  thus at least double-word aligned.
1232
 
1233
  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1234
  chunk size (which is always a multiple of two words), is an in-use
1235
  bit for the *previous* chunk.  If that bit is *clear*, then the
1236
  word before the current chunk size contains the previous chunk
1237
  size, and can be used to find the front of the previous chunk.
1238
  The very first chunk allocated always has this bit set, preventing
1239
  access to non-existent (or non-owned) memory. If pinuse is set for
1240
  any given chunk, then you CANNOT determine the size of the
1241
  previous chunk, and might even get a memory addressing fault when
1242
  trying to do so.
1243
 
1244
  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1245
  the chunk size redundantly records whether the current chunk is
1246
  inuse. This redundancy enables usage checks within free and realloc,
1247
  and reduces indirection when freeing and consolidating chunks.
1248
 
1249
  Each freshly allocated chunk must have both cinuse and pinuse set.
1250
  That is, each allocated chunk borders either a previously allocated
1251
  and still in-use chunk, or the base of its memory arena. This is
1252
  ensured by making all allocations from the `lowest' part of any
1253
  found chunk.  Further, no free chunk physically borders another one,
1254
  so each free chunk is known to be preceded and followed by either
1255
  inuse chunks or the ends of memory.
1256
 
1257
  Note that the `foot' of the current chunk is actually represented
1258
  as the prev_foot of the NEXT chunk. This makes it easier to
1259
  deal with alignments etc but can be very confusing when trying
1260
  to extend or adapt this code.
1261
 
1262
  The exceptions to all this are
1263
 
1264
     1. The special chunk `top' is the top-most available chunk (i.e.,
1265
        the one bordering the end of available memory). It is treated
1266
        specially.  Top is never included in any bin, is used only if
1267
        no other chunk is available, and is released back to the
1268
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
1269
        the top chunk is treated as larger (and thus less well
1270
        fitting) than any other available chunk.  The top chunk
1271
        doesn't update its trailing size field since there is no next
1272
        contiguous chunk that would have to index off it. However,
1273
        space is still allocated for it (TOP_FOOT_SIZE) to enable
1274
        separation or merging when space is extended.
1275
 
1276
     2. Chunks allocated via mmap, which have the lowest-order bit
1277
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1278
        PINUSE_BIT in their head fields.  Because they are allocated
1279
        one-by-one, each must carry its own prev_foot field, which is
1280
        also used to hold the offset this chunk has within its mmapped
1281
        region, which is needed to preserve alignment. Each mmapped
1282
        chunk is trailed by the first two fields of a fake next-chunk
1283
        for sake of usage checks.
1284
 
1285
*/
1286
 
1287
struct malloc_chunk {
1288
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
1289
  size_t               head;       /* Size and inuse bits. */
1290
  struct malloc_chunk* fd;         /* double links -- used only if free. */
1291
  struct malloc_chunk* bk;
1292
};
1293
 
1294
typedef struct malloc_chunk  mchunk;
1295
typedef struct malloc_chunk* mchunkptr;
1296
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
1297
typedef unsigned int bindex_t;         /* Described below */
1298
typedef unsigned int binmap_t;         /* Described below */
1299
typedef unsigned int flag_t;           /* The type of various bit flag sets */
1300
 
1301
/* ------------------- Chunks sizes and alignments ----------------------- */
1302
 
1303
#define MCHUNK_SIZE         (sizeof(mchunk))
1304
 
1305
#if FOOTERS
1306
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
1307
#else /* FOOTERS */
1308
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
1309
#endif /* FOOTERS */
1310
 
1311
/* MMapped chunks need a second word of overhead ... */
1312
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1313
/* ... and additional padding for fake next-chunk at foot */
1314
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
1315
 
1316
/* The smallest size we can malloc is an aligned minimal chunk */
1317
#define MIN_CHUNK_SIZE\
1318
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1319
 
1320
/* conversion from malloc headers to user pointers, and back */
1321
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
1322
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1323
/* chunk associated with aligned address A */
1324
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
1325
 
1326
/* Bounds on request (not chunk) sizes. */
1327
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
1328
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1329
 
1330
/* pad request bytes into a usable size */
1331
#define pad_request(req) \
1332
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1333
 
1334
/* pad request, checking for minimum (but not maximum) */
1335
#define request2size(req) \
1336
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
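/*
  Worked example (a sketch, assuming 4-byte size_t, 8-byte alignment and
  no FOOTERS, so CHUNK_OVERHEAD == 4, CHUNK_ALIGN_MASK == 7 and
  MIN_CHUNK_SIZE == 16):
    request2size(0)  -> MIN_CHUNK_SIZE            == 16
    request2size(12) -> (12 + 4 + 7) & ~(size_t)7 == 16
    request2size(25) -> (25 + 4 + 7) & ~(size_t)7 == 32
  i.e. the usable chunk size is the request plus one word of overhead,
  rounded up to a multiple of the alignment.
*/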
1337
 
1338
 
1339
/* ------------------ Operations on head and foot fields ----------------- */
1340
 
1341
/*
1342
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
1343
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1344
  use. If the chunk was obtained with mmap, the prev_foot field has
1345
  IS_MMAPPED_BIT set; the remaining bits of prev_foot then hold the
1346
  offset of the chunk from the base of its mmapped region.
1347
*/
1348
 
1349
#define PINUSE_BIT          (SIZE_T_ONE)
1350
#define CINUSE_BIT          (SIZE_T_TWO)
1351
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
1352
 
1353
/* Head value for fenceposts */
1354
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
1355
 
1356
/* extraction of fields from head words */
1357
#define cinuse(p)           ((p)->head & CINUSE_BIT)
1358
#define pinuse(p)           ((p)->head & PINUSE_BIT)
1359
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))
1360
 
1361
#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
1362
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
1363
 
1364
/* Treat space at ptr +/- offset as a chunk */
1365
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
1366
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1367
 
1368
/* Ptr to next or previous physical malloc_chunk. */
1369
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1370
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1371
 
1372
/* extract next chunk's pinuse bit */
1373
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
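/*
  Illustrative sketch only (guarded out of the build): how the macros
  above combine to navigate physical neighbors.  Assumes `mem' was
  returned by this malloc and refers to a non-mmapped, in-use chunk.
*/
#if 0
static size_t example_prev_chunk_size(void* mem) {
  mchunkptr p = mem2chunk(mem);      /* user pointer back to its chunk   */
  if (pinuse(p))                     /* previous chunk in use: its size  */
    return 0;                        /*   is not recorded in prev_foot   */
  return chunksize(prev_chunk(p));   /* else prev_foot locates its start */
}
#endif /* 0 */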
1374
 
1375
/* Get/set size at footer */
1376
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1377
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1378
 
1379
/* Set size, pinuse bit, and foot */
1380
#define set_size_and_pinuse_of_free_chunk(p, s)\
1381
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1382
 
1383
/* Set size, pinuse bit, foot, and clear next pinuse */
1384
#define set_free_with_pinuse(p, s, n)\
1385
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1386
 
1387
#define is_mmapped(p)\
1388
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1389
 
1390
/* Get the internal overhead associated with chunk p */
1391
#define overhead_for(p)\
1392
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1393
 
1394
/* Return true if malloced space is not necessarily cleared */
1395
#if MMAP_CLEARS
1396
#define calloc_must_clear(p) (!is_mmapped(p))
1397
#else /* MMAP_CLEARS */
1398
#define calloc_must_clear(p) (1)
1399
#endif /* MMAP_CLEARS */
1400
 
1401
/* ---------------------- Overlaid data structures ----------------------- */
1402
 
1403
/*
1404
  When chunks are not in use, they are treated as nodes of either
1405
  lists or trees.
1406
 
1407
  "Small"  chunks are stored in circular doubly-linked lists, and look
1408
  like this:
1409
 
1410
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1411
            |             Size of previous chunk                            |
1412
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1413
    `head:' |             Size of chunk, in bytes                         |P|
1414
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1415
            |             Forward pointer to next chunk in list             |
1416
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1417
            |             Back pointer to previous chunk in list            |
1418
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1419
            |             Unused space (may be 0 bytes long)                .
1420
            .                                                               .
1421
            .                                                               |
1422
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1423
    `foot:' |             Size of chunk, in bytes                           |
1424
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1425
 
1426
  Larger chunks are kept in a form of bitwise digital trees (aka
1427
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
1428
  free chunks greater than 256 bytes, their size doesn't impose any
1429
  constraints on user chunk sizes.  Each node looks like:
1430
 
1431
    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1432
            |             Size of previous chunk                            |
1433
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1434
    `head:' |             Size of chunk, in bytes                         |P|
1435
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1436
            |             Forward pointer to next chunk of same size        |
1437
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1438
            |             Back pointer to previous chunk of same size       |
1439
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1440
            |             Pointer to left child (child[0])                  |
1441
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1442
            |             Pointer to right child (child[1])                 |
1443
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1444
            |             Pointer to parent                                 |
1445
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1446
            |             bin index of this chunk                           |
1447
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1448
            |             Unused space                                      .
1449
            .                                                               |
1450
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1451
    `foot:' |             Size of chunk, in bytes                           |
1452
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1453
 
1454
  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
1455
  of the same size are arranged in a circularly-linked list, with only
1456
  the oldest chunk (the next to be used, in our FIFO ordering)
1457
  actually in the tree.  (Tree members are distinguished by a non-null
1458
  parent pointer.)  If a chunk with the same size as an existing node
1459
  is inserted, it is linked off the existing node using pointers that
1460
  work in the same way as fd/bk pointers of small chunks.
1461
 
1462
  Each tree contains a power of 2 sized range of chunk sizes (the
1463
  smallest is 0x100 <= x < 0x180), which is divided in half at each
1464
  tree level, with the chunks in the smaller half of the range (0x100
1465
  <= x < 0x140 for the top node) in the left subtree and the larger
1466
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
1467
  done by inspecting individual bits.
1468
 
1469
  Using these rules, each node's left subtree contains all smaller
1470
  sizes than its right subtree.  However, the node at the root of each
1471
  subtree has no particular ordering relationship to either.  (The
1472
  dividing line between the subtree sizes is based on trie relation.)
1473
  If we remove the last chunk of a given size from the interior of the
1474
  tree, we need to replace it with a leaf node.  The tree ordering
1475
  rules permit a node to be replaced by any leaf below it.
1476
 
1477
  The smallest chunk in a tree (a common operation in a best-fit
1478
  allocator) can be found by walking a path to the leftmost leaf in
1479
  the tree.  Unlike a usual binary tree, where we follow left child
1480
  pointers until we reach a null, here we follow the right child
1481
  pointer any time the left one is null, until we reach a leaf with
1482
  both child pointers null. The smallest chunk in the tree will be
1483
  somewhere along that path.
1484
 
1485
  The worst case number of steps to add, find, or remove a node is
1486
  bounded by the number of bits differentiating chunks within
1487
  bins. Under current bin calculations, this ranges from 6 up to 21
1488
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1489
  is of course much better.
1490
*/
1491
 
1492
struct malloc_tree_chunk {
1493
  /* The first four fields must be compatible with malloc_chunk */
1494
  size_t                    prev_foot;
1495
  size_t                    head;
1496
  struct malloc_tree_chunk* fd;
1497
  struct malloc_tree_chunk* bk;
1498
 
1499
  struct malloc_tree_chunk* child[2];
1500
  struct malloc_tree_chunk* parent;
1501
  bindex_t                  index;
1502
};
1503
 
1504
typedef struct malloc_tree_chunk  tchunk;
1505
typedef struct malloc_tree_chunk* tchunkptr;
1506
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1507
 
1508
/* A little helper macro for trees */
1509
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1510
 
1511
/* ----------------------------- Segments -------------------------------- */
1512
 
1513
/*
1514
  Each malloc space may include non-contiguous segments, held in a
1515
  list headed by an embedded malloc_segment record representing the
1516
  top-most space. Segments also include flags holding properties of
1517
  the space. Large chunks that are directly allocated by mmap are not
1518
  included in this list. They are instead independently created and
1519
  destroyed without otherwise keeping track of them.
1520
 
1521
  Segment management mainly comes into play for spaces allocated by
1522
  MMAP.  Any call to MMAP might or might not return memory that is
1523
  adjacent to an existing segment.  MORECORE normally contiguously
1524
  extends the current space, so this space is almost always adjacent,
1525
  which is simpler and faster to deal with. (This is why MORECORE is
1526
  used preferentially to MMAP when both are available -- see
1527
  sys_alloc.)  When allocating using MMAP, we don't use any of the
1528
  hinting mechanisms (inconsistently) supported in various
1529
  implementations of unix mmap, or distinguish reserving from
1530
  committing memory. Instead, we just ask for space, and exploit
1531
  contiguity when we get it.  It is probably possible to do
1532
  better than this on some systems, but no general scheme seems
1533
  to be significantly better.
1534
 
1535
  Management entails a simpler variant of the consolidation scheme
1536
  used for chunks to reduce fragmentation -- new adjacent memory is
1537
  normally prepended or appended to an existing segment. However,
1538
  there are limitations compared to chunk consolidation that mostly
1539
  reflect the fact that segment processing is relatively infrequent
1540
  (occurring only when getting memory from system) and that we
1541
  don't expect to have huge numbers of segments:
1542
 
1543
  * Segments are not indexed, so traversal requires linear scans.  (It
1544
    would be possible to index these, but is not worth the extra
1545
    overhead and complexity for most programs on most platforms.)
1546
  * New segments are only appended to old ones when holding top-most
1547
    memory; if they cannot be prepended to others, they are held in
1548
    different segments.
1549
 
1550
  Except for the top-most segment of an mstate, each segment record
1551
  is kept at the tail of its segment. Segments are added by pushing
1552
  segment records onto the list headed by &mstate.seg for the
1553
  containing mstate.
1554
 
1555
  Segment flags control allocation/merge/deallocation policies:
1556
  * If EXTERN_BIT set, then we did not allocate this segment,
1557
    and so should not try to deallocate or merge with others.
1558
    (This currently holds only for the initial segment passed
1559
    into create_mspace_with_base.)
1560
  * If IS_MMAPPED_BIT set, the segment may be merged with
1561
    other surrounding mmapped segments and trimmed/de-allocated
1562
    using munmap.
1563
  * If neither bit is set, then the segment was obtained using
1564
    MORECORE so can be merged with surrounding MORECORE'd segments
1565
    and deallocated/trimmed using MORECORE with negative arguments.
1566
*/
1567
 
1568
struct malloc_segment {
1569
  char*        base;             /* base address */
1570
  size_t       size;             /* allocated size */
1571
  struct malloc_segment* next;   /* ptr to next segment */
1572
  flag_t       sflags;           /* mmap and extern flag */
1573
};
1574
 
1575
#define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
1576
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
1577
 
1578
typedef struct malloc_segment  msegment;
1579
typedef struct malloc_segment* msegmentptr;
1580
 
1581
/* ---------------------------- malloc_state ----------------------------- */
1582
 
1583
/*
1584
   A malloc_state holds all of the bookkeeping for a space.
1585
   The main fields are:
1586
 
1587
  Top
1588
    The topmost chunk of the currently active segment. Its size is
1589
    cached in topsize.  The actual size of topmost space is
1590
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1591
    fenceposts and segment records if necessary when getting more
1592
    space from the system.  The size at which to autotrim top is
1593
    cached from mparams in trim_check, except that it is disabled if
1594
    an autotrim fails.
1595
 
1596
  Designated victim (dv)
1597
    This is the preferred chunk for servicing small requests that
1598
    don't have exact fits.  It is normally the chunk split off most
1599
    recently to service another small request.  Its size is cached in
1600
    dvsize. The link fields of this chunk are not maintained since it
1601
    is not kept in a bin.
1602
 
1603
  SmallBins
1604
    An array of bin headers for free chunks.  These bins hold chunks
1605
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1606
    chunks of all the same size, spaced 8 bytes apart.  To simplify
1607
    use in double-linked lists, each bin header acts as a malloc_chunk
1608
    pointing to the real first node, if it exists (else pointing to
1609
    itself).  This avoids special-casing for headers.  But to avoid
1610
    waste, we allocate only the fd/bk pointers of bins, and then use
1611
    repositioning tricks to treat these as the fields of a chunk.
1612
 
1613
  TreeBins
1614
    Treebins are pointers to the roots of trees holding a range of
1615
    sizes. There are 2 equally spaced treebins for each power of two
1616
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
1617
    larger.
1618
 
1619
  Bin maps
1620
    There is one bit map for small bins ("smallmap") and one for
1621
    treebins ("treemap).  Each bin sets its bit when non-empty, and
1622
    clears the bit when empty.  Bit operations are then used to avoid
1623
    bin-by-bin searching -- nearly all "search" is done without ever
1624
    looking at bins that won't be selected.  The bit maps
1625
    conservatively use 32 bits per map word, even on 64-bit systems.
1626
    For a good description of some of the bit-based techniques used
1627
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1628
    supplement at http://hackersdelight.org/). Many of these are
1629
    intended to reduce the branchiness of paths through malloc etc, as
1630
    well as to reduce the number of memory locations read or written.
1631
 
1632
  Segments
1633
    A list of segments headed by an embedded malloc_segment record
1634
    representing the initial space.
1635
 
1636
  Address check support
1637
    The least_addr field is the least address ever obtained from
1638
    MORECORE or MMAP. Attempted frees and reallocs of any address less
1639
    than this are trapped (unless INSECURE is defined).
1640
 
1641
  Magic tag
1642
    A cross-check field that should always hold same value as mparams.magic.
1643
 
1644
  Flags
1645
    Bits recording whether to use MMAP, locks, or contiguous MORECORE
1646
 
1647
  Statistics
1648
    Each space keeps track of current and maximum system memory
1649
    obtained via MORECORE or MMAP.
1650
 
1651
  Locking
1652
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
1653
    around every public call using this mspace.
1654
*/
1655
 
1656
/* Bin types, widths and sizes */
1657
#define NSMALLBINS        (32U)
1658
#define NTREEBINS         (32U)
1659
#define SMALLBIN_SHIFT    (3U)
1660
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
1661
#define TREEBIN_SHIFT     (8U)
1662
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
1663
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
1664
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1665
 
1666
struct malloc_state {
1667
  binmap_t   smallmap;
1668
  binmap_t   treemap;
1669
  size_t     dvsize;
1670
  size_t     topsize;
1671
  char*      least_addr;
1672
  mchunkptr  dv;
1673
  mchunkptr  top;
1674
  size_t     trim_check;
1675
  size_t     magic;
1676
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
1677
  tbinptr    treebins[NTREEBINS];
1678
  size_t     footprint;
1679
  size_t     max_footprint;
1680
  flag_t     mflags;
1681
#if USE_LOCKS
1682
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
1683
#endif /* USE_LOCKS */
1684
  msegment   seg;
1685
};
1686
 
1687
typedef struct malloc_state*    mstate;
1688
 
1689
/* ------------- Global malloc_state and malloc_params ------------------- */
1690
 
1691
/*
1692
  malloc_params holds global properties, including those that can be
1693
  dynamically set using mallopt. There is a single instance, mparams,
1694
  initialized in init_mparams.
1695
*/
1696
 
1697
struct malloc_params {
1698
  size_t magic;
1699
  size_t page_size;
1700
  size_t granularity;
1701
  size_t mmap_threshold;
1702
  size_t trim_threshold;
1703
  flag_t default_mflags;
1704
};
1705
 
1706
static struct malloc_params mparams;
1707
 
1708
/* The global malloc_state used for all non-"mspace" calls */
1709
static struct malloc_state _gm_;
1710
#define gm                 (&_gm_)
1711
#define is_global(M)       ((M) == &_gm_)
1712
#define is_initialized(M)  ((M)->top != 0)
1713
 
1714
/* -------------------------- system alloc setup ------------------------- */
1715
 
1716
/* Operations on mflags */
1717
 
1718
#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
1719
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
1720
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
1721
 
1722
#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
1723
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
1724
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
1725
 
1726
#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
1727
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
1728
 
1729
#define set_lock(M,L)\
1730
 ((M)->mflags = (L)?\
1731
  ((M)->mflags | USE_LOCK_BIT) :\
1732
  ((M)->mflags & ~USE_LOCK_BIT))
1733
 
1734
/* page-align a size */
1735
#define page_align(S)\
1736
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
1737
 
1738
/* granularity-align a size */
1739
#define granularity_align(S)\
1740
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
1741
 
1742
#define is_page_aligned(S)\
1743
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
1744
#define is_granularity_aligned(S)\
1745
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
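/*
  Example with the fixed 4096-byte page size set in init_mparams below:
    page_align(1)    -> (1    + 4096) & ~4095 == 4096
    page_align(4096) -> (4096 + 4096) & ~4095 == 8192
  i.e. the macro adds a full page before masking, so an exact page
  multiple is pushed up to the next one.
*/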
1746
 
1747
/*  True if segment S holds address A */
1748
#define segment_holds(S, A)\
1749
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
1750
 
1751
/* Return segment holding given address */
1752
static msegmentptr segment_holding(mstate m, char* addr) {
1753
  msegmentptr sp = &m->seg;
1754
  for (;;) {
1755
    if (addr >= sp->base && addr < sp->base + sp->size)
1756
      return sp;
1757
    if ((sp = sp->next) == 0)
1758
      return 0;
1759
  }
1760
}
1761
 
1762
/* Return true if segment contains a segment link */
1763
static int has_segment_link(mstate m, msegmentptr ss) {
1764
  msegmentptr sp = &m->seg;
1765
  for (;;) {
1766
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
1767
      return 1;
1768
    if ((sp = sp->next) == 0)
1769
      return 0;
1770
  }
1771
}
1772
 
1773
#ifndef MORECORE_CANNOT_TRIM
1774
#define should_trim(M,s)  ((s) > (M)->trim_check)
1775
#else  /* MORECORE_CANNOT_TRIM */
1776
#define should_trim(M,s)  (0)
1777
#endif /* MORECORE_CANNOT_TRIM */
1778
 
1779
/*
1780
  TOP_FOOT_SIZE is padding at the end of a segment, including space
1781
  that may be needed to place segment records and fenceposts when new
1782
  noncontiguous segments are added.
1783
*/
1784
#define TOP_FOOT_SIZE\
1785
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
1786
 
1787
 
1788
/* -------------------------------  Hooks -------------------------------- */
1789
 
1790
/*
1791
  PREACTION should be defined to return 0 on success, and nonzero on
1792
  failure. If you are not using locking, you can redefine these to do
1793
  anything you like.
1794
*/
1795
 
1796
#if USE_LOCKS
1797
 
1798
/* Ensure locks are initialized */
1799
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
1800
 
1801
#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
1802
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
1803
#else /* USE_LOCKS */
1804
 
1805
#ifndef PREACTION
1806
#define PREACTION(M) (0)
1807
#endif  /* PREACTION */
1808
 
1809
#ifndef POSTACTION
1810
#define POSTACTION(M)
1811
#endif  /* POSTACTION */
1812
 
1813
#endif /* USE_LOCKS */
1814
 
1815
/*
1816
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
1817
  USAGE_ERROR_ACTION is triggered on detected bad frees and
1818
  reallocs. The argument p is an address that might have triggered the
1819
  fault. It is ignored by the two predefined actions, but might be
1820
  useful in custom actions that try to help diagnose errors.
1821
*/
1822
 
1823
#if PROCEED_ON_ERROR
1824
 
1825
/* A count of the number of corruption errors causing resets */
1826
int malloc_corruption_error_count;
1827
 
1828
/* default corruption action */
1829
static void reset_on_error(mstate m);
1830
 
1831
#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
1832
#define USAGE_ERROR_ACTION(m, p)
1833
 
1834
#else /* PROCEED_ON_ERROR */
1835
 
1836
#ifndef CORRUPTION_ERROR_ACTION
1837
#define CORRUPTION_ERROR_ACTION(m) ABORT
1838
#endif /* CORRUPTION_ERROR_ACTION */
1839
 
1840
#ifndef USAGE_ERROR_ACTION
1841
#define USAGE_ERROR_ACTION(m,p) ABORT
1842
#endif /* USAGE_ERROR_ACTION */
1843
 
1844
#endif /* PROCEED_ON_ERROR */
1845
 
1846
/* -------------------------- Debugging setup ---------------------------- */
1847
 
1848
//#if !DEBUG
1849
#define check_free_chunk(M,P)
1850
#define check_inuse_chunk(M,P)
1851
#define check_malloced_chunk(M,P,N)
1852
#define check_mmapped_chunk(M,P)
1853
#define check_malloc_state(M)
1854
#define check_top_chunk(M,P)
1855
 
1856
//#else /* DEBUG */
1857
//#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
1858
//#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
1859
//#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
1860
//#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
1861
//#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
1862
//#define check_malloc_state(M)       do_check_malloc_state(M)
1863
 
1864
//static void   do_check_any_chunk(mstate m, mchunkptr p);
1865
//static void   do_check_top_chunk(mstate m, mchunkptr p);
1866
//static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
1867
//static void   do_check_inuse_chunk(mstate m, mchunkptr p);
1868
//static void   do_check_free_chunk(mstate m, mchunkptr p);
1869
//static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
1870
//static void   do_check_tree(mstate m, tchunkptr t);
1871
//static void   do_check_treebin(mstate m, bindex_t i);
1872
//static void   do_check_smallbin(mstate m, bindex_t i);
1873
//static void   do_check_malloc_state(mstate m);
1874
//static int    bin_find(mstate m, mchunkptr x);
1875
//static size_t traverse_and_check(mstate m);
1876
//#endif /* DEBUG */
1877
 
1878
 
1879
/* ---------------------------- Indexing Bins ---------------------------- */
1880
 
1881
#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
1882
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
1883
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
1884
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
1885
 
1886
/* addressing by index. See above about smallbin repositioning */
1887
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
1888
#define treebin_at(M,i)     (&((M)->treebins[i]))
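/*
  Worked example (assuming SMALLBIN_SHIFT == 3, i.e. 8-byte spacing):
    is_small(16)      -> (16 >> 3) == 2, which is < 32, so true
    small_index(16)   -> 2,   small_index2size(2) -> 16
    small_index(248)  -> 31,  the last small bin
  Sizes of MIN_LARGE_SIZE (256) bytes and up go to the tree bins instead.
*/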
1889
 
1890
/* assign tree index for size S to variable I */
1891
#if defined(__GNUC__) && defined(i386)
1892
#define compute_tree_index(S, I)\
1893
{\
1894
  size_t X = S >> TREEBIN_SHIFT;\
1895
  if (X == 0)\
1896
    I = 0;\
1897
  else if (X > 0xFFFF)\
1898
    I = NTREEBINS-1;\
1899
  else {\
1900
    unsigned int K;\
1901
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
1902
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
1903
  }\
1904
}
1905
#else /* GNUC */
1906
#define compute_tree_index(S, I)\
1907
{\
1908
  size_t X = S >> TREEBIN_SHIFT;\
1909
  if (X == 0)\
1910
    I = 0;\
1911
  else if (X > 0xFFFF)\
1912
    I = NTREEBINS-1;\
1913
  else {\
1914
    unsigned int Y = (unsigned int)X;\
1915
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
1916
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
1917
    N += K;\
1918
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
1919
    K = 14 - N + ((Y <<= K) >> 15);\
1920
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
1921
  }\
1922
}
1923
#endif /* GNUC */
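/*
  Worked example of the index computation above (TREEBIN_SHIFT == 8):
  each pair of tree bins covers one power-of-two range, split in half,
    sizes  256..383  -> index 0      sizes  384..511  -> index 1
    sizes  512..767  -> index 2      sizes  768..1023 -> index 3
  and so on; e.g. compute_tree_index(512, I) yields I == 2, matching
  minsize_for_tree_index(2) == 512 defined below.
*/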
1924
 
1925
/* Bit representing maximum resolved size in a treebin at i */
1926
#define bit_for_tree_index(i) \
1927
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
1928
 
1929
/* Shift placing maximum resolved bit in a treebin at i as sign bit */
1930
#define leftshift_for_tree_index(i) \
1931
   ((i == NTREEBINS-1)? 0 : \
1932
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
1933
 
1934
/* The size of the smallest chunk held in bin with index i */
1935
#define minsize_for_tree_index(i) \
1936
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
1937
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
1938
 
1939
 
1940
/* ------------------------ Operations on bin maps ----------------------- */
1941
 
1942
/* bit corresponding to given index */
1943
#define idx2bit(i)              ((binmap_t)(1) << (i))
1944
 
1945
/* Mark/Clear bits with given index */
1946
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
1947
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
1948
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
1949
 
1950
#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
1951
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
1952
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
1953
 
1954
/* index corresponding to given bit */
1955
 
1956
#if defined(__GNUC__) && defined(i386)
1957
#define compute_bit2idx(X, I)\
1958
{\
1959
  unsigned int J;\
1960
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
1961
  I = (bindex_t)J;\
1962
}
1963
 
1964
#else /* GNUC */
1965
#if  USE_BUILTIN_FFS
1966
#define compute_bit2idx(X, I) I = ffs(X)-1
1967
 
1968
#else /* USE_BUILTIN_FFS */
1969
#define compute_bit2idx(X, I)\
1970
{\
1971
  unsigned int Y = X - 1;\
1972
  unsigned int K = Y >> (16-4) & 16;\
1973
  unsigned int N = K;        Y >>= K;\
1974
  N += K = Y >> (8-3) &  8;  Y >>= K;\
1975
  N += K = Y >> (4-2) &  4;  Y >>= K;\
1976
  N += K = Y >> (2-1) &  2;  Y >>= K;\
1977
  N += K = Y >> (1-0) &  1;  Y >>= K;\
1978
  I = (bindex_t)(N + Y);\
1979
}
1980
#endif /* USE_BUILTIN_FFS */
1981
#endif /* GNUC */
1982
 
1983
/* isolate the least set bit of a bitmap */
1984
#define least_bit(x)         ((x) & -(x))
1985
 
1986
/* mask with all bits to left of least bit of x on */
1987
#define left_bits(x)         ((x<<1) | -(x<<1))
1988
 
1989
/* mask with all bits to left of or equal to least bit of x on */
1990
#define same_or_left_bits(x) ((x) | -(x))
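/*
  Sketch only (guarded out of the build): the typical way these bitmap
  helpers are combined to find the first non-empty small bin at or above
  index `i'.  Assumes at least one such bin is currently marked.
*/
#if 0
static bindex_t example_first_nonempty_smallbin(mstate m, bindex_t i) {
  binmap_t eligible = m->smallmap & same_or_left_bits(idx2bit(i));
  binmap_t least = least_bit(eligible);  /* lowest set bit == best fit */
  bindex_t idx;
  compute_bit2idx(least, idx);           /* bit position -> bin index  */
  return idx;
}
#endif /* 0 */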
1991
 
1992
 
1993
/* ----------------------- Runtime Check Support ------------------------- */
1994
 
1995
/*
1996
  For security, the main invariant is that malloc/free/etc never
1997
  writes to a static address other than malloc_state, unless static
1998
  malloc_state itself has been corrupted, which cannot occur via
1999
  malloc (because of these checks). In essence this means that we
2000
  believe all pointers, sizes, maps etc held in malloc_state, but
2001
  check all of those linked or offsetted from other embedded data
2002
  structures.  These checks are interspersed with main code in a way
2003
  that tends to minimize their run-time cost.
2004
 
2005
  When FOOTERS is defined, in addition to range checking, we also
2006
  verify footer fields of inuse chunks, which can be used to guarantee
2007
  that the mstate controlling malloc/free is intact.  This is a
2008
  streamlined version of the approach described by William Robertson
2009
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
2010
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2011
  of an inuse chunk holds the xor of its mstate and a random seed,
2012
  that is checked upon calls to free() and realloc().  This is
2013
  (probabilistically) unguessable from outside the program, but can be
2014
  computed by any code successfully malloc'ing any chunk, so does not
2015
  itself provide protection against code that has already broken
2016
  security through some other means.  Unlike Robertson et al, we
2017
  always dynamically check addresses of all offset chunks (previous,
2018
  next, etc). This turns out to be cheaper than relying on hashes.
2019
*/
2020
 
2021
#if !INSECURE
2022
/* Check if address a is at least as high as any from MORECORE or MMAP */
2023
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2024
/* Check if address of next chunk n is higher than base chunk p */
2025
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
2026
/* Check if p has its cinuse bit on */
2027
#define ok_cinuse(p)     cinuse(p)
2028
/* Check if p has its pinuse bit on */
2029
#define ok_pinuse(p)     pinuse(p)
2030
 
2031
#else /* !INSECURE */
2032
#define ok_address(M, a) (1)
2033
#define ok_next(b, n)    (1)
2034
#define ok_cinuse(p)     (1)
2035
#define ok_pinuse(p)     (1)
2036
#endif /* !INSECURE */
2037
 
2038
#if (FOOTERS && !INSECURE)
2039
/* Check if (alleged) mstate m has expected magic field */
2040
#define ok_magic(M)      ((M)->magic == mparams.magic)
2041
#else  /* (FOOTERS && !INSECURE) */
2042
#define ok_magic(M)      (1)
2043
#endif /* (FOOTERS && !INSECURE) */
2044
 
2045
 
2046
/* In gcc, use __builtin_expect to minimize impact of checks */
2047
#if !INSECURE
2048
//#if defined(__GNUC__) && __GNUC__ >= 3
2049
//#define RTCHECK(e)  __builtin_expect(e, 1)
2050
//#else /* GNUC */
2051
#define RTCHECK(e)  (e)
2052
//#endif /* GNUC */
2053
//#else /* !INSECURE */
2054
//#define RTCHECK(e)  (1)
2055
#endif /* !INSECURE */
2056
 
2057
/* macros to set up inuse chunks with or without footers */
2058
 
2059
#if !FOOTERS
2060
 
2061
#define mark_inuse_foot(M,p,s)
2062
 
2063
/* Set cinuse bit and pinuse bit of next chunk */
2064
#define set_inuse(M,p,s)\
2065
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2066
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2067
 
2068
/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2069
#define set_inuse_and_pinuse(M,p,s)\
2070
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2071
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2072
 
2073
/* Set size, cinuse and pinuse bit of this chunk */
2074
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2075
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2076
 
2077
#else /* FOOTERS */
2078
 
2079
/* Set foot of inuse chunk to be xor of mstate and seed */
2080
#define mark_inuse_foot(M,p,s)\
2081
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2082
 
2083
#define get_mstate_for(p)\
2084
  ((mstate)(((mchunkptr)((char*)(p) +\
2085
    (chunksize(p))))->prev_foot ^ mparams.magic))
2086
 
2087
#define set_inuse(M,p,s)\
2088
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2089
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2090
  mark_inuse_foot(M,p,s))
2091
 
2092
#define set_inuse_and_pinuse(M,p,s)\
2093
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2094
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2095
 mark_inuse_foot(M,p,s))
2096
 
2097
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2098
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2099
  mark_inuse_foot(M, p, s))
2100
 
2101
#endif /* !FOOTERS */
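/*
  Sketch only (guarded out of the build): how the FOOTERS cross-check is
  typically applied on entry to free/realloc when FOOTERS is enabled.
  Assumes `mem' was returned by some mspace of this allocator.
*/
#if 0
static int example_footer_ok(void* mem) {
  mchunkptr p = mem2chunk(mem);
  mstate fm = get_mstate_for(p);   /* footer xor magic -> owning mstate */
  return ok_magic(fm);             /* compare against mparams.magic     */
}
#endif /* 0 */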
2102
 
2103
/* ---------------------------- setting mparams -------------------------- */
2104
 
2105
/* Initialize mparams */
2106
static int init_mparams(void) {
2107
  if (mparams.page_size == 0) {
2108
    size_t s;
2109
 
2110
    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2111
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2112
#if MORECORE_CONTIGUOUS
2113
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2114
#else  /* MORECORE_CONTIGUOUS */
2115
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2116
#endif /* MORECORE_CONTIGUOUS */
2117
 
2118
#if (FOOTERS && !INSECURE)
2119
    {
2120
#if USE_DEV_RANDOM
2121
      int fd;
2122
      unsigned char buf[sizeof(size_t)];
2123
      /* Try to use /dev/urandom, else fall back on using time */
2124
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2125
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2126
        s = *((size_t *) buf);
2127
        close(fd);
2128
      }
2129
      else
2130
#endif /* USE_DEV_RANDOM */
2131
        s = (size_t)(time(0) ^ (size_t)0x55555555U);
2132
 
2133
      s |= (size_t)8U;    /* ensure nonzero */
2134
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */
2135
 
2136
    }
2137
#else /* (FOOTERS && !INSECURE) */
2138
    s = (size_t)0x58585858U;
2139
#endif /* (FOOTERS && !INSECURE) */
2140
    ACQUIRE_MAGIC_INIT_LOCK();
2141
    if (mparams.magic == 0) {
2142
      mparams.magic = s;
2143
      /* Set up lock for main malloc area */
2144
      INITIAL_LOCK(&gm->mutex);
2145
      gm->mflags = mparams.default_mflags;
2146
    }
2147
    RELEASE_MAGIC_INIT_LOCK();
2148
 
2149
//#ifndef WIN32
2150
//    mparams.page_size = 4096;
2151
//    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2152
//                           DEFAULT_GRANULARITY : mparams.page_size);
2153
//#else /* WIN32 */
2154
//    {
2155
      mparams.page_size = 4096;
2156
      mparams.granularity = 16384;
2157
//    }
2158
//#endif /* WIN32 */
2159
 
2160
    /* Sanity-check configuration:
2161
       size_t must be unsigned and as wide as pointer type.
2162
       ints must be at least 4 bytes.
2163
       alignment must be at least 8.
2164
       Alignment, min chunk size, and page size must all be powers of 2.
2165
    */
2166
    if ((sizeof(size_t) != sizeof(char*)) ||
2167
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
2168
        (sizeof(int) < 4)  ||
2169
        (MALLOC_ALIGNMENT < (size_t)8U) ||
2170
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
2171
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
2172
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2173
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
2174
      ABORT;
2175
  }
2176
  return 0;
2177
}
2178
 
2179
/* support for mallopt */
2180
static int change_mparam(int param_number, int value) {
2181
  size_t val = (size_t)value;
2182
  init_mparams();
2183
  switch(param_number) {
2184
  case M_TRIM_THRESHOLD:
2185
    mparams.trim_threshold = val;
2186
    return 1;
2187
  case M_GRANULARITY:
2188
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2189
      mparams.granularity = val;
2190
      return 1;
2191
    }
2192
    else
2193
      return 0;
2194
  case M_MMAP_THRESHOLD:
2195
    mparams.mmap_threshold = val;
2196
    return 1;
2197
  default:
2198
    return 0;
2199
  }
2200
}
2201
 
2202
#ifdef DEBUG
2203
#endif /* DEBUG */
2204
 
2205
/* ----------------------------- statistics ------------------------------ */
2206
 
2207
#if !NO_MALLINFO
2208
#endif /* !NO_MALLINFO */
2209
 
2210
/* ----------------------- Operations on smallbins ----------------------- */
2211
 
2212
/*
2213
  Various forms of linking and unlinking are defined as macros.  Even
2214
  the ones for trees, which are very long but have very short typical
2215
  paths.  This is ugly but reduces reliance on inlining support of
2216
  compilers.
2217
*/
2218
 
2219
/* Link a free chunk into a smallbin  */
2220
#define insert_small_chunk(M, P, S) {\
2221
  bindex_t I  = small_index(S);\
2222
  mchunkptr B = smallbin_at(M, I);\
2223
  mchunkptr F = B;\
2224
  assert(S >= MIN_CHUNK_SIZE);\
2225
  if (!smallmap_is_marked(M, I))\
2226
    mark_smallmap(M, I);\
2227
  else if (RTCHECK(ok_address(M, B->fd)))\
2228
    F = B->fd;\
2229
  else {\
2230
    CORRUPTION_ERROR_ACTION(M);\
2231
  }\
2232
  B->fd = P;\
2233
  F->bk = P;\
2234
  P->fd = F;\
2235
  P->bk = B;\
2236
}
2237
 
2238
/* Unlink a chunk from a smallbin  */
2239
#define unlink_small_chunk(M, P, S) {\
2240
  mchunkptr F = P->fd;\
2241
  mchunkptr B = P->bk;\
2242
  bindex_t I = small_index(S);\
2243
  assert(P != B);\
2244
  assert(P != F);\
2245
  assert(chunksize(P) == small_index2size(I));\
2246
  if (F == B)\
2247
    clear_smallmap(M, I);\
2248
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2249
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2250
    F->bk = B;\
2251
    B->fd = F;\
2252
  }\
2253
  else {\
2254
    CORRUPTION_ERROR_ACTION(M);\
2255
  }\
2256
}
2257
 
2258
/* Unlink the first chunk from a smallbin */
2259
#define unlink_first_small_chunk(M, B, P, I) {\
2260
  mchunkptr F = P->fd;\
2261
  assert(P != B);\
2262
  assert(P != F);\
2263
  assert(chunksize(P) == small_index2size(I));\
2264
  if (B == F)\
2265
    clear_smallmap(M, I);\
2266
  else if (RTCHECK(ok_address(M, F))) {\
2267
    B->fd = F;\
2268
    F->bk = B;\
2269
  }\
2270
  else {\
2271
    CORRUPTION_ERROR_ACTION(M);\
2272
  }\
2273
}
2274
 
2275
/* Replace dv node, binning the old one */
2276
/* Used only when dvsize known to be small */
2277
#define replace_dv(M, P, S) {\
2278
  size_t DVS = M->dvsize;\
2279
  if (DVS != 0) {\
2280
    mchunkptr DV = M->dv;\
2281
    assert(is_small(DVS));\
2282
    insert_small_chunk(M, DV, DVS);\
2283
  }\
2284
  M->dvsize = S;\
2285
  M->dv = P;\
2286
}
2287
 
2288
/* ------------------------- Operations on trees ------------------------- */
2289
 
2290
/* Insert chunk into tree */
2291
#define insert_large_chunk(M, X, S) {\
2292
  tbinptr* H;\
2293
  bindex_t I;\
2294
  compute_tree_index(S, I);\
2295
  H = treebin_at(M, I);\
2296
  X->index = I;\
2297
  X->child[0] = X->child[1] = 0;\
2298
  if (!treemap_is_marked(M, I)) {\
2299
    mark_treemap(M, I);\
2300
    *H = X;\
2301
    X->parent = (tchunkptr)H;\
2302
    X->fd = X->bk = X;\
2303
  }\
2304
  else {\
2305
    tchunkptr T = *H;\
2306
    size_t K = S << leftshift_for_tree_index(I);\
2307
    for (;;) {\
2308
      if (chunksize(T) != S) {\
2309
        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2310
        K <<= 1;\
2311
        if (*C != 0)\
2312
          T = *C;\
2313
        else if (RTCHECK(ok_address(M, C))) {\
2314
          *C = X;\
2315
          X->parent = T;\
2316
          X->fd = X->bk = X;\
2317
          break;\
2318
        }\
2319
        else {\
2320
          CORRUPTION_ERROR_ACTION(M);\
2321
          break;\
2322
        }\
2323
      }\
2324
      else {\
2325
        tchunkptr F = T->fd;\
2326
        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
2327
          T->fd = F->bk = X;\
2328
          X->fd = F;\
2329
          X->bk = T;\
2330
          X->parent = 0;\
2331
          break;\
2332
        }\
2333
        else {\
2334
          CORRUPTION_ERROR_ACTION(M);\
2335
          break;\
2336
        }\
2337
      }\
2338
    }\
2339
  }\
2340
}
2341
 
2342
/*
2343
  Unlink steps:
2344
 
2345
  1. If x is a chained node, unlink it from its same-sized fd/bk links
2346
     and choose its bk node as its replacement.
2347
  2. If x was the last node of its size, but not a leaf node, it must
2348
     be replaced with a leaf node (not merely one with an open left or
2349
     right), to make sure that lefts and rights of descendents
2350
     correspond properly to bit masks.  We use the rightmost descendent
2351
     of x.  We could use any other leaf, but this is easy to locate and
2352
     tends to counteract removal of leftmosts elsewhere, and so keeps
2353
     paths shorter than minimally guaranteed.  This doesn't loop much
2354
     because on average a node in a tree is near the bottom.
2355
  3. If x is the base of a chain (i.e., has parent links) relink
2356
     x's parent and children to x's replacement (or null if none).
2357
*/
2358
 
2359
#define unlink_large_chunk(M, X) {\
2360
  tchunkptr XP = X->parent;\
2361
  tchunkptr R;\
2362
  if (X->bk != X) {\
2363
    tchunkptr F = X->fd;\
2364
    R = X->bk;\
2365
    if (RTCHECK(ok_address(M, F))) {\
2366
      F->bk = R;\
2367
      R->fd = F;\
2368
    }\
2369
    else {\
2370
      CORRUPTION_ERROR_ACTION(M);\
2371
    }\
2372
  }\
2373
  else {\
2374
    tchunkptr* RP;\
2375
    if (((R = *(RP = &(X->child[1]))) != 0) ||\
2376
        ((R = *(RP = &(X->child[0]))) != 0)) {\
2377
      tchunkptr* CP;\
2378
      while ((*(CP = &(R->child[1])) != 0) ||\
2379
             (*(CP = &(R->child[0])) != 0)) {\
2380
        R = *(RP = CP);\
2381
      }\
2382
      if (RTCHECK(ok_address(M, RP)))\
2383
        *RP = 0;\
2384
      else {\
2385
        CORRUPTION_ERROR_ACTION(M);\
2386
      }\
2387
    }\
2388
  }\
2389
  if (XP != 0) {\
2390
    tbinptr* H = treebin_at(M, X->index);\
2391
    if (X == *H) {\
2392
      if ((*H = R) == 0) \
2393
        clear_treemap(M, X->index);\
2394
    }\
2395
    else if (RTCHECK(ok_address(M, XP))) {\
2396
      if (XP->child[0] == X) \
2397
        XP->child[0] = R;\
2398
      else \
2399
        XP->child[1] = R;\
2400
    }\
2401
    else\
2402
      CORRUPTION_ERROR_ACTION(M);\
2403
    if (R != 0) {\
2404
      if (RTCHECK(ok_address(M, R))) {\
2405
        tchunkptr C0, C1;\
2406
        R->parent = XP;\
2407
        if ((C0 = X->child[0]) != 0) {\
2408
          if (RTCHECK(ok_address(M, C0))) {\
2409
            R->child[0] = C0;\
2410
            C0->parent = R;\
2411
          }\
2412
          else\
2413
            CORRUPTION_ERROR_ACTION(M);\
2414
        }\
2415
        if ((C1 = X->child[1]) != 0) {\
2416
          if (RTCHECK(ok_address(M, C1))) {\
2417
            R->child[1] = C1;\
2418
            C1->parent = R;\
2419
          }\
2420
          else\
2421
            CORRUPTION_ERROR_ACTION(M);\
2422
        }\
2423
      }\
2424
      else\
2425
        CORRUPTION_ERROR_ACTION(M);\
2426
    }\
2427
  }\
2428
}
2429
 
2430
/* Relays to large vs small bin operations */
2431
 
2432
#define insert_chunk(M, P, S)\
2433
  if (is_small(S)) insert_small_chunk(M, P, S)\
2434
  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
2435
 
2436
#define unlink_chunk(M, P, S)\
2437
  if (is_small(S)) unlink_small_chunk(M, P, S)\
2438
  else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
2439
 
2440
 
2441
/* Relays to internal calls to malloc/free from realloc, memalign etc */
2442
 
2443
#if ONLY_MSPACES
2444
#define internal_malloc(m, b) mspace_malloc(m, b)
2445
#define internal_free(m, mem) mspace_free(m,mem);
2446
#else /* ONLY_MSPACES */
2447
#if MSPACES
2448
#define internal_malloc(m, b)\
2449
   (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
2450
#define internal_free(m, mem)\
2451
   if (m == gm) dlfree(mem); else mspace_free(m,mem);
2452
#else /* MSPACES */
2453
#define internal_malloc(m, b) dlmalloc(b)
2454
#define internal_free(m, mem) dlfree(mem)
2455
#endif /* MSPACES */
2456
#endif /* ONLY_MSPACES */
2457
 
2458
/* -----------------------  Direct-mmapping chunks ----------------------- */
2459
 
2460
/*
2461
  Directly mmapped chunks are set up with an offset to the start of
2462
  the mmapped region stored in the prev_foot field of the chunk. This
2463
  allows reconstruction of the required argument to MUNMAP when freed,
2464
  and also allows adjustment of the returned chunk to meet alignment
2465
  requirements (especially in memalign).  There is also enough space
2466
  allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
2467
  the PINUSE bit so frees can be checked.
2468
*/
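/*
  Worked example (a sketch, assuming 4-byte size_t and the 16384-byte
  granularity set in init_mparams): for a padded request nb == 40000,
  mmap_alloc below asks for granularity_align(40000 + 24 + 7) == 49152
  bytes; with a zero alignment offset, the chunk itself spans
  49152 - MMAP_FOOT_PAD == 49136 bytes, and the trailing two words form
  the fake next chunk whose head is FENCEPOST_HEAD.
*/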
2469
 
2470
/* Malloc using mmap */
2471
static void* mmap_alloc(mstate m, size_t nb) {
2472
  size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2473
  if (mmsize > nb) {     /* Check for wrap around 0 */
2474
    char* mm = (char*)(gui_ksys_mem_alloc(mmsize));
2475
    if (mm != CMFAIL) {
2476
      size_t offset = align_offset(chunk2mem(mm));
2477
      size_t psize = mmsize - offset - MMAP_FOOT_PAD;
2478
      mchunkptr p = (mchunkptr)(mm + offset);
2479
      p->prev_foot = offset | IS_MMAPPED_BIT;
2480
      (p)->head = (psize|CINUSE_BIT);
2481
      mark_inuse_foot(m, p, psize);
2482
      chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
2483
      chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
2484
 
2485
      if (mm < m->least_addr)
2486
        m->least_addr = mm;
2487
      if ((m->footprint += mmsize) > m->max_footprint)
2488
        m->max_footprint = m->footprint;
2489
      assert(is_aligned(chunk2mem(p)));
2490
      check_mmapped_chunk(m, p);
2491
      return chunk2mem(p);
2492
    }
2493
  }
2494
  return 0;
2495
}
2496
 
2497
/* Realloc using mmap */
2498
static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
2499
  size_t oldsize = chunksize(oldp);
2500
  if (is_small(nb)) /* Can't shrink mmap regions below small size */
2501
    return 0;
2502
  /* Keep old chunk if big enough but not too big */
2503
  if (oldsize >= nb + SIZE_T_SIZE &&
2504
      (oldsize - nb) <= (mparams.granularity << 1))
2505
    return oldp;
2506
  else {
2507
    size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
2508
    size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
2509
    size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
2510
                                         CHUNK_ALIGN_MASK);
2511
    char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
2512
                                  oldmmsize, newmmsize, 1);
2513
    if (cp != CMFAIL) {
2514
      mchunkptr newp = (mchunkptr)(cp + offset);
2515
      size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
2516
      newp->head = (psize|CINUSE_BIT);
2517
      mark_inuse_foot(m, newp, psize);
2518
      chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
2519
      chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
2520
 
2521
      if (cp < m->least_addr)
2522
        m->least_addr = cp;
2523
      if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
2524
        m->max_footprint = m->footprint;
2525
      check_mmapped_chunk(m, newp);
2526
      return newp;
2527
    }
2528
  }
2529
  return 0;
2530
}
2531
 
2532
/* -------------------------- mspace management -------------------------- */
2533
 
2534
/* Initialize top chunk and its size */
2535
static void init_top(mstate m, mchunkptr p, size_t psize) {
2536
  /* Ensure alignment */
2537
  size_t offset = align_offset(chunk2mem(p));
2538
  p = (mchunkptr)((char*)p + offset);
2539
  psize -= offset;
2540
 
2541
  m->top = p;
2542
  m->topsize = psize;
2543
  p->head = psize | PINUSE_BIT;
2544
  /* set size of fake trailing chunk holding overhead space only once */
2545
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
2546
  m->trim_check = mparams.trim_threshold; /* reset on each update */
2547
}
2548
 
2549
/* Initialize bins for a new mstate that is otherwise zeroed out */
2550
static void init_bins(mstate m) {
2551
  /* Establish circular links for smallbins */
2552
  bindex_t i;
2553
  for (i = 0; i < NSMALLBINS; ++i) {
2554
    sbinptr bin = smallbin_at(m,i);
2555
    bin->fd = bin->bk = bin;
2556
  }
2557
}
2558
 
2559
#if PROCEED_ON_ERROR
2560
 
2561
/* default corruption action */
2562
static void reset_on_error(mstate m) {
2563
  int i;
2564
  ++malloc_corruption_error_count;
2565
  /* Reinitialize fields to forget about all memory */
2566
  m->smallbins = m->treebins = 0;
2567
  m->dvsize = m->topsize = 0;
2568
  m->seg.base = 0;
2569
  m->seg.size = 0;
2570
  m->seg.next = 0;
2571
  m->top = m->dv = 0;
2572
  for (i = 0; i < NTREEBINS; ++i)
2573
    *treebin_at(m, i) = 0;
2574
  init_bins(m);
2575
}
2576
#endif /* PROCEED_ON_ERROR */
2577
 
2578
/* Allocate chunk and prepend remainder with chunk in successor base. */
2579
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
2580
                           size_t nb) {
2581
  mchunkptr p = align_as_chunk(newbase);
2582
  mchunkptr oldfirst = align_as_chunk(oldbase);
2583
  size_t psize = (char*)oldfirst - (char*)p;
2584
  mchunkptr q = chunk_plus_offset(p, nb);
2585
  size_t qsize = psize - nb;
2586
  set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2587
 
2588
  assert((char*)oldfirst > (char*)q);
2589
  assert(pinuse(oldfirst));
2590
  assert(qsize >= MIN_CHUNK_SIZE);
2591
 
2592
  /* consolidate remainder with first chunk of old base */
2593
  if (oldfirst == m->top) {
2594
    size_t tsize = m->topsize += qsize;
2595
    m->top = q;
2596
    q->head = tsize | PINUSE_BIT;
2597
    check_top_chunk(m, q);
2598
  }
2599
  else if (oldfirst == m->dv) {
2600
    size_t dsize = m->dvsize += qsize;
2601
    m->dv = q;
2602
    set_size_and_pinuse_of_free_chunk(q, dsize);
2603
  }
2604
  else {
2605
    if (!cinuse(oldfirst)) {
2606
      size_t nsize = chunksize(oldfirst);
2607
      unlink_chunk(m, oldfirst, nsize);
2608
      oldfirst = chunk_plus_offset(oldfirst, nsize);
2609
      qsize += nsize;
2610
    }
2611
    set_free_with_pinuse(q, qsize, oldfirst);
2612
    insert_chunk(m, q, qsize);
2613
    check_free_chunk(m, q);
2614
  }
2615
 
2616
  check_malloced_chunk(m, chunk2mem(p), nb);
2617
  return chunk2mem(p);
2618
}
2619
 
2620
 
2621
/* Add a segment to hold a new noncontiguous region */
2622
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
2623
  /* Determine locations and sizes of segment, fenceposts, old top */
2624
  char* old_top = (char*)m->top;
2625
  msegmentptr oldsp = segment_holding(m, old_top);
2626
  char* old_end = oldsp->base + oldsp->size;
2627
  size_t ssize = pad_request(sizeof(struct malloc_segment));
2628
  char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
2629
  size_t offset = align_offset(chunk2mem(rawsp));
2630
  char* asp = rawsp + offset;
2631
  char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
2632
  mchunkptr sp = (mchunkptr)csp;
2633
  msegmentptr ss = (msegmentptr)(chunk2mem(sp));
2634
  mchunkptr tnext = chunk_plus_offset(sp, ssize);
2635
  mchunkptr p = tnext;
2636
  int nfences = 0;
2637
 
2638
  /* reset top to new space */
2639
  init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2640
 
2641
  /* Set up segment record */
2642
  assert(is_aligned(ss));
2643
  set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
2644
  *ss = m->seg; /* Push current record */
2645
  m->seg.base = tbase;
2646
  m->seg.size = tsize;
2647
  m->seg.sflags = mmapped;
2648
  m->seg.next = ss;
2649
 
2650
  /* Insert trailing fenceposts */
2651
  for (;;) {
2652
    mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
2653
    p->head = FENCEPOST_HEAD;
2654
    ++nfences;
2655
    if ((char*)(&(nextp->head)) < old_end)
2656
      p = nextp;
2657
    else
2658
      break;
2659
  }
2660
  assert(nfences >= 2);
2661
 
2662
  /* Insert the rest of old top into a bin as an ordinary free chunk */
2663
  if (csp != old_top) {
2664
    mchunkptr q = (mchunkptr)old_top;
2665
    size_t psize = csp - old_top;
2666
    mchunkptr tn = chunk_plus_offset(q, psize);
2667
    set_free_with_pinuse(q, psize, tn);
2668
    insert_chunk(m, q, psize);
2669
  }
2670
 
2671
  check_top_chunk(m, m->top);
2672
}
2673
 
2674
/* -------------------------- System allocation -------------------------- */
2675
 
2676
/* Get memory from system using MORECORE or MMAP */
2677
static void* sys_alloc(mstate m, size_t nb) {
2678
  char* tbase = CMFAIL;
2679
  size_t tsize = 0;
2680
  flag_t mmap_flag = 0;
2681
 
2682
  init_mparams();
2683
 
2684
  /* Directly map large chunks */
2685
  if (use_mmap(m) && nb >= mparams.mmap_threshold) {
2686
    void* mem = mmap_alloc(m, nb);
2687
    if (mem != 0)
2688
      return mem;
2689
  }
2690
 
2691
  /*
2692
    Try getting memory in any of three ways (in most-preferred to
2693
    least-preferred order):
2694
    1. A call to MORECORE that can normally contiguously extend memory.
2695
       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
2696
       main space is mmapped or a previous contiguous call failed)
2697
    2. A call to MMAP new space (disabled if not HAVE_MMAP).
2698
       Note that under the default settings, if MORECORE is unable to
2699
       fulfill a request, and HAVE_MMAP is true, then mmap is
2700
       used as a noncontiguous system allocator. This is a useful backup
2701
       strategy for systems with holes in address spaces -- in this case
2702
       sbrk cannot contiguously expand the heap, but mmap may be able to
2703
       find space.
2704
    3. A call to MORECORE that cannot usually contiguously extend memory.
2705
       (disabled if not HAVE_MORECORE)
2706
  */
2707
 
2708
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
2709
    char* br = CMFAIL;
2710
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
2711
    size_t asize = 0;
2712
    ACQUIRE_MORECORE_LOCK();
2713
 
2714
    if (ss == 0) {  /* First time through or recovery */
2715
      char* base = (char*)CALL_MORECORE(0);
2716
      if (base != CMFAIL) {
2717
        asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2718
        /* Adjust to end on a page boundary */
2719
        if (!is_page_aligned(base))
2720
          asize += (page_align((size_t)base) - (size_t)base);
2721
        /* Can't call MORECORE if size is negative when treated as signed */
2722
        if (asize < HALF_MAX_SIZE_T &&
2723
            (br = (char*)(CALL_MORECORE(asize))) == base) {
2724
          tbase = base;
2725
          tsize = asize;
2726
        }
2727
      }
2728
    }
2729
    else {
2730
      /* Subtract out existing available top space from MORECORE request. */
2731
      asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
2732
      /* Use mem here only if it did continuously extend old space */
2733
      if (asize < HALF_MAX_SIZE_T &&
2734
          (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
2735
        tbase = br;
2736
        tsize = asize;
2737
      }
2738
    }
2739
 
2740
    if (tbase == CMFAIL) {    /* Cope with partial failure */
2741
      if (br != CMFAIL) {    /* Try to use/extend the space we did get */
2742
        if (asize < HALF_MAX_SIZE_T &&
2743
            asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
2744
          size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
2745
          if (esize < HALF_MAX_SIZE_T) {
2746
            char* end = (char*)CALL_MORECORE(esize);
2747
            if (end != CMFAIL)
2748
              asize += esize;
2749
            else {            /* Can't use; try to release */
2750
              CALL_MORECORE(-asize);
2751
              br = CMFAIL;
2752
            }
2753
          }
2754
        }
2755
      }
2756
      if (br != CMFAIL) {    /* Use the space we did get */
2757
        tbase = br;
2758
        tsize = asize;
2759
      }
2760
      else
2761
        disable_contiguous(m); /* Don't try contiguous path in the future */
2762
    }
2763
 
2764
    RELEASE_MORECORE_LOCK();
2765
  }
2766
 
2767
  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
2768
    size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
2769
    size_t rsize = granularity_align(req);
2770
    if (rsize > nb) { /* Fail if wraps around zero */
2771
      char* mp = (char*)(gui_ksys_mem_alloc(rsize));
2772
      if (mp != CMFAIL) {
2773
        tbase = mp;
2774
        tsize = rsize;
2775
        mmap_flag = IS_MMAPPED_BIT;
2776
      }
2777
    }
2778
  }
2779
 
2780
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
2781
    size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
2782
    if (asize < HALF_MAX_SIZE_T) {
2783
      char* br = CMFAIL;
2784
      char* end = CMFAIL;
2785
      ACQUIRE_MORECORE_LOCK();
2786
      br = (char*)(CALL_MORECORE(asize));
2787
      end = (char*)(CALL_MORECORE(0));
2788
      RELEASE_MORECORE_LOCK();
2789
      if (br != CMFAIL && end != CMFAIL && br < end) {
2790
        size_t ssize = end - br;
2791
        if (ssize > nb + TOP_FOOT_SIZE) {
2792
          tbase = br;
2793
          tsize = ssize;
2794
        }
2795
      }
2796
    }
2797
  }
2798
 
2799
  if (tbase != CMFAIL) {
2800
 
2801
    if ((m->footprint += tsize) > m->max_footprint)
2802
      m->max_footprint = m->footprint;
2803
 
2804
    if (!is_initialized(m)) { /* first-time initialization */
2805
      m->seg.base = m->least_addr = tbase;
2806
      m->seg.size = tsize;
2807
      m->seg.sflags = mmap_flag;
2808
      m->magic = mparams.magic;
2809
      init_bins(m);
2810
      if (is_global(m))
2811
        init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
2812
      else {
2813
        /* Offset top by embedded malloc_state */
2814
        mchunkptr mn = next_chunk(mem2chunk(m));
2815
        init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
2816
      }
2817
    }
2818
 
2819
    else {
2820
      /* Try to merge with an existing segment */
2821
      msegmentptr sp = &m->seg;
2822
      while (sp != 0 && tbase != sp->base + sp->size)
2823
        sp = sp->next;
2824
      if (sp != 0 &&
2825
          !is_extern_segment(sp) &&
2826
          (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
2827
          segment_holds(sp, m->top)) { /* append */
2828
        sp->size += tsize;
2829
        init_top(m, m->top, m->topsize + tsize);
2830
      }
2831
      else {
2832
        if (tbase < m->least_addr)
2833
          m->least_addr = tbase;
2834
        sp = &m->seg;
2835
        while (sp != 0 && sp->base != tbase + tsize)
2836
          sp = sp->next;
2837
        if (sp != 0 &&
2838
            !is_extern_segment(sp) &&
2839
            (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
2840
          char* oldbase = sp->base;
2841
          sp->base = tbase;
2842
          sp->size += tsize;
2843
          return prepend_alloc(m, tbase, oldbase, nb);
2844
        }
2845
        else
2846
          add_segment(m, tbase, tsize, mmap_flag);
2847
      }
2848
    }
2849
 
2850
    if (nb < m->topsize) { /* Allocate from new or extended top space */
2851
      size_t rsize = m->topsize -= nb;
2852
      mchunkptr p = m->top;
2853
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
2854
      r->head = rsize | PINUSE_BIT;
2855
      set_size_and_pinuse_of_inuse_chunk(m, p, nb);
2856
      check_top_chunk(m, m->top);
2857
      check_malloced_chunk(m, chunk2mem(p), nb);
2858
      return chunk2mem(p);
2859
    }
2860
  }
2861
 
2862
  MALLOC_FAILURE_ACTION;
2863
  return 0;
2864
}
2865
 
2866
/* -----------------------  system deallocation -------------------------- */
2867
 
2868
/* Unmap and unlink any mmapped segments that don't contain used chunks */
2869
static size_t release_unused_segments(mstate m) {
2870
  size_t released = 0;
2871
  msegmentptr pred = &m->seg;
2872
  msegmentptr sp = pred->next;
2873
  while (sp != 0) {
2874
    char* base = sp->base;
2875
    size_t size = sp->size;
2876
    msegmentptr next = sp->next;
2877
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
2878
      mchunkptr p = align_as_chunk(base);
2879
      size_t psize = chunksize(p);
2880
      /* Can unmap if first chunk holds entire segment and not pinned */
2881
      if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
2882
        tchunkptr tp = (tchunkptr)p;
2883
        assert(segment_holds(sp, (char*)sp));
2884
        if (p == m->dv) {
2885
          m->dv = 0;
2886
          m->dvsize = 0;
2887
        }
2888
        else {
2889
          unlink_large_chunk(m, tp);
2890
        }
2891
        if (gui_ksys_mem_free(base, size) == 0) {
2892
          released += size;
2893
          m->footprint -= size;
2894
          /* unlink obsoleted record */
2895
          sp = pred;
2896
          sp->next = next;
2897
        }
2898
        else { /* back out if cannot unmap */
2899
          insert_large_chunk(m, tp, psize);
2900
        }
2901
      }
2902
    }
2903
    pred = sp;
2904
    sp = next;
2905
  }
2906
  return released;
2907
}
2908
 
2909
static int sys_trim(mstate m, size_t pad) {
2910
  size_t released = 0;
2911
  if (pad < MAX_REQUEST && is_initialized(m)) {
2912
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
2913
 
2914
    if (m->topsize > pad) {
2915
      /* Shrink top space in granularity-size units, keeping at least one */
2916
      size_t unit = mparams.granularity;
2917
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
2918
                      SIZE_T_ONE) * unit;
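      /*
        Illustrative arithmetic (an added note, not from the original
        sources): with a 64K granularity, if m->topsize - pad is 200K the
        expression above yields (ceil(200K/64K) - 1) * 64K = 192K, so top
        is trimmed in whole granularity units and is never shrunk to zero.
      */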
2919
      msegmentptr sp = segment_holding(m, (char*)m->top);
2920
 
2921
      if (!is_extern_segment(sp)) {
2922
        if (is_mmapped_segment(sp)) {
2923
          if (HAVE_MMAP &&
2924
              sp->size >= extra &&
2925
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
2926
            size_t newsize = sp->size - extra;
2927
            /* Prefer mremap, fall back to munmap */
2928
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
2929
                (gui_ksys_mem_free(sp->base + newsize, extra) == 0)) {
2930
              released = extra;
2931
            }
2932
          }
2933
        }
2934
        else if (HAVE_MORECORE) {
2935
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
2936
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
2937
          ACQUIRE_MORECORE_LOCK();
2938
          {
2939
            /* Make sure end of memory is where we last set it. */
2940
            char* old_br = (char*)(CALL_MORECORE(0));
2941
            if (old_br == sp->base + sp->size) {
2942
              char* rel_br = (char*)(CALL_MORECORE(-extra));
2943
              char* new_br = (char*)(CALL_MORECORE(0));
2944
              if (rel_br != CMFAIL && new_br < old_br)
2945
                released = old_br - new_br;
2946
            }
2947
          }
2948
          RELEASE_MORECORE_LOCK();
2949
        }
2950
      }
2951
 
2952
      if (released != 0) {
2953
        sp->size -= released;
2954
        m->footprint -= released;
2955
        init_top(m, m->top, m->topsize - released);
2956
        check_top_chunk(m, m->top);
2957
      }
2958
    }
2959
 
2960
    /* Unmap any unused mmapped segments */
2961
    if (HAVE_MMAP)
2962
      released += release_unused_segments(m);
2963
 
2964
    /* On failure, disable autotrim to avoid repeated failed future calls */
2965
    if (released == 0)
2966
      m->trim_check = MAX_SIZE_T;
2967
  }
2968
 
2969
  return (released != 0)? 1 : 0;
2970
}
2971
 
2972
/* ---------------------------- malloc support --------------------------- */
2973
 
2974
/* allocate a large request from the best fitting chunk in a treebin */
2975
static void* tmalloc_large(mstate m, size_t nb) {
2976
  tchunkptr v = 0;
2977
  size_t rsize = -nb; /* Unsigned negation */
2978
  tchunkptr t;
2979
  bindex_t idx;
2980
  compute_tree_index(nb, idx);
2981
 
2982
  if ((t = *treebin_at(m, idx)) != 0) {
2983
    /* Traverse tree for this bin looking for node with size == nb */
2984
    size_t sizebits = nb << leftshift_for_tree_index(idx);
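    /*
      Added note: in the descent below, the most significant bit of
      sizebits picks child[0] or child[1] at each level, so the walk
      follows the bits of nb just below this bin's size prefix.
    */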
2985
    tchunkptr rst = 0;  /* The deepest untaken right subtree */
2986
    for (;;) {
2987
      tchunkptr rt;
2988
      size_t trem = chunksize(t) - nb;
2989
      if (trem < rsize) {
2990
        v = t;
2991
        if ((rsize = trem) == 0)
2992
          break;
2993
      }
2994
      rt = t->child[1];
2995
      t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2996
      if (rt != 0 && rt != t)
2997
        rst = rt;
2998
      if (t == 0) {
2999
        t = rst; /* set t to least subtree holding sizes > nb */
3000
        break;
3001
      }
3002
      sizebits <<= 1;
3003
    }
3004
  }
3005
 
3006
  if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3007
    binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3008
    if (leftbits != 0) {
3009
      bindex_t i;
3010
      binmap_t leastbit = least_bit(leftbits);
3011
      compute_bit2idx(leastbit, i);
3012
      t = *treebin_at(m, i);
3013
    }
3014
  }
3015
 
3016
  while (t != 0) { /* find smallest of tree or subtree */
3017
    size_t trem = chunksize(t) - nb;
3018
    if (trem < rsize) {
3019
      rsize = trem;
3020
      v = t;
3021
    }
3022
    t = leftmost_child(t);
3023
  }
3024
 
3025
  /*  If dv is a better fit, return 0 so malloc will use it */
3026
  if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3027
    if (RTCHECK(ok_address(m, v))) { /* split */
3028
      mchunkptr r = chunk_plus_offset(v, nb);
3029
      assert(chunksize(v) == rsize + nb);
3030
      if (RTCHECK(ok_next(v, r))) {
3031
        unlink_large_chunk(m, v);
3032
        if (rsize < MIN_CHUNK_SIZE)
3033
          set_inuse_and_pinuse(m, v, (rsize + nb));
3034
        else {
3035
          set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3036
          set_size_and_pinuse_of_free_chunk(r, rsize);
3037
          insert_chunk(m, r, rsize);
3038
        }
3039
        return chunk2mem(v);
3040
      }
3041
    }
3042
    CORRUPTION_ERROR_ACTION(m);
3043
  }
3044
  return 0;
3045
}
3046
 
3047
/* allocate a small request from the best fitting chunk in a treebin */
3048
static void* tmalloc_small(mstate m, size_t nb) {
3049
  tchunkptr t, v;
3050
  size_t rsize;
3051
  bindex_t i;
3052
  binmap_t leastbit = least_bit(m->treemap);
3053
  compute_bit2idx(leastbit, i);
3054
 
3055
  v = t = *treebin_at(m, i);
3056
  rsize = chunksize(t) - nb;
3057
 
3058
  while ((t = leftmost_child(t)) != 0) {
3059
    size_t trem = chunksize(t) - nb;
3060
    if (trem < rsize) {
3061
      rsize = trem;
3062
      v = t;
3063
    }
3064
  }
3065
 
3066
  if (RTCHECK(ok_address(m, v))) {
3067
    mchunkptr r = chunk_plus_offset(v, nb);
3068
    assert(chunksize(v) == rsize + nb);
3069
    if (RTCHECK(ok_next(v, r))) {
3070
      unlink_large_chunk(m, v);
3071
      if (rsize < MIN_CHUNK_SIZE)
3072
        set_inuse_and_pinuse(m, v, (rsize + nb));
3073
      else {
3074
        set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3075
        set_size_and_pinuse_of_free_chunk(r, rsize);
3076
        replace_dv(m, r, rsize);
3077
      }
3078
      return chunk2mem(v);
3079
    }
3080
  }
3081
 
3082
  CORRUPTION_ERROR_ACTION(m);
3083
  return 0;
3084
}
3085
 
3086
/* --------------------------- realloc support --------------------------- */
3087
 
3088
static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3089
  if (bytes >= MAX_REQUEST) {
3090
    MALLOC_FAILURE_ACTION;
3091
    return 0;
3092
  }
3093
  if (!PREACTION(m)) {
3094
    mchunkptr oldp = mem2chunk(oldmem);
3095
    size_t oldsize = chunksize(oldp);
3096
    mchunkptr next = chunk_plus_offset(oldp, oldsize);
3097
    mchunkptr newp = 0;
3098
    void* extra = 0;
3099
 
3100
    /* Try to either shrink or extend into top. Else malloc-copy-free */
3101
 
3102
    if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3103
                ok_next(oldp, next) && ok_pinuse(next))) {
3104
      size_t nb = request2size(bytes);
3105
      if (is_mmapped(oldp))
3106
        newp = mmap_resize(m, oldp, nb);
3107
      else if (oldsize >= nb) { /* already big enough */
3108
        size_t rsize = oldsize - nb;
3109
        newp = oldp;
3110
        if (rsize >= MIN_CHUNK_SIZE) {
3111
          mchunkptr remainder = chunk_plus_offset(newp, nb);
3112
          set_inuse(m, newp, nb);
3113
          set_inuse(m, remainder, rsize);
3114
          extra = chunk2mem(remainder);
3115
        }
3116
      }
3117
      else if (next == m->top && oldsize + m->topsize > nb) {
3118
        /* Expand into top */
3119
        size_t newsize = oldsize + m->topsize;
3120
        size_t newtopsize = newsize - nb;
3121
        mchunkptr newtop = chunk_plus_offset(oldp, nb);
3122
        set_inuse(m, oldp, nb);
3123
        newtop->head = newtopsize |PINUSE_BIT;
3124
        m->top = newtop;
3125
        m->topsize = newtopsize;
3126
        newp = oldp;
3127
      }
3128
    }
3129
    else {
3130
      USAGE_ERROR_ACTION(m, oldmem);
3131
      POSTACTION(m);
3132
      return 0;
3133
    }
3134
 
3135
    POSTACTION(m);
3136
 
3137
    if (newp != 0) {
3138
      if (extra != 0) {
3139
        internal_free(m, extra);
3140
      }
3141
      check_inuse_chunk(m, newp);
3142
      return chunk2mem(newp);
3143
    }
3144
    else {
3145
      void* newmem = internal_malloc(m, bytes);
3146
      if (newmem != 0) {
3147
        size_t oc = oldsize - overhead_for(oldp);
3148
        memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3149
        internal_free(m, oldmem);
3150
      }
3151
      return newmem;
3152
    }
3153
  }
3154
  return 0;
3155
}
3156
 
3157
/* --------------------------- memalign support -------------------------- */
3158
 
3159
static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3160
  if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3161
    return internal_malloc(m, bytes);
3162
  if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3163
    alignment = MIN_CHUNK_SIZE;
3164
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3165
    size_t a = MALLOC_ALIGNMENT << 1;
3166
    while (a < alignment) a <<= 1;
3167
    alignment = a;
3168
  }
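  /*
    Added example (illustrative only): a request such as
    internal_memalign(m, 24, n) proceeds with alignment 32, the next
    power of two that is at least 24 and at least MIN_CHUNK_SIZE.
  */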
3169
 
3170
  if (bytes >= MAX_REQUEST - alignment) {
3171
    if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3172
      MALLOC_FAILURE_ACTION;
3173
    }
3174
  }
3175
  else {
3176
    size_t nb = request2size(bytes);
3177
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3178
    char* mem = (char*)internal_malloc(m, req);
3179
    if (mem != 0) {
3180
      void* leader = 0;
3181
      void* trailer = 0;
3182
      mchunkptr p = mem2chunk(mem);
3183
 
3184
      if (PREACTION(m)) return 0;
3185
      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3186
        /*
3187
          Find an aligned spot inside chunk.  Since we need to give
3188
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3189
          the first calculation places us at a spot with less than
3190
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3191
          We've allocated enough total room so that this is always
3192
          possible.
3193
        */
3194
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3195
                                                       alignment -
3196
                                                       SIZE_T_ONE)) &
3197
                                             -alignment));
3198
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3199
          br : br+alignment;
3200
        mchunkptr newp = (mchunkptr)pos;
3201
        size_t leadsize = pos - (char*)(p);
3202
        size_t newsize = chunksize(p) - leadsize;
3203
 
3204
        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3205
          newp->prev_foot = p->prev_foot + leadsize;
3206
          newp->head = (newsize|CINUSE_BIT);
3207
        }
3208
        else { /* Otherwise, give back leader, use the rest */
3209
          set_inuse(m, newp, newsize);
3210
          set_inuse(m, p, leadsize);
3211
          leader = chunk2mem(p);
3212
        }
3213
        p = newp;
3214
      }
3215
 
3216
      /* Give back spare room at the end */
3217
      if (!is_mmapped(p)) {
3218
        size_t size = chunksize(p);
3219
        if (size > nb + MIN_CHUNK_SIZE) {
3220
          size_t remainder_size = size - nb;
3221
          mchunkptr remainder = chunk_plus_offset(p, nb);
3222
          set_inuse(m, p, nb);
3223
          set_inuse(m, remainder, remainder_size);
3224
          trailer = chunk2mem(remainder);
3225
        }
3226
      }
3227
 
3228
      assert (chunksize(p) >= nb);
3229
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3230
      check_inuse_chunk(m, p);
3231
      POSTACTION(m);
3232
      if (leader != 0) {
3233
        internal_free(m, leader);
3234
      }
3235
      if (trailer != 0) {
3236
        internal_free(m, trailer);
3237
      }
3238
      return chunk2mem(p);
3239
    }
3240
  }
3241
  return 0;
3242
}
3243
 
3244
/* ------------------------ comalloc/coalloc support --------------------- */
3245
 
3246
static void** ialloc(mstate m,
3247
                     size_t n_elements,
3248
                     size_t* sizes,
3249
                     int opts,
3250
                     void* chunks[]) {
3251
  /*
3252
    This provides common support for independent_X routines, handling
3253
    all of the combinations that can result.
3254
 
3255
    The opts arg has:
3256
    bit 0 set if all elements are same size (using sizes[0])
3257
    bit 1 set if elements should be zeroed
3258
  */
3259
 
3260
  size_t    element_size;   /* chunksize of each element, if all same */
3261
  size_t    contents_size;  /* total size of elements */
3262
  size_t    array_size;     /* request size of pointer array */
3263
  void*     mem;            /* malloced aggregate space */
3264
  mchunkptr p;              /* corresponding chunk */
3265
  size_t    remainder_size; /* remaining bytes while splitting */
3266
  void**    marray;         /* either "chunks" or malloced ptr array */
3267
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
3268
  flag_t    was_enabled;    /* to disable mmap */
3269
  size_t    size;
3270
  size_t    i;
3271
 
3272
  /* compute array length, if needed */
3273
  if (chunks != 0) {
3274
    if (n_elements == 0)
3275
      return chunks; /* nothing to do */
3276
    marray = chunks;
3277
    array_size = 0;
3278
  }
3279
  else {
3280
    /* if empty req, must still return chunk representing empty array */
3281
    if (n_elements == 0)
3282
      return (void**)internal_malloc(m, 0);
3283
    marray = 0;
3284
    array_size = request2size(n_elements * (sizeof(void*)));
3285
  }
3286
 
3287
  /* compute total element size */
3288
  if (opts & 0x1) { /* all-same-size */
3289
    element_size = request2size(*sizes);
3290
    contents_size = n_elements * element_size;
3291
  }
3292
  else { /* add up all the sizes */
3293
    element_size = 0;
3294
    contents_size = 0;
3295
    for (i = 0; i != n_elements; ++i)
3296
      contents_size += request2size(sizes[i]);
3297
  }
3298
 
3299
  size = contents_size + array_size;
3300
 
3301
  /*
3302
     Allocate the aggregate chunk.  First disable direct-mmapping so
3303
     malloc won't use it, since we would not be able to later
3304
     free/realloc space internal to a segregated mmap region.
3305
  */
3306
  was_enabled = use_mmap(m);
3307
  disable_mmap(m);
3308
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3309
  if (was_enabled)
3310
    enable_mmap(m);
3311
  if (mem == 0)
3312
    return 0;
3313
 
3314
  if (PREACTION(m)) return 0;
3315
  p = mem2chunk(mem);
3316
  remainder_size = chunksize(p);
3317
 
3318
  assert(!is_mmapped(p));
3319
 
3320
  if (opts & 0x2) {       /* optionally clear the elements */
3321
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3322
  }
3323
 
3324
  /* If not provided, allocate the pointer array as final part of chunk */
3325
  if (marray == 0) {
3326
    size_t  array_chunk_size;
3327
    array_chunk = chunk_plus_offset(p, contents_size);
3328
    array_chunk_size = remainder_size - contents_size;
3329
    marray = (void**) (chunk2mem(array_chunk));
3330
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
3331
    remainder_size = contents_size;
3332
  }
3333
 
3334
  /* split out elements */
3335
  for (i = 0; ; ++i) {
3336
    marray[i] = chunk2mem(p);
3337
    if (i != n_elements-1) {
3338
      if (element_size != 0)
3339
        size = element_size;
3340
      else
3341
        size = request2size(sizes[i]);
3342
      remainder_size -= size;
3343
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
3344
      p = chunk_plus_offset(p, size);
3345
    }
3346
    else { /* the final element absorbs any overallocation slop */
3347
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
3348
      break;
3349
    }
3350
  }
3351
 
3352
#ifdef DEBUG
3353
//  if (marray != chunks) {
3354
    /* final element must have exactly exhausted chunk */
3355
//    if (element_size != 0) {
3356
//      assert(remainder_size == element_size);
3357
//    }
3358
//    else {
3359
//      assert(remainder_size == request2size(sizes[i]));
3360
//    }
3361
//    check_inuse_chunk(m, mem2chunk(marray));
3362
//  }
3363
//  for (i = 0; i != n_elements; ++i)
3364
//    check_inuse_chunk(m, mem2chunk(marray[i]));
3365
 
3366
#endif /* DEBUG */
3367
 
3368
  POSTACTION(m);
3369
  return marray;
3370
}
3371
 
3372
 
3373
/* -------------------------- public routines ---------------------------- */
3374
 
3375
#if !ONLY_MSPACES
3376
 
3377
static void* dlmalloc(size_t bytes) {
3378
  /*
3379
     Basic algorithm:
3380
     If a small request (< 256 bytes minus per-chunk overhead):
3381
       1. If one exists, use a remainderless chunk in associated smallbin.
3382
          (Remainderless means that there are too few excess bytes to
3383
          represent as a chunk.)
3384
       2. If it is big enough, use the dv chunk, which is normally the
3385
          chunk adjacent to the one used for the most recent small request.
3386
       3. If one exists, split the smallest available chunk in a bin,
3387
          saving remainder in dv.
3388
       4. If it is big enough, use the top chunk.
3389
       5. If available, get memory from system and use it
3390
     Otherwise, for a large request:
3391
       1. Find the smallest available binned chunk that fits, and use it
3392
          if it is better fitting than dv chunk, splitting if necessary.
3393
       2. If better fitting than any binned chunk, use the dv chunk.
3394
       3. If it is big enough, use the top chunk.
3395
       4. If request size >= mmap threshold, try to directly mmap this chunk.
3396
       5. If available, get memory from system and use it
3397
 
3398
     The ugly goto's here ensure that postaction occurs along all paths.
3399
  */
3400
 
3401
  if (!PREACTION(gm)) {
3402
    void* mem;
3403
    size_t nb;
3404
    if (bytes <= MAX_SMALL_REQUEST) {
3405
      bindex_t idx;
3406
      binmap_t smallbits;
3407
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
3408
      idx = small_index(nb);
3409
      smallbits = gm->smallmap >> idx;
3410
 
3411
      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
3412
        mchunkptr b, p;
3413
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
3414
        b = smallbin_at(gm, idx);
3415
        p = b->fd;
3416
        assert(chunksize(p) == small_index2size(idx));
3417
        unlink_first_small_chunk(gm, b, p, idx);
3418
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
3419
        mem = chunk2mem(p);
3420
        check_malloced_chunk(gm, mem, nb);
3421
        goto postaction;
3422
      }
3423
 
3424
      else if (nb > gm->dvsize) {
3425
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
3426
          mchunkptr b, p, r;
3427
          size_t rsize;
3428
          bindex_t i;
3429
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
3430
          binmap_t leastbit = least_bit(leftbits);
3431
          compute_bit2idx(leastbit, i);
3432
          b = smallbin_at(gm, i);
3433
          p = b->fd;
3434
          assert(chunksize(p) == small_index2size(i));
3435
          unlink_first_small_chunk(gm, b, p, i);
3436
          rsize = small_index2size(i) - nb;
3437
          /* Fit here cannot be remainderless if 4byte sizes */
3438
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
3439
            set_inuse_and_pinuse(gm, p, small_index2size(i));
3440
          else {
3441
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3442
            r = chunk_plus_offset(p, nb);
3443
            set_size_and_pinuse_of_free_chunk(r, rsize);
3444
            replace_dv(gm, r, rsize);
3445
          }
3446
          mem = chunk2mem(p);
3447
          check_malloced_chunk(gm, mem, nb);
3448
          goto postaction;
3449
        }
3450
 
3451
        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
3452
          check_malloced_chunk(gm, mem, nb);
3453
          goto postaction;
3454
        }
3455
      }
3456
    }
3457
    else if (bytes >= MAX_REQUEST)
3458
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
3459
    else {
3460
      nb = pad_request(bytes);
3461
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
3462
        check_malloced_chunk(gm, mem, nb);
3463
        goto postaction;
3464
      }
3465
    }
3466
 
3467
    if (nb <= gm->dvsize) {
3468
      size_t rsize = gm->dvsize - nb;
3469
      mchunkptr p = gm->dv;
3470
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
3471
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
3472
        gm->dvsize = rsize;
3473
        set_size_and_pinuse_of_free_chunk(r, rsize);
3474
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3475
      }
3476
      else { /* exhaust dv */
3477
        size_t dvs = gm->dvsize;
3478
        gm->dvsize = 0;
3479
        gm->dv = 0;
3480
        set_inuse_and_pinuse(gm, p, dvs);
3481
      }
3482
      mem = chunk2mem(p);
3483
      check_malloced_chunk(gm, mem, nb);
3484
      goto postaction;
3485
    }
3486
 
3487
    else if (nb < gm->topsize) { /* Split top */
3488
      size_t rsize = gm->topsize -= nb;
3489
      mchunkptr p = gm->top;
3490
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
3491
      r->head = rsize | PINUSE_BIT;
3492
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
3493
      mem = chunk2mem(p);
3494
      check_top_chunk(gm, gm->top);
3495
      check_malloced_chunk(gm, mem, nb);
3496
      goto postaction;
3497
    }
3498
 
3499
    mem = sys_alloc(gm, nb);
3500
 
3501
  postaction:
3502
    POSTACTION(gm);
3503
    return mem;
3504
  }
3505
 
3506
  return 0;
3507
}
3508
 
3509
static void dlfree(void* mem) {
3510
  /*
3511
     Consolidate freed chunks with preceding or succeeding bordering
3512
     free chunks, if they exist, and then place in a bin.  Intermixed
3513
     with special cases for top, dv, mmapped chunks, and usage errors.
3514
  */
3515
 
3516
  if (mem != 0) {
3517
    mchunkptr p  = mem2chunk(mem);
3518
#if FOOTERS
3519
    mstate fm = get_mstate_for(p);
3520
    if (!ok_magic(fm)) {
3521
      USAGE_ERROR_ACTION(fm, p);
3522
      return;
3523
    }
3524
#else /* FOOTERS */
3525
#define fm gm
3526
#endif /* FOOTERS */
3527
    if (!PREACTION(fm)) {
3528
      check_inuse_chunk(fm, p);
3529
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
3530
        size_t psize = chunksize(p);
3531
        mchunkptr next = chunk_plus_offset(p, psize);
3532
        if (!pinuse(p)) {
3533
          size_t prevsize = p->prev_foot;
3534
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
3535
            prevsize &= ~IS_MMAPPED_BIT;
3536
            psize += prevsize + MMAP_FOOT_PAD;
3537
            if (gui_ksys_mem_free((char*)p - prevsize, psize) == 0)
3538
              fm->footprint -= psize;
3539
            goto postaction;
3540
          }
3541
          else {
3542
            mchunkptr prev = chunk_minus_offset(p, prevsize);
3543
            psize += prevsize;
3544
            p = prev;
3545
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
3546
              if (p != fm->dv) {
3547
                unlink_chunk(fm, p, prevsize);
3548
              }
3549
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
3550
                fm->dvsize = psize;
3551
                set_free_with_pinuse(p, psize, next);
3552
                goto postaction;
3553
              }
3554
            }
3555
            else
3556
              goto erroraction;
3557
          }
3558
        }
3559
 
3560
        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
3561
          if (!cinuse(next)) {  /* consolidate forward */
3562
            if (next == fm->top) {
3563
              size_t tsize = fm->topsize += psize;
3564
              fm->top = p;
3565
              p->head = tsize | PINUSE_BIT;
3566
              if (p == fm->dv) {
3567
                fm->dv = 0;
3568
                fm->dvsize = 0;
3569
              }
3570
              if (should_trim(fm, tsize))
3571
                sys_trim(fm, 0);
3572
              goto postaction;
3573
            }
3574
            else if (next == fm->dv) {
3575
              size_t dsize = fm->dvsize += psize;
3576
              fm->dv = p;
3577
              set_size_and_pinuse_of_free_chunk(p, dsize);
3578
              goto postaction;
3579
            }
3580
            else {
3581
              size_t nsize = chunksize(next);
3582
              psize += nsize;
3583
              unlink_chunk(fm, next, nsize);
3584
              set_size_and_pinuse_of_free_chunk(p, psize);
3585
              if (p == fm->dv) {
3586
                fm->dvsize = psize;
3587
                goto postaction;
3588
              }
3589
            }
3590
          }
3591
          else
3592
            set_free_with_pinuse(p, psize, next);
3593
          insert_chunk(fm, p, psize);
3594
          check_free_chunk(fm, p);
3595
          goto postaction;
3596
        }
3597
      }
3598
    erroraction:
3599
      USAGE_ERROR_ACTION(fm, p);
3600
    postaction:
3601
      POSTACTION(fm);
3602
    }
3603
  }
3604
#if !FOOTERS
3605
#undef fm
3606
#endif /* FOOTERS */
3607
}
3608
 
3609
static void* dlcalloc(size_t n_elements, size_t elem_size) {
3610
  void* mem;
3611
  size_t req = 0;
3612
  if (n_elements != 0) {
3613
    req = n_elements * elem_size;
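    /*
      Added note: the check below skips the relatively costly division
      whenever both operands fit in 16 bits, since such a product cannot
      overflow a 32-bit (or wider) size_t; otherwise the division detects
      multiplication overflow and the request is forced to MAX_SIZE_T.
    */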
3614
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
3615
        (req / n_elements != elem_size))
3616
      req = MAX_SIZE_T; /* force downstream failure on overflow */
3617
  }
3618
  mem = dlmalloc(req);
3619
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
3620
    memset(mem, 0, req);
3621
  return mem;
3622
}
3623
 
3624
static void* dlrealloc(void* oldmem, size_t bytes) {
3625
  if (oldmem == 0)
3626
    return dlmalloc(bytes);
3627
#ifdef REALLOC_ZERO_BYTES_FREES
3628
  if (bytes == 0) {
3629
    dlfree(oldmem);
3630
    return 0;
3631
  }
3632
#endif /* REALLOC_ZERO_BYTES_FREES */
3633
  else {
3634
#if ! FOOTERS
3635
    mstate m = gm;
3636
#else /* FOOTERS */
3637
    mstate m = get_mstate_for(mem2chunk(oldmem));
3638
    if (!ok_magic(m)) {
3639
      USAGE_ERROR_ACTION(m, oldmem);
3640
      return 0;
3641
    }
3642
#endif /* FOOTERS */
3643
    return internal_realloc(m, oldmem, bytes);
3644
  }
3645
}
3646
 
3647
static void* dlmemalign(size_t alignment, size_t bytes) {
3648
  return internal_memalign(gm, alignment, bytes);
3649
}
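/*
  Minimal usage sketch (illustrative only, not part of the original code):
  dlmemalign returns storage whose payload address is a multiple of the
  requested alignment, and the result may be released with dlfree.

    void* p = dlmemalign(64, 1000);
    if (p != 0) {
      assert(((size_t)p & (size_t)63) == 0);   // payload is 64-byte aligned
      dlfree(p);
    }
*/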
3650
 
3651
static void** dlindependent_calloc(size_t n_elements, size_t elem_size,
3652
                                 void* chunks[]) {
3653
  size_t sz = elem_size; /* serves as 1-element array */
3654
  return ialloc(gm, n_elements, &sz, 3, chunks);
3655
}
3656
 
3657
static void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
3658
                                   void* chunks[]) {
3659
  return ialloc(gm, n_elements, sizes, 0, chunks);
3660
}
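/*
  Minimal usage sketch (illustrative only): dlindependent_comalloc carves
  several independently freeable pieces out of one aggregate allocation.
  The struct and function names below are hypothetical.

    struct header { size_t len; };

    void* make_record(size_t payload_bytes) {
      size_t sizes[2] = { sizeof(struct header), payload_bytes };
      void*  parts[2];
      if (dlindependent_comalloc(2, sizes, parts) == 0)
        return 0;                              // allocation failed
      ((struct header*)parts[0])->len = payload_bytes;
      return parts[1];        // parts[0] and parts[1] are freed separately
    }
*/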
3661
 
3662
static void* dlvalloc(size_t bytes) {
3663
  size_t pagesz;
3664
  init_mparams();
3665
  pagesz = mparams.page_size;
3666
  return dlmemalign(pagesz, bytes);
3667
}
3668
 
3669
static void* dlpvalloc(size_t bytes) {
3670
  size_t pagesz;
3671
  init_mparams();
3672
  pagesz = mparams.page_size;
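  /* Added note: the expression below rounds the request up to a whole
     number of pages; with a 4096-byte page, e.g., a 5000-byte request
     becomes 8192. */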
3673
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
3674
}
3675
 
3676
static int dlmalloc_trim(size_t pad) {
3677
  int result = 0;
3678
  if (!PREACTION(gm)) {
3679
    result = sys_trim(gm, pad);
3680
    POSTACTION(gm);
3681
  }
3682
  return result;
3683
}
3684
 
3685
static size_t dlmalloc_footprint(void) {
3686
  return gm->footprint;
3687
}
3688
 
3689
static size_t dlmalloc_max_footprint(void) {
3690
  return gm->max_footprint;
3691
}
3692
 
3693
#if !NO_MALLINFO
3694
struct mallinfo dlmallinfo(void) {
3695
  return internal_mallinfo(gm);
3696
}
3697
#endif /* NO_MALLINFO */
3698
 
3699
//void dlmalloc_stats() {
3700
//  internal_malloc_stats(gm);
3701
//}
3702
 
3703
static size_t dlmalloc_usable_size(void* mem) {
3704
  if (mem != 0) {
3705
    mchunkptr p = mem2chunk(mem);
3706
    if (cinuse(p))
3707
      return chunksize(p) - overhead_for(p);
3708
  }
3709
  return 0;
3710
}
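/*
  Illustrative only: the usable size of a live block may exceed the
  original request, and all reported bytes can be used without resizing.

    void* p = dlmalloc(100);
    if (p != 0) {
      size_t n = dlmalloc_usable_size(p);   // n >= 100 here
      memset(p, 0, n);                      // every reported byte is writable
      dlfree(p);
    }
*/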
3711
 
3712
static int dlmallopt(int param_number, int value) {
3713
  return change_mparam(param_number, value);
3714
}
3715
 
3716
#endif /* !ONLY_MSPACES */
3717
 
3718
/* ----------------------------- user mspaces ---------------------------- */
3719
 
3720
#if MSPACES
3721
#endif /* MSPACES */
3722
 
3723
/* -------------------- Alternative MORECORE functions ------------------- */
3724
 
3725
/*
3726
  Guidelines for creating a custom version of MORECORE:
3727
 
3728
  * For best performance, MORECORE should allocate in multiples of pagesize.
3729
  * MORECORE may allocate more memory than requested. (Or even less,
3730
      but this will usually result in a malloc failure.)
3731
  * MORECORE must not allocate memory when given argument zero, but
3732
      instead return one past the end address of memory from previous
3733
      nonzero call.
3734
  * For best performance, consecutive calls to MORECORE with positive
3735
      arguments should return increasing addresses, indicating that
3736
      space has been contiguously extended.
3737
  * Even though consecutive calls to MORECORE need not return contiguous
3738
      addresses, it must be OK for malloc'ed chunks to span multiple
3739
      regions in those cases where they do happen to be contiguous.
3740
  * MORECORE need not handle negative arguments -- it may instead
3741
      just return MFAIL when given negative arguments.
3742
      Negative arguments are always multiples of pagesize. MORECORE
3743
      must not misinterpret negative args as large positive unsigned
3744
      args. You can suppress all such calls from even occurring by defining
3745
      MORECORE_CANNOT_TRIM.
3746
 
3747
  As an example alternative MORECORE, here is a custom allocator
3748
  kindly contributed for pre-OSX macOS.  It uses virtually but not
3749
  necessarily physically contiguous non-paged memory (locked in,
3750
  present and won't get swapped out).  You can use it by uncommenting
3751
  this section, adding some #includes, and setting up the appropriate
3752
  defines above:
3753
 
3754
      #define MORECORE osMoreCore
3755
 
3756
  There is also a shutdown routine that should somehow be called for
3757
  cleanup upon program exit.
3758
 
3759
  #define MAX_POOL_ENTRIES 100
3760
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
3761
  static int next_os_pool;
3762
  void *our_os_pools[MAX_POOL_ENTRIES];
3763
 
3764
  void *osMoreCore(int size)
3765
  {
3766
    void *ptr = 0;
3767
    static void *sbrk_top = 0;
3768
 
3769
    if (size > 0)
3770
    {
3771
      if (size < MINIMUM_MORECORE_SIZE)
3772
         size = MINIMUM_MORECORE_SIZE;
3773
      if (CurrentExecutionLevel() == kTaskLevel)
3774
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
3775
      if (ptr == 0)
3776
      {
3777
        return (void *) MFAIL;
3778
      }
3779
      // save ptrs so they can be freed during cleanup
3780
      our_os_pools[next_os_pool] = ptr;
3781
      next_os_pool++;
3782
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
3783
      sbrk_top = (char *) ptr + size;
3784
      return ptr;
3785
    }
3786
    else if (size < 0)
3787
    {
3788
      // we don't currently support shrink behavior
3789
      return (void *) MFAIL;
3790
    }
3791
    else
3792
    {
3793
      return sbrk_top;
3794
    }
3795
  }
3796
 
3797
  // cleanup any allocated memory pools
3798
  // called as last thing before shutting down driver
3799
 
3800
  void osCleanupMem(void)
3801
  {
3802
    void **ptr;
3803
 
3804
    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
3805
      if (*ptr)
3806
      {
3807
         PoolDeallocate(*ptr);
3808
         *ptr = 0;
3809
      }
3810
  }
3811
 
3812
*/
3813
 
3814
 
3815
/* -----------------------------------------------------------------------
3816
History:
3817
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
3818
      * Add max_footprint functions
3819
      * Ensure all appropriate literals are size_t
3820
      * Fix conditional compilation problem for some #define settings
3821
      * Avoid concatenating segments with the one provided
3822
        in create_mspace_with_base
3823
      * Rename some variables to avoid compiler shadowing warnings
3824
      * Use explicit lock initialization.
3825
      * Better handling of sbrk interference.
3826
      * Simplify and fix segment insertion, trimming and mspace_destroy
3827
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
3828
      * Thanks especially to Dennis Flanagan for help on these.
3829
 
3830
    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
3831
      * Fix memalign brace error.
3832
 
3833
    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
3834
      * Fix improper #endif nesting in C++
3835
      * Add explicit casts needed for C++
3836
 
3837
    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
3838
      * Use trees for large bins
3839
      * Support mspaces
3840
      * Use segments to unify sbrk-based and mmap-based system allocation,
3841
        removing need for emulation on most platforms without sbrk.
3842
      * Default safety checks
3843
      * Optional footer checks. Thanks to William Robertson for the idea.
3844
      * Internal code refactoring
3845
      * Incorporate suggestions and platform-specific changes.
3846
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
3847
        Aaron Bachmann,  Emery Berger, and others.
3848
      * Speed up non-fastbin processing enough to remove fastbins.
3849
      * Remove useless cfree() to avoid conflicts with other apps.
3850
      * Remove internal memcpy, memset. Compilers handle builtins better.
3851
      * Remove some options that no one ever used and rename others.
3852
 
3853
    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
3854
      * Fix malloc_state bitmap array misdeclaration
3855
 
3856
    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
3857
      * Allow tuning of FIRST_SORTED_BIN_SIZE
3858
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
3859
      * Better detection and support for non-contiguousness of MORECORE.
3860
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
3861
      * Bypass most of malloc if no frees. Thanks To Emery Berger.
3862
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
3863
      * Raised default trim and map thresholds to 256K.
3864
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
3865
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
3866
      * Branch-free bin calculation
3867
      * Default trim and mmap thresholds now 256K.
3868
 
3869
    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
3870
      * Introduce independent_comalloc and independent_calloc.
3871
        Thanks to Michael Pachos for motivation and help.
3872
      * Make optional .h file available
3873
      * Allow > 2GB requests on 32bit systems.
3874
      * new WIN32 sbrk, mmap, munmap, lock code from .
3875
        Thanks also to Andreas Mueller ,
3876
        and Anonymous.
3877
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
3878
        helping test this.)
3879
      * memalign: check alignment arg
3880
      * realloc: don't try to shift chunks backwards, since this
3881
        leads to more fragmentation in some programs and doesn't
3882
        seem to help in any others.
3883
      * Collect all cases in malloc requiring system memory into sysmalloc
3884
      * Use mmap as backup to sbrk
3885
      * Place all internal state in malloc_state
3886
      * Introduce fastbins (although similar to 2.5.1)
3887
      * Many minor tunings and cosmetic improvements
3888
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
3889
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
3890
        Thanks to Tony E. Bennett  and others.
3891
      * Include errno.h to support default failure action.
3892
 
3893
    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
3894
      * return null for negative arguments
3895
      * Added Several WIN32 cleanups from Martin C. Fong 
3896
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
3897
          (e.g. WIN32 platforms)
3898
         * Cleanup header file inclusion for WIN32 platforms
3899
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
3900
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
3901
           memory allocation routines
3902
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
3903
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
3904
           usage of 'assert' in non-WIN32 code
3905
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
3906
           avoid infinite loop
3907
      * Always call 'fREe()' rather than 'free()'
3908
 
3909
    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
3910
      * Fixed ordering problem with boundary-stamping
3911
 
3912
    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
3913
      * Added pvalloc, as recommended by H.J. Liu
3914
      * Added 64bit pointer support mainly from Wolfram Gloger
3915
      * Added anonymously donated WIN32 sbrk emulation
3916
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3917
      * malloc_extend_top: fix mask error that caused wastage after
3918
        foreign sbrks
3919
      * Add linux mremap support code from HJ Liu
3920
 
3921
    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
3922
      * Integrated most documentation with the code.
3923
      * Add support for mmap, with help from
3924
        Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3925
      * Use last_remainder in more cases.
3926
      * Pack bins using idea from  colin@nyx10.cs.du.edu
3927
      * Use ordered bins instead of best-fit threshold
3928
      * Eliminate block-local decls to simplify tracing and debugging.
3929
      * Support another case of realloc via move into top
3930
      * Fix error occurring when initial sbrk_base not word-aligned.
3931
      * Rely on page size for units instead of SBRK_UNIT to
3932
        avoid surprises about sbrk alignment conventions.
3933
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3934
        (raymond@es.ele.tue.nl) for the suggestion.
3935
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3936
      * More precautions for cases where other routines call sbrk,
3937
        courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3938
      * Added macros etc., allowing use in linux libc from
3939
        H.J. Lu (hjl@gnu.ai.mit.edu)
3940
      * Inverted this history list
3941
 
3942
    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
3943
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3944
      * Removed all preallocation code since under current scheme
3945
        the work required to undo bad preallocations exceeds
3946
        the work saved in good cases for most test programs.
3947
      * No longer use return list or unconsolidated bins since
3948
        no scheme using them consistently outperforms those that don't
3949
        given above changes.
3950
      * Use best fit for very large chunks to prevent some worst-cases.
3951
      * Added some support for debugging
3952
 
3953
    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
3954
      * Removed footers when chunks are in use. Thanks to
3955
        Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3956
 
3957
    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
3958
      * Added malloc_trim, with help from Wolfram Gloger
3959
        (wmglo@Dent.MED.Uni-Muenchen.DE).
3960
 
3961
    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
3962
 
3963
    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
3964
      * realloc: try to expand in both directions
3965
      * malloc: swap order of clean-bin strategy;
3966
      * realloc: only conditionally expand backwards
3967
      * Try not to scavenge used bins
3968
      * Use bin counts as a guide to preallocation
3969
      * Occasionally bin return list chunks in first scan
3970
      * Add a few optimizations from colin@nyx10.cs.du.edu
3971
 
3972
    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
3973
      * faster bin computation & slightly different binning
3974
      * merged all consolidations to one part of malloc proper
3975
         (eliminating old malloc_find_space & malloc_clean_bin)
3976
      * Scan 2 returns chunks (not just 1)
3977
      * Propagate failure in realloc if malloc returns 0
3978
      * Add stuff to allow compilation on non-ANSI compilers
3979
          from kpv@research.att.com
3980
 
3981
    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
3982
      * removed potential for odd address access in prev_chunk
3983
      * removed dependency on getpagesize.h
3984
      * misc cosmetics and a bit more internal documentation
3985
      * anticosmetics: mangled names in macros to evade debugger strangeness
3986
      * tested on sparc, hp-700, dec-mips, rs6000
3987
          with gcc & native cc (hp, dec only) allowing
3988
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3989
 
3990
    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
3991
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3992
         structure of old version,  but most details differ.)
3993
 
3994
*/