/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms. You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
  You don't really need this .h file unless you call functions not
  defined in your system include files. The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  names and mapping schemes. Note that you may already by default be
  using a C library containing a malloc that is based on some version
  of this malloc (for example in linux). You might still want to use
  the one in this file to customize settings or to avoid overheads
  associated with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (minimum)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4-byte sizes)
                                          8 or 16 bytes (if 8-byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc. This malloc
       guarantees not to modify any memory locations below the base of
       the heap, i.e., static variables, even in the presence of usage
       errors. The routines additionally detect most improper frees
       and reallocs. All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means. This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space. These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics. This may further improve
       security at the expense of time and space overhead. (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default, detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR. In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.
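
       For example, a sketch of routing detected errors to your own
       handlers (CORRUPTION_ERROR_ACTION takes the malloc state, and
       USAGE_ERROR_ACTION the state plus the offending address; the
       handler names here are hypothetical):
         #define CORRUPTION_ERROR_ACTION(m)  my_heap_panic(m)
         #define USAGE_ERROR_ACTION(m, p)    my_bad_pointer_hook(m, p)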

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit. This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with a lock. By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck. It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions. If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (see http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory. On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled. On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (see http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable. Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.) And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported. (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces. Normally, this requires use of locks.
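
  For example, to give a subsystem its own heap that can be torn down
  in one call (a sketch using the mspace API described above;
  destroy_mspace releases all memory the space ever obtained):
    mspace arena = create_mspace(0, 0);  // default capacity, no locking
    void* buf = mspace_malloc(arena, 128);
    mspace_free(arena, buf);
    destroy_mspace(arena);               // frees everything at once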

-------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly casted. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator. You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc(). If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT       default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
  Controls the minimum alignment for malloc'ed chunks. It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually,
  as described below.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only on gcc >= 4.1, older gccs on x86 platforms, and recent
  MS compilers. Otherwise, posix locks or win32 critical sections are
  used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK             default: not defined
  If defined nonzero, performs pthread_atfork upon initialization
  to initialize child lock while holding parent lock. The implementation
  assumes that pthread locks (not custom locks) are being used. In other
  cases, you may need to customize the implementation.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
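
  For example (a sketch; with USE_DL_PREFIX, the public entry points
  really are named dlmalloc, dlfree, and so on):
    void* p = dlmalloc(64);   // this file's allocator
    void* q = malloc(64);     // the system allocator
    dlfree(p);
    free(q);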

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
  perform traversal of all heap space. Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks. On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again. Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort. Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks. The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms. The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION    default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails to be able to
  return memory because there is none available.

HAVE_MORECORE            default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory. See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems. It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t"). It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.
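
  For illustration, a minimal custom MORECORE over a fixed static
  arena might look like the sketch below (the arena and names are
  hypothetical; note that it returns MFAIL on failure and refuses the
  negative arguments that MORECORE_CANNOT_TRIM advertises):
    static char my_arena[1 << 20];
    static size_t my_brk = 0;
    void* my_morecore(intptr_t incr) {
      if (incr < 0 || my_brk + (size_t)incr > sizeof(my_arena))
        return (void*)MFAIL;           // cannot trim or grow further
      char* prev = my_arena + my_brk;  // old break, as sbrk does
      my_brk += (size_t)incr;
      return prev;
    }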

MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses. This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves the time
  and possibly wasted space it would otherwise take to discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL       default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of contiguous segments, and the selective release
  of unused segments to the OS, but bounds execution times.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it. If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units. This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS            default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H LACKS_SCHED_H LACKS_TIME_H  default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system. On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities. You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow. The value must be at least one
  page and must be a power of two. Setting to 0 causes initialization
  to either page size or win32 region size. (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")
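
  For example (a sketch; M_GRANULARITY is the mallopt key named
  above), trading some memory for fewer system calls:
    mallopt(M_GRANULARITY, 256 * 1024);  // must be a power of two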

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free(). Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE. Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory. As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory. Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect. To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap. (If enough
  normal freed space already exists it is used instead.) Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations). Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low. Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them. However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks. The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems. The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation. The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory. To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

#include
#include
#include

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.) Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible. realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
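
/*
  For example, the safe growth idiom implied above keeps the old
  pointer until realloc succeeds, since failure does not free it
  (a sketch; buf and newsize stand for the caller's own variables):

    void* bigger = realloc(buf, newsize);
    if (bigger != 0)
      buf = bigger;            // grown (possibly moved)
    // else: buf is still valid; handle the failure
*/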

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/

#define DEBUG 1

#define assert(x)

#define MAX_SIZE_T           (~(size_t)0)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)

#define HAVE_MMAP             1
#define HAVE_MORECORE         0
#define MORECORE_CANNOT_TRIM  1
#define CALL_MMAP(s)          MMAP_DEFAULT(s)
#define CALL_MUNMAP(a, s)     MUNMAP_DEFAULT((a), (s))
#define CALL_MREMAP(addr, osz, nsz, mv) MFAIL

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
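
/* Worked example (assuming MALLOC_ALIGNMENT == 8, so CHUNK_ALIGN_MASK == 7):
   align_offset(0x1003) == (8 - 3) & 7 == 5, while align_offset(0x1008) == 0
   because that address is already 8-byte aligned. */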

/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   checks to fail so the compiler optimizer can delete code rather than
   using so many "#if"s.
*/

/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#define should_trim(M,s)     (0)

/* --------------------------- Lock preliminaries ------------------------ */

/*
  When locks are defined, there is one global lock, plus
  one per-mspace lock.

  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once. It also protects
  sequences of calls to MORECORE. In many cases sys_alloc requires
  two calls, that should not be interleaved with calls by other
  threads. This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope as best we can with interference.

  Per-mspace locks surround calls to malloc, free, etc.
  By default, locks are simple non-reentrant mutexes.

  Because lock-protected regions generally have bounded times, it is
  OK to use the supplied simple spinlocks. Spinlocks are likely to
  improve performance for lightly contended applications, but worsen
  performance under heavy contention.

  If USE_LOCKS is > 1, the definitions of lock routines here are
  bypassed, in which case you will need to define the type MLOCK_T,
  and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
  and TRY_LOCK. You must also declare a
    static MLOCK_T malloc_global_mutex = { initialization values };.
*/

static DEFINE_MUTEX(malloc_global_mutex);

#define ACQUIRE_MALLOC_GLOBAL_LOCK()  MutexLock(&malloc_global_mutex);
#define RELEASE_MALLOC_GLOBAL_LOCK()  MutexUnlock(&malloc_global_mutex);
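
/*
  A sketch of the USE_LOCKS > 1 plumbing named in the comment above,
  using pthreads purely for illustration (this port instead uses the
  DEFINE_MUTEX/MutexLock wrappers just above):

    typedef pthread_mutex_t MLOCK_T;
    #define INITIAL_LOCK(lk)  pthread_mutex_init(lk, NULL)
    #define DESTROY_LOCK(lk)  pthread_mutex_destroy(lk)
    #define ACQUIRE_LOCK(lk)  pthread_mutex_lock(lk)
    #define RELEASE_LOCK(lk)  pthread_mutex_unlock(lk)
    #define TRY_LOCK(lk)      (pthread_mutex_trylock(lk) == 0)
    static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
*/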

/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary). It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth. (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.) Sizes of free chunks are stored both in the front of
  each chunk and at the end. This makes consolidating fragmented
  chunks into bigger chunks fast. The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer. They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 0)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)      |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +- User payload                                                -+
         :                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                       |0|
                                                                       +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk. If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size, redundantly records whether the current chunk is
  inuse (unless the chunk is mmapped). This redundancy enables usage
  checks within free and realloc, and reduces indirection when freeing
  and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk. Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially. Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD). In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk. The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, have both cinuse and pinuse bits
        cleared in their head fields. Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
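
/* Worked example (assuming 4-byte size_t with 8-byte alignment and
   CHUNK_OVERHEAD == SIZE_T_SIZE when FOOTERS is off): request2size(13)
   pads 13 + 4 up to the next 8-byte multiple, yielding a 24-byte chunk,
   while request2size(1) clamps to MIN_CHUNK_SIZE (16 bytes). */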

/* ------------------ Operations on head and foot fields ----------------- */

#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define flag4inuse(p)       ((p)->head & FLAG4_BIT)
#define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
#define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)

#define chunksize(p)        ((p)->head & ~(FLAG_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define set_flag4(p)        ((p)->head |= FLAG4_BIT)
#define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)
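
/* For example: an in-use 24-byte chunk whose predecessor is also in use
   has head == (24 | PINUSE_BIT | CINUSE_BIT), so chunksize(p) == 24 and
   both cinuse(p) and pinuse(p) are nonzero; an mmapped chunk keeps both
   inuse bits clear, which is exactly what is_mmapped(p) tests. */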

#define prev_chunk(p)       ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)      ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)      (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)      (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */
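
/*
  Sketch of how calloc-style code uses this macro: zero the memory only
  when it may hold stale data (fresh mmapped chunks are assumed already
  zeroed whenever MMAP_CLEARS is set):

    void* mem = malloc(req);
    if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
      memset(mem, 0, req);
*/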
- | 975 | /* ---------------------- Overlaid data structures ----------------------- */ |
|
- | 976 | ||
- | 977 | /* |
|
- | 978 | When chunks are not in use, they are treated as nodes of either |
|
- | 979 | lists or trees. |
|
- | 980 | ||
- | 981 | "Small" chunks are stored in circular doubly-linked lists, and look |
|
- | 982 | like this: |
|
- | 983 | ||
- | 984 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 985 | | Size of previous chunk | |
|
- | 986 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 987 | `head:' | Size of chunk, in bytes |P| |
|
- | 988 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 989 | | Forward pointer to next chunk in list | |
|
- | 990 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 991 | | Back pointer to previous chunk in list | |
|
- | 992 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 993 | | Unused space (may be 0 bytes long) . |
|
- | 994 | . . |
|
- | 995 | . | |
|
- | 996 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 997 | `foot:' | Size of chunk, in bytes | |
|
- | 998 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 999 | ||
- | 1000 | Larger chunks are kept in a form of bitwise digital trees (aka |
|
- | 1001 | tries) keyed on chunksizes. Because malloc_tree_chunks are only for |
|
- | 1002 | free chunks greater than 256 bytes, their size doesn't impose any |
|
- | 1003 | constraints on user chunk sizes. Each node looks like: |
|
- | 1004 | ||
- | 1005 | chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1006 | | Size of previous chunk | |
|
- | 1007 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1008 | `head:' | Size of chunk, in bytes |P| |
|
- | 1009 | mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1010 | | Forward pointer to next chunk of same size | |
|
- | 1011 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1012 | | Back pointer to previous chunk of same size | |
|
- | 1013 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1014 | | Pointer to left child (child[0]) | |
|
- | 1015 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1016 | | Pointer to right child (child[1]) | |
|
- | 1017 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1018 | | Pointer to parent | |
|
- | 1019 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1020 | | bin index of this chunk | |
|
- | 1021 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1022 | | Unused space . |
|
- | 1023 | . | |
|
- | 1024 | nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1025 | `foot:' | Size of chunk, in bytes | |
|
- | 1026 | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |
|
- | 1027 | ||
- | 1028 | Each tree holding treenodes is a tree of unique chunk sizes. Chunks |
|
- | 1029 | of the same size are arranged in a circularly-linked list, with only |
|
- | 1030 | the oldest chunk (the next to be used, in our FIFO ordering) |
|
- | 1031 | actually in the tree. (Tree members are distinguished by a non-null |
|
- | 1032 | parent pointer.) If a chunk with the same size as an existing node |
|
- | 1033 | is inserted, it is linked off the existing node using pointers that |
|
- | 1034 | work in the same way as fd/bk pointers of small chunks. |
|
- | 1035 | ||
- | 1036 | Each tree contains a power of 2 sized range of chunk sizes (the |
|
- | 1037 | smallest is 0x100 <= x < 0x180), which is divided in half at each |
|
- | 1038 | tree level, with the chunks in the smaller half of the range (0x100 |
|
- | 1039 | <= x < 0x140 for the top node) in the left subtree and the larger |
|
- | 1040 | half (0x140 <= x < 0x180) in the right subtree. This is, of course, |
|
- | 1041 | done by inspecting individual bits. |
|
- | 1042 | ||
- | 1043 | Using these rules, each node's left subtree contains all smaller |
|
- | 1044 | sizes than its right subtree. However, the node at the root of each |
|
- | 1045 | subtree has no particular ordering relationship to either. (The |
|
- | 1046 | dividing line between the subtree sizes is based on trie relation.) |
|
- | 1047 | If we remove the last chunk of a given size from the interior of the |
|
- | 1048 | tree, we need to replace it with a leaf node. The tree ordering |
|
- | 1049 | rules permit a node to be replaced by any leaf below it. |
|
- | 1050 | ||
- | 1051 | The smallest chunk in a tree (a common operation in a best-fit |
|
- | 1052 | allocator) can be found by walking a path to the leftmost leaf in |
|
- | 1053 | the tree. Unlike a usual binary tree, where we follow left child |
|
- | 1054 | pointers until we reach a null, here we follow the right child |
|
- | 1055 | pointer any time the left one is null, until we reach a leaf with |
|
- | 1056 | both child pointers null. The smallest chunk in the tree will be |
|
- | 1057 | somewhere along that path. |
|
- | 1058 | ||
- | 1059 | The worst case number of steps to add, find, or remove a node is |
|
- | 1060 | bounded by the number of bits differentiating chunks within |
|
- | 1061 | bins. Under current bin calculations, this ranges from 6 up to 21 |
|
- | 1062 | (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case |
|
Line 203... | Line 1063... | ||
203 | (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD) |
1063 | is of course much better. |
204 | 1064 | */ |
|
205 | 1065 | ||
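
To make the trie keying above concrete, here is a minimal sketch of the bin
selection, assuming 32-bit sizes and the stock values TREEBIN_SHIFT == 8
(consistent with the 0x100 lower bound) and NTREEBINS == 32;
sketch_tree_index is a hypothetical stand-in for the compute_tree_index
macro defined further below.

/* Hypothetical sketch: two tree bins per power of two, selected by the
   leading bit of (size >> TREEBIN_SHIFT) plus the bit just below it. */
static unsigned sketch_tree_index(size_t s) {
  unsigned x = (unsigned)(s >> 8);        /* TREEBIN_SHIFT, assumed 8 */
  if (x == 0)
    return 0;
  if (x > 0xFFFF)
    return 31;                            /* NTREEBINS - 1, assumed   */
  {
    unsigned k = 31 - __builtin_clz(x);   /* position of leading bit  */
    return (k << 1) | ((unsigned)(s >> (k + 8 - 1)) & 1U);
  }
}

For example, sizes 0x100..0x17F map to bin 0 and 0x180..0x1FF to bin 1,
matching the smallest tree range described above.
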
206 | struct malloc_tree_chunk { |
1066 | struct malloc_tree_chunk { |
Line 220... | Line 1080... | ||
220 | typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */ |
1080 | typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */ |
Line 221... | Line 1081... | ||
221 | 1081 | ||
222 | /* A little helper macro for trees */ |
1082 | /* A little helper macro for trees */ |
Line -... | Line 1083... | ||
- | 1083 | #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1]) |
|
- | 1084 | ||
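
This helper expresses the "follow the right child when the left is null"
search described earlier; a minimal sketch of that descent follows
(least_in_tree is a hypothetical name, modeled on the best-fit walk the
allocator performs when taking the smallest chunk from a tree).

/* Hypothetical sketch: walk toward the leftmost leaf, remembering the
   smallest chunk seen; the least size lies somewhere along this path. */
static tchunkptr least_in_tree(tchunkptr t) {
  tchunkptr v = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(v))
      v = t;
  }
  return v;
}
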
- | 1085 | /* ----------------------------- Segments -------------------------------- */ |
|
- | 1086 | ||
- | 1087 | /* |
|
- | 1088 | Each malloc space may include non-contiguous segments, held in a |
|
- | 1089 | list headed by an embedded malloc_segment record representing the |
|
- | 1090 | top-most space. Segments also include flags holding properties of |
|
- | 1091 | the space. Large chunks that are directly allocated by mmap are not |
|
- | 1092 | included in this list. They are instead independently created and |
|
- | 1093 | destroyed without otherwise keeping track of them. |
|
- | 1094 | ||
- | 1095 | Segment management mainly comes into play for spaces allocated by |
|
- | 1096 | MMAP. Any call to MMAP might or might not return memory that is |
|
- | 1097 | adjacent to an existing segment. MORECORE normally contiguously |
|
- | 1098 | extends the current space, so this space is almost always adjacent, |
|
- | 1099 | which is simpler and faster to deal with. (This is why MORECORE is |
|
- | 1100 | used preferentially to MMAP when both are available -- see |
|
- | 1101 | sys_alloc.) When allocating using MMAP, we don't use any of the |
|
- | 1102 | hinting mechanisms (inconsistently) supported in various |
|
- | 1103 | implementations of unix mmap, or distinguish reserving from |
|
- | 1104 | committing memory. Instead, we just ask for space, and exploit |
|
- | 1105 | contiguity when we get it. It is probably possible to do |
|
- | 1106 | better than this on some systems, but no general scheme seems |
|
- | 1107 | to be significantly better. |
|
- | 1108 | ||
- | 1109 | Management entails a simpler variant of the consolidation scheme |
|
- | 1110 | used for chunks to reduce fragmentation -- new adjacent memory is |
|
- | 1111 | normally prepended or appended to an existing segment. However, |
|
- | 1112 | there are limitations compared to chunk consolidation that mostly |
|
- | 1113 | reflect the fact that segment processing is relatively infrequent |
|
- | 1114 | (occurring only when getting memory from system) and that we |
|
- | 1115 | don't expect to have huge numbers of segments: |
|
- | 1116 | ||
- | 1117 | * Segments are not indexed, so traversal requires linear scans. (It |
|
- | 1118 | would be possible to index these, but is not worth the extra |
|
- | 1119 | overhead and complexity for most programs on most platforms.) |
|
- | 1120 | * New segments are only appended to old ones when holding top-most |
|
- | 1121 | memory; if they cannot be prepended to others, they are held in |
|
- | 1122 | different segments. |
|
- | 1123 | ||
- | 1124 | Except for the top-most segment of an mstate, each segment record |
|
- | 1125 | is kept at the tail of its segment. Segments are added by pushing |
|
- | 1126 | segment records onto the list headed by &mstate.seg for the |
|
- | 1127 | containing mstate. |
|
- | 1128 | ||
- | 1129 | Segment flags control allocation/merge/deallocation policies: |
|
- | 1130 | * If EXTERN_BIT set, then we did not allocate this segment, |
|
- | 1131 | and so should not try to deallocate or merge with others. |
|
- | 1132 | (This currently holds only for the initial segment passed |
|
- | 1133 | into create_mspace_with_base.) |
|
- | 1134 | * If USE_MMAP_BIT set, the segment may be merged with |
|
- | 1135 | other surrounding mmapped segments and trimmed/de-allocated |
|
- | 1136 | using munmap. |
|
- | 1137 | * If neither bit is set, then the segment was obtained using |
|
- | 1138 | MORECORE so can be merged with surrounding MORECORE'd segments |
|
Line 223... | Line 1139... | ||
223 | #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1]) |
1139 | and deallocated/trimmed using MORECORE with negative arguments. |
224 | 1140 | */ |
|
225 | 1141 | ||
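
A minimal sketch of the release policy those flags encode (the helper name
and the numeric policy codes are hypothetical; the real decisions are made
inline where segments are trimmed or destroyed):

/* Hypothetical sketch of the per-segment deallocation policy. */
static int sketch_segment_policy(flag_t sflags) {
  if (sflags & EXTERN_BIT)
    return 0;  /* externally supplied: never deallocate or merge        */
  if (sflags & USE_MMAP_BIT)
    return 1;  /* mmapped: merge with mmapped neighbors, munmap to free */
  return 2;    /* MORECORE'd: merge likewise, trim via negative MORECORE */
}
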
226 | struct malloc_segment { |
1142 | struct malloc_segment { |
Line 297... | Line 1213... | ||
297 | than this are trapped (unless INSECURE is defined). |
1213 | than this are trapped (unless INSECURE is defined). |
Line 298... | Line 1214... | ||
298 | 1214 | ||
299 | Magic tag |
1215 | Magic tag |
Line -... | Line 1216... | ||
- | 1216 | A cross-check field that should always hold same value as mparams.magic. |
|
- | 1217 | ||
- | 1218 | Max allowed footprint |
|
300 | A cross-check field that should always hold same value as mparams.magic. |
1219 | The maximum allowed bytes to allocate from system (zero means no limit) |
301 | 1220 | ||
Line 302... | Line 1221... | ||
302 | Flags |
1221 | Flags |
303 | Bits recording whether to use MMAP, locks, or contiguous MORECORE |
1222 | Bits recording whether to use MMAP, locks, or contiguous MORECORE |
304 | 1223 | ||
Line 305... | Line 1224... | ||
305 | Statistics |
1224 | Statistics |
306 | Each space keeps track of current and maximum system memory |
1225 | Each space keeps track of current and maximum system memory |
307 | obtained via MORECORE or MMAP. |
1226 | obtained via MORECORE or MMAP. |
308 | 1227 | ||
Line 309... | Line 1228... | ||
309 | Trim support |
1228 | Trim support |
310 | Fields holding the amount of unused topmost memory that should trigger |
1229 | Fields holding the amount of unused topmost memory that should trigger |
311 | timming, and a counter to force periodic scanning to release unused |
1230 | trimming, and a counter to force periodic scanning to release unused |
Line 343... | Line 1262... | ||
343 | size_t magic; |
1262 | size_t magic; |
344 | mchunkptr smallbins[(NSMALLBINS+1)*2]; |
1263 | mchunkptr smallbins[(NSMALLBINS+1)*2]; |
345 | tbinptr treebins[NTREEBINS]; |
1264 | tbinptr treebins[NTREEBINS]; |
346 | size_t footprint; |
1265 | size_t footprint; |
347 | size_t max_footprint; |
1266 | size_t max_footprint; |
- | 1267 | size_t footprint_limit; /* zero means no limit */ |
|
348 | flag_t mflags; |
1268 | flag_t mflags; |
349 | struct mutex lock; /* locate lock among fields that rarely change */ |
1269 | struct mutex lock; /* locate lock among fields that rarely change */ |
350 | msegment seg; |
1270 | msegment seg; |
351 | void* extp; /* Unused but available for extensions */ |
1271 | void* extp; /* Unused but available for extensions */ |
352 | size_t exts; |
1272 | size_t exts; |
Line 361... | Line 1281... | ||
361 | dynamically set using mallopt. There is a single instance, mparams, |
1281 | dynamically set using mallopt. There is a single instance, mparams, |
362 | initialized in init_mparams. Note that the non-zeroness of "magic" |
1282 | initialized in init_mparams. Note that the non-zeroness of "magic" |
363 | also serves as an initialization flag. |
1283 | also serves as an initialization flag. |
364 | */ |
1284 | */ |
Line 365... | Line 1285... | ||
365 | 1285 | ||
366 | struct malloc_params |
- | |
367 | { |
1286 | struct malloc_params { |
368 | volatile size_t magic; |
1287 | size_t magic; |
369 | size_t page_size; |
1288 | size_t page_size; |
370 | size_t granularity; |
1289 | size_t granularity; |
371 | size_t mmap_threshold; |
1290 | size_t mmap_threshold; |
372 | size_t trim_threshold; |
1291 | size_t trim_threshold; |
373 | flag_t default_mflags; |
1292 | flag_t default_mflags; |
Line 374... | Line 1293... | ||
374 | }; |
1293 | }; |
Line -... | Line 1294... | ||
- | 1294 | ||
375 | 1295 | static struct malloc_params mparams; |
|
Line 376... | Line 1296... | ||
376 | static struct malloc_params mparams; |
1296 | |
377 | 1297 | /* Ensure mparams initialized */ |
|
378 | #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams()) |
1298 | #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams()) |
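
Because the non-zeroness of magic doubles as the initialization flag, any
entry point can guard itself with this single cheap test; a minimal sketch
(example_page_size is a hypothetical helper, not part of this file):

/* Hypothetical sketch: read a tunable safely from any entry point. */
static size_t example_page_size(void) {
  ensure_initialization();  /* enters init_mparams() only while magic == 0 */
  return mparams.page_size;
}
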
Line 379... | Line 1299... | ||
379 | 1299 | ||
Line -... | Line 1300... | ||
- | 1300 | static struct malloc_state _gm_; |
|
Line 380... | Line 1301... | ||
380 | static struct malloc_state _gm_; |
1301 | #define gm (&_gm_) |
Line -... | Line 1302... | ||
- | 1302 | #define is_global(M) ((M) == &_gm_) |
|
- | 1303 | ||
- | 1304 | #define is_initialized(M) ((M)->top != 0) |
|
381 | #define gm (&_gm_) |
1305 | |
- | 1306 | /* -------------------------- system alloc setup ------------------------- */ |
|
- | 1307 | ||
- | 1308 | /* Operations on mflags */ |
|
Line -... | Line 1309... | ||
- | 1309 | ||
- | 1310 | #define use_lock(M) ((M)->mflags & USE_LOCK_BIT) |
|
- | 1311 | #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT) |
|
382 | #define is_global(M) ((M) == &_gm_) |
1312 | #if USE_LOCKS |
- | 1313 | #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT) |
|
- | 1314 | #else |
|
- | 1315 | #define disable_lock(M) |
|
- | 1316 | #endif |
|
- | 1317 | ||
- | 1318 | #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT) |
|
- | 1319 | #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT) |
|
- | 1320 | #if HAVE_MMAP |
|
- | 1321 | #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT) |
|
- | 1322 | #else |
|
- | 1323 | #define disable_mmap(M) |
|
- | 1324 | #endif |
|
- | 1325 | ||
- | 1326 | #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT) |
|
- | 1327 | #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT) |
|
- | 1328 | ||
- | 1329 | #define set_lock(M,L)\ |
|
- | 1330 | ((M)->mflags = (L)?\ |
|
- | 1331 | ((M)->mflags | USE_LOCK_BIT) :\ |
|
- | 1332 | ((M)->mflags & ~USE_LOCK_BIT)) |
|
- | 1333 | ||
- | 1334 | /* page-align a size */ |
|
- | 1335 | #define page_align(S)\ |
|
- | 1336 | (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE)) |
|
- | 1337 | ||
- | 1338 | /* granularity-align a size */ |
|
- | 1339 | #define granularity_align(S)\ |
|
- | 1340 | (((S) + (mparams.granularity - SIZE_T_ONE))\ |
|
- | 1341 | & ~(mparams.granularity - SIZE_T_ONE)) |
|
- | 1342 | ||
383 | 1343 | ||
- | 1344 | /* For mmap, use granularity alignment on windows, else page-align */ |
|
- | 1345 | #ifdef WIN32 |
|
- | 1346 | #define mmap_align(S) granularity_align(S) |
|
- | 1347 | #else |
|
- | 1348 | #define mmap_align(S) page_align(S) |
|
- | 1349 | #endif |
|
- | 1350 | ||
- | 1351 | /* For sys_alloc, enough padding to ensure can malloc request on success */ |
|
- | 1352 | #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT) |
|
- | 1353 | ||
- | 1354 | #define is_page_aligned(S)\ |
|
- | 1355 | (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0) |
|
- | 1356 | #define is_granularity_aligned(S)\ |
|
- | 1357 | (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0) |
|
- | 1358 | ||
- | 1359 | /* True if segment S holds address A */ |
|
- | 1360 | #define segment_holds(S, A)\ |
|
- | 1361 | ((char*)(A) >= S->base && (char*)(A) < S->base + S->size) |
|
- | 1362 | ||
- | 1363 | /* Return segment holding given address */ |
|
- | 1364 | static msegmentptr segment_holding(mstate m, char* addr) { |
|
- | 1365 | msegmentptr sp = &m->seg; |
|
- | 1366 | for (;;) { |
|
- | 1367 | if (addr >= sp->base && addr < sp->base + sp->size) |
|
- | 1368 | return sp; |
|
- | 1369 | if ((sp = sp->next) == 0) |
|
- | 1370 | return 0; |
|
- | 1371 | } |
|
- | 1372 | } |
|
- | 1373 | ||
- | 1374 | /* Return true if segment contains a segment link */ |
|
- | 1375 | static int has_segment_link(mstate m, msegmentptr ss) { |
|
- | 1376 | msegmentptr sp = &m->seg; |
|
- | 1377 | for (;;) { |
|
- | 1378 | if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size) |
|
- | 1379 | return 1; |
|
- | 1380 | if ((sp = sp->next) == 0) |
|
- | 1381 | return 0; |
|
- | 1382 | } |
|
- | 1383 | } |
|
- | 1384 | ||
- | 1385 | ||
- | 1386 | /* |
|
- | 1387 | TOP_FOOT_SIZE is padding at the end of a segment, including space |
|
- | 1388 | that may be needed to place segment records and fenceposts when new |
|
- | 1389 | noncontiguous segments are added. |
|
- | 1390 | */ |
|
- | 1391 | #define TOP_FOOT_SIZE\ |
|
- | 1392 | (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) |
|
Line 384... | Line 1393... | ||
384 | #define is_initialized(M) ((M)->top != 0) |
1393 | |
385 | 1394 | ||
Line -... | Line 1395... | ||
- | 1395 | /* ------------------------------- Hooks -------------------------------- */ |
|
- | 1396 | ||
- | 1397 | /* |
|
- | 1398 | PREACTION should be defined to return 0 on success, and nonzero on |
|
- | 1399 | failure. If you are not using locking, you can redefine these to do |
|
- | 1400 | anything you like. |
|
- | 1401 | */ |
|
- | 1402 | ||
- | 1403 | #define PREACTION(M) ( MutexLock(&(M)->lock)) |
|
- | 1404 | #define POSTACTION(M) { MutexUnlock(&(M)->lock); } |
|
- | 1405 | ||
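
Under this contract a zero return means the state lock is held; a minimal
sketch of the bracketing every locked entry point follows (sketch_locked_op
is a hypothetical name):

/* Hypothetical sketch of the PREACTION/POSTACTION bracketing. */
static void sketch_locked_op(mstate m) {
  if (!PREACTION(m)) {   /* 0 on success: lock acquired */
    /* ... operate on m ... */
    POSTACTION(m);       /* release on every exit path  */
  }
}
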
- | 1406 | /* -------------------------- Debugging setup ---------------------------- */ |
|
- | 1407 | ||
- | 1408 | #if ! DEBUG |
|
- | 1409 | ||
- | 1410 | #define check_free_chunk(M,P) |
|
- | 1411 | #define check_inuse_chunk(M,P) |
|
- | 1412 | #define check_malloced_chunk(M,P,N) |
|
- | 1413 | #define check_mmapped_chunk(M,P) |
|
- | 1414 | #define check_malloc_state(M) |
|
- | 1415 | #define check_top_chunk(M,P) |
|
- | 1416 | ||
- | 1417 | #else /* DEBUG */ |
|
- | 1418 | #define check_free_chunk(M,P) do_check_free_chunk(M,P) |
|
- | 1419 | #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P) |
|
- | 1420 | #define check_top_chunk(M,P) do_check_top_chunk(M,P) |
|
- | 1421 | #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N) |
|
- | 1422 | #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P) |
|
- | 1423 | #define check_malloc_state(M) do_check_malloc_state(M) |
|
- | 1424 | ||
- | 1425 | static void do_check_any_chunk(mstate m, mchunkptr p); |
|
- | 1426 | static void do_check_top_chunk(mstate m, mchunkptr p); |
|
Line 386... | Line 1427... | ||
386 | 1427 | static void do_check_mmapped_chunk(mstate m, mchunkptr p); |
|
Line 387... | Line 1428... | ||
387 | //struct mutex malloc_global_mutex; |
1428 | static void do_check_inuse_chunk(mstate m, mchunkptr p); |
388 | 1429 | static void do_check_free_chunk(mstate m, mchunkptr p); |
|
389 | static DEFINE_MUTEX(malloc_global_mutex); |
1430 | static void do_check_malloced_chunk(mstate m, void* mem, size_t s); |
390 | 1431 | static void do_check_tree(mstate m, tchunkptr t); |
|
Line 391... | Line 1432... | ||
391 | #define ACQUIRE_MALLOC_GLOBAL_LOCK() MutexLock(&malloc_global_mutex); |
1432 | static void do_check_treebin(mstate m, bindex_t i); |
392 | #define RELEASE_MALLOC_GLOBAL_LOCK() MutexUnlock(&malloc_global_mutex); |
1433 | static void do_check_smallbin(mstate m, bindex_t i); |
393 | 1434 | static void do_check_malloc_state(mstate m); |
|
Line 394... | Line -... | ||
394 | #define PREACTION(M) ( MutexLock(&(M)->lock)) |
- | |
395 | #define POSTACTION(M) { MutexUnlock(&(M)->lock); } |
1435 | static int bin_find(mstate m, mchunkptr x); |
396 | 1436 | static size_t traverse_and_check(mstate m); |
|
397 | 1437 | #endif /* DEBUG */ |
|
398 | /* ---------------------------- Indexing Bins ---------------------------- */ |
1438 | |
399 | 1439 | /* ---------------------------- Indexing Bins ---------------------------- */ |
|
400 | #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) |
1440 | |
401 | #define small_index(s) ((s) >> SMALLBIN_SHIFT) |
1441 | #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) |
402 | #define small_index2size(i) ((i) << SMALLBIN_SHIFT) |
1442 | #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT) |
403 | #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) |
- | |
404 | 1443 | #define small_index2size(i) ((i) << SMALLBIN_SHIFT) |
|
405 | /* addressing by index. See above about smallbin repositioning */ |
1444 | #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) |
406 | #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1]))) |
1445 | |
407 | #define treebin_at(M,i) (&((M)->treebins[i])) |
1446 | /* addressing by index. See above about smallbin repositioning */ |
Line -... | Line 1447... | ||
- | 1447 | #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1]))) |
|
- | 1448 | #define treebin_at(M,i) (&((M)->treebins[i])) |
|
408 | 1449 | ||
409 | 1450 | #define compute_tree_index(S, I)\ |
|
410 | #define compute_tree_index(S, I)\ |
1451 | {\ |
Line 411... | Line 1452... | ||
411 | {\ |
1452 | unsigned int X = S >> TREEBIN_SHIFT;\ |
Line 457... | Line 1498... | ||
457 | #define left_bits(x) ((x<<1) | -(x<<1)) |
1498 | #define left_bits(x) ((x<<1) | -(x<<1)) |
Line 458... | Line 1499... | ||
458 | 1499 | ||
459 | /* mask with all bits to left of or equal to least bit of x on */ |
1500 | /* mask with all bits to left of or equal to least bit of x on */ |
Line 460... | Line -... | ||
460 | #define same_or_left_bits(x) ((x) | -(x)) |
- | |
461 | - | ||
462 | - | ||
463 | /* index corresponding to given bit. Use x86 asm if possible */ |
1501 | #define same_or_left_bits(x) ((x) | -(x)) |
464 | 1502 | ||
465 | #define compute_bit2idx(X, I)\ |
1503 | #define compute_bit2idx(X, I)\ |
466 | {\ |
1504 | {\ |
467 | unsigned int J;\ |
1505 | unsigned int J;\ |
468 | __asm__("bsfl\t%1, %0\n\t" : "=r" (J) : "g" (X));\ |
1506 | J = __builtin_ctz(X); \ |
Line -... | Line 1507... | ||
- | 1507 | I = (bindex_t)J;\ |
|
Line -... | Line 1508... | ||
- | 1508 | } |
|
- | 1509 | ||
- | 1510 | /* ----------------------- Runtime Check Support ------------------------- */ |
|
- | 1511 | ||
- | 1512 | /* |
|
- | 1513 | For security, the main invariant is that malloc/free/etc never |
|
- | 1514 | writes to a static address other than malloc_state, unless static |
|
- | 1515 | malloc_state itself has been corrupted, which cannot occur via |
|
469 | I = (bindex_t)J;\ |
1516 | malloc (because of these checks). In essence this means that we |
- | 1517 | believe all pointers, sizes, maps etc held in malloc_state, but |
|
- | 1518 | check all of those linked or offset from other embedded data |
|
- | 1519 | structures. These checks are interspersed with main code in a way |
|
- | 1520 | that tends to minimize their run-time cost. |
|
- | 1521 | ||
- | 1522 | When FOOTERS is defined, in addition to range checking, we also |
|
- | 1523 | verify footer fields of inuse chunks, which can be used to guarantee |
|
- | 1524 | that the mstate controlling malloc/free is intact. This is a |
|
- | 1525 | streamlined version of the approach described by William Robertson |
|
- | 1526 | et al in "Run-time Detection of Heap-based Overflows" LISA'03 |
|
- | 1527 | http://www.usenix.org/events/lisa03/tech/robertson.html The footer |
|
- | 1528 | of an inuse chunk holds the xor of its mstate and a random seed, |
|
- | 1529 | that is checked upon calls to free() and realloc(). This is |
|
- | 1530 | (probabilistically) unguessable from outside the program, but can be |
|
- | 1531 | computed by any code successfully malloc'ing any chunk, so does not |
|
- | 1532 | itself provide protection against code that has already broken |
|
Line -... | Line 1533... | ||
- | 1533 | security through some other means. Unlike Robertson et al, we |
|
470 | } |
1534 | always dynamically check addresses of all offset chunks (previous, |
471 | 1535 | next, etc). This turns out to be cheaper than relying on hashes. |
|
- | 1536 | */ |
|
472 | 1537 | ||
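
The xor scheme is symmetric, so the seed that hides the mstate also recovers
it; a minimal sketch of the round trip performed by mark_inuse_foot and
get_mstate_for below (the function name is hypothetical):

/* Hypothetical sketch: footer = mstate ^ magic, and xor-ing again with
   the same magic recovers the owning mstate; any other value signals a
   corrupted or foreign chunk. */
static void sketch_footer_roundtrip(mstate m) {
  size_t foot = (size_t)m ^ mparams.magic;        /* stored in prev_foot */
  mstate owner = (mstate)(foot ^ mparams.magic);  /* recomputed on free  */
  assert(owner == m);                             /* xor is self-inverse */
}
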
- | 1538 | #if !INSECURE |
|
- | 1539 | /* Check if address a is at least as high as any from MORECORE or MMAP */ |
|
- | 1540 | #define ok_address(M, a) ((char*)(a) >= (M)->least_addr) |
|
- | 1541 | /* Check if address of next chunk n is higher than base chunk p */ |
|
- | 1542 | #define ok_next(p, n) ((char*)(p) < (char*)(n)) |
|
- | 1543 | /* Check if p has inuse status */ |
|
- | 1544 | #define ok_inuse(p) is_inuse(p) |
|
- | 1545 | /* Check if p has its pinuse bit on */ |
|
- | 1546 | #define ok_pinuse(p) pinuse(p) |
|
- | 1547 | ||
- | 1548 | #else /* !INSECURE */ |
|
- | 1549 | #define ok_address(M, a) (1) |
|
- | 1550 | #define ok_next(b, n) (1) |
|
- | 1551 | #define ok_inuse(p) (1) |
|
- | 1552 | #define ok_pinuse(p) (1) |
|
- | 1553 | #endif /* !INSECURE */ |
|
- | 1554 | ||
- | 1555 | #if (FOOTERS && !INSECURE) |
|
- | 1556 | /* Check if (alleged) mstate m has expected magic field */ |
|
- | 1557 | #define ok_magic(M) ((M)->magic == mparams.magic) |
|
- | 1558 | #else /* (FOOTERS && !INSECURE) */ |
|
- | 1559 | #define ok_magic(M) (1) |
|
- | 1560 | #endif /* (FOOTERS && !INSECURE) */ |
|
- | 1561 | ||
- | 1562 | /* In gcc, use __builtin_expect to minimize impact of checks */ |
|
- | 1563 | #if !INSECURE |
|
- | 1564 | #if defined(__GNUC__) && __GNUC__ >= 3 |
|
- | 1565 | #define RTCHECK(e) __builtin_expect(e, 1) |
|
- | 1566 | #else /* GNUC */ |
|
- | 1567 | #define RTCHECK(e) (e) |
|
- | 1568 | #endif /* GNUC */ |
|
- | 1569 | #else /* !INSECURE */ |
|
- | 1570 | #define RTCHECK(e) (1) |
|
- | 1571 | #endif /* !INSECURE */ |
|
- | 1572 | ||
Line 473... | Line 1573... | ||
473 | #define mark_inuse_foot(M,p,s) |
1573 | /* macros to set up inuse chunks with or without footers */ |
Line 474... | Line 1574... | ||
474 | 1574 | ||
475 | /* Get/set size at footer */ |
1575 | #if !FOOTERS |
Line 490... | Line 1590... | ||
490 | 1590 | ||
491 | /* Set size, cinuse and pinuse bit of this chunk */ |
1591 | /* Set size, cinuse and pinuse bit of this chunk */ |
492 | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
1592 | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
Line -... | Line 1593... | ||
- | 1593 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) |
|
Line -... | Line 1594... | ||
- | 1594 | ||
493 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) |
1595 | #else /* FOOTERS */ |
- | 1596 | ||
- | 1597 | /* Set foot of inuse chunk to be xor of mstate and seed */ |
|
494 | 1598 | #define mark_inuse_foot(M,p,s)\ |
|
- | 1599 | (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic)) |
|
- | 1600 | ||
Line 495... | Line -... | ||
495 | - | ||
496 | #define assert(x) |
1601 | #define get_mstate_for(p)\ |
497 | #define RTCHECK(e) __builtin_expect(e, 1) |
1602 | ((mstate)(((mchunkptr)((char*)(p) +\ |
498 | 1603 | (chunksize(p))))->prev_foot ^ mparams.magic)) |
|
499 | #define check_free_chunk(M,P) |
- | |
500 | #define check_inuse_chunk(M,P) |
1604 | |
Line -... | Line 1605... | ||
- | 1605 | #define set_inuse(M,p,s)\ |
|
- | 1606 | ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ |
|
- | 1607 | (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \ |
|
- | 1608 | mark_inuse_foot(M,p,s)) |
|
- | 1609 | ||
- | 1610 | #define set_inuse_and_pinuse(M,p,s)\ |
|
- | 1611 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
|
- | 1612 | (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\ |
|
- | 1613 | mark_inuse_foot(M,p,s)) |
|
- | 1614 | ||
- | 1615 | #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ |
|
- | 1616 | ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ |
|
- | 1617 | mark_inuse_foot(M, p, s)) |
|
- | 1618 | ||
- | 1619 | #endif /* !FOOTERS */ |
|
- | 1620 | ||
- | 1621 | /* ---------------------------- setting mparams -------------------------- */ |
|
- | 1622 | ||
- | 1623 | #if LOCK_AT_FORK |
|
- | 1624 | static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); } |
|
- | 1625 | static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); } |
|
- | 1626 | static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); } |
|
- | 1627 | #endif /* LOCK_AT_FORK */ |
|
- | 1628 | ||
- | 1629 | /* Initialize mparams */ |
|
- | 1630 | static int init_mparams(void) { |
|
- | 1631 | ||
- | 1632 | ACQUIRE_MALLOC_GLOBAL_LOCK(); |
|
- | 1633 | if (mparams.magic == 0) { |
|
- | 1634 | size_t magic; |
|
- | 1635 | size_t psize; |
|
- | 1636 | size_t gsize; |
|
- | 1637 | ||
- | 1638 | psize = 4096; |
|
- | 1639 | gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize); |
|
- | 1640 | ||
- | 1641 | mparams.granularity = gsize; |
|
- | 1642 | mparams.page_size = psize; |
|
- | 1643 | mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; |
|
- | 1644 | mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; |
|
- | 1645 | mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; |
|
- | 1646 | ||
- | 1647 | /* Set up lock for main malloc area */ |
|
- | 1648 | gm->mflags = mparams.default_mflags; |
|
501 | #define check_malloced_chunk(M,P,N) |
1649 | MutexInit(&gm->lock); |
- | 1650 | ||
502 | #define check_mmapped_chunk(M,P) |
1651 | { |
- | 1652 | magic = (size_t)&magic ^ (size_t)0x55555555U; |
|
- | 1653 | magic |= (size_t)8U; /* ensure nonzero */ |
|
- | 1654 | magic &= ~(size_t)7U; /* improve chances of fault for bad values */ |
|
- | 1655 | /* Until memory modes commonly available, use volatile-write */ |
|
- | 1656 | (*(volatile size_t *)(&(mparams.magic))) = magic; |
|
- | 1657 | } |
|
- | 1658 | } |
|
- | 1659 | ||
- | 1660 | RELEASE_MALLOC_GLOBAL_LOCK(); |
|
- | 1661 | return 1; |
|
- | 1662 | } |
|
503 | #define check_malloc_state(M) |
1663 | |
- | 1664 | ||
- | 1665 | #if DEBUG |
|
- | 1666 | /* ------------------------- Debugging Support --------------------------- */ |
|
- | 1667 | ||
- | 1668 | /* Check properties of any chunk, whether free, inuse, mmapped etc */ |
|
- | 1669 | static void do_check_any_chunk(mstate m, mchunkptr p) { |
|
- | 1670 | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
|
504 | #define check_top_chunk(M,P) |
1671 | assert(ok_address(m, p)); |
- | 1672 | } |
|
- | 1673 | ||
- | 1674 | /* Check properties of top chunk */ |
|
- | 1675 | static void do_check_top_chunk(mstate m, mchunkptr p) { |
|
- | 1676 | msegmentptr sp = segment_holding(m, (char*)p); |
|
- | 1677 | size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */ |
|
- | 1678 | assert(sp != 0); |
|
- | 1679 | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
|
- | 1680 | assert(ok_address(m, p)); |
|
- | 1681 | assert(sz == m->topsize); |
|
- | 1682 | assert(sz > 0); |
|
- | 1683 | assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE); |
|
- | 1684 | assert(pinuse(p)); |
|
- | 1685 | assert(!pinuse(chunk_plus_offset(p, sz))); |
|
- | 1686 | } |
|
- | 1687 | ||
- | 1688 | /* Check properties of (inuse) mmapped chunks */ |
|
- | 1689 | static void do_check_mmapped_chunk(mstate m, mchunkptr p) { |
|
- | 1690 | size_t sz = chunksize(p); |
|
- | 1691 | size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD); |
|
- | 1692 | assert(is_mmapped(p)); |
|
- | 1693 | assert(use_mmap(m)); |
|
- | 1694 | assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); |
|
- | 1695 | assert(ok_address(m, p)); |
|
- | 1696 | assert(!is_small(sz)); |
|
505 | 1697 | assert((len & (mparams.page_size-SIZE_T_ONE)) == 0); |
|
- | 1698 | assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD); |
|
- | 1699 | assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0); |
|
- | 1700 | } |
|
- | 1701 | ||
- | 1702 | /* Check properties of inuse chunks */ |
|
- | 1703 | static void do_check_inuse_chunk(mstate m, mchunkptr p) { |
|
- | 1704 | do_check_any_chunk(m, p); |
|
- | 1705 | assert(is_inuse(p)); |
|
- | 1706 | assert(next_pinuse(p)); |
|
- | 1707 | /* If not pinuse and not mmapped, previous chunk has OK offset */ |
|
- | 1708 | assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p); |
|
- | 1709 | if (is_mmapped(p)) |
|
- | 1710 | do_check_mmapped_chunk(m, p); |
|
- | 1711 | } |
|
- | 1712 | ||
506 | /* Check if address a is at least as high as any from MORECORE or MMAP */ |
1713 | /* Check properties of free chunks */ |
- | 1714 | static void do_check_free_chunk(mstate m, mchunkptr p) { |
|
- | 1715 | size_t sz = chunksize(p); |
|
- | 1716 | mchunkptr next = chunk_plus_offset(p, sz); |
|
- | 1717 | do_check_any_chunk(m, p); |
|
- | 1718 | assert(!is_inuse(p)); |
|
- | 1719 | assert(!next_pinuse(p)); |
|
- | 1720 | assert (!is_mmapped(p)); |
|
- | 1721 | if (p != m->dv && p != m->top) { |
|
- | 1722 | if (sz >= MIN_CHUNK_SIZE) { |
|
- | 1723 | assert((sz & CHUNK_ALIGN_MASK) == 0); |
|
- | 1724 | assert(is_aligned(chunk2mem(p))); |
|
- | 1725 | assert(next->prev_foot == sz); |
|
- | 1726 | assert(pinuse(p)); |
|
- | 1727 | assert (next == m->top || is_inuse(next)); |
|
- | 1728 | assert(p->fd->bk == p); |
|
- | 1729 | assert(p->bk->fd == p); |
|
- | 1730 | } |
|
- | 1731 | else /* markers are always of size SIZE_T_SIZE */ |
|
- | 1732 | assert(sz == SIZE_T_SIZE); |
|
- | 1733 | } |
|
- | 1734 | } |
|
- | 1735 | ||
- | 1736 | /* Check properties of malloced chunks at the point they are malloced */ |
|
- | 1737 | static void do_check_malloced_chunk(mstate m, void* mem, size_t s) { |
|
- | 1738 | if (mem != 0) { |
|
- | 1739 | mchunkptr p = mem2chunk(mem); |
|
- | 1740 | size_t sz = p->head & ~INUSE_BITS; |
|
- | 1741 | do_check_inuse_chunk(m, p); |
|
- | 1742 | assert((sz & CHUNK_ALIGN_MASK) == 0); |
|
- | 1743 | assert(sz >= MIN_CHUNK_SIZE); |
|
- | 1744 | assert(sz >= s); |
|
507 | #define ok_address(M, a) ((char*)(a) >= (M)->least_addr) |
1745 | /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */ |
- | 1746 | assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE)); |
|
- | 1747 | } |
|
- | 1748 | } |
|
508 | /* Check if address of next chunk n is higher than base chunk p */ |
1749 | |
- | 1750 | /* Check a tree and its subtrees. */ |
|
- | 1751 | static void do_check_tree(mstate m, tchunkptr t) { |
|
- | 1752 | tchunkptr head = 0; |
|
- | 1753 | tchunkptr u = t; |
|
- | 1754 | bindex_t tindex = t->index; |
|
- | 1755 | size_t tsize = chunksize(t); |
|
- | 1756 | bindex_t idx; |
|
- | 1757 | compute_tree_index(tsize, idx); |
|
- | 1758 | assert(tindex == idx); |
|
- | 1759 | assert(tsize >= MIN_LARGE_SIZE); |
|
- | 1760 | assert(tsize >= minsize_for_tree_index(idx)); |
|
- | 1761 | assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1)))); |
|
- | 1762 | ||
- | 1763 | do { /* traverse through chain of same-sized nodes */ |
|
- | 1764 | do_check_any_chunk(m, ((mchunkptr)u)); |
|
- | 1765 | assert(u->index == tindex); |
|
- | 1766 | assert(chunksize(u) == tsize); |
|
- | 1767 | assert(!is_inuse(u)); |
|
- | 1768 | assert(!next_pinuse(u)); |
|
- | 1769 | assert(u->fd->bk == u); |
|
- | 1770 | assert(u->bk->fd == u); |
|
- | 1771 | if (u->parent == 0) { |
|
- | 1772 | assert(u->child[0] == 0); |
|
- | 1773 | assert(u->child[1] == 0); |
|
- | 1774 | } |
|
- | 1775 | else { |
|
- | 1776 | assert(head == 0); /* only one node on chain has parent */ |
|
- | 1777 | head = u; |
|
- | 1778 | assert(u->parent != u); |
|
- | 1779 | assert (u->parent->child[0] == u || |
|
- | 1780 | u->parent->child[1] == u || |
|
- | 1781 | *((tbinptr*)(u->parent)) == u); |
|
- | 1782 | if (u->child[0] != 0) { |
|
- | 1783 | assert(u->child[0]->parent == u); |
|
- | 1784 | assert(u->child[0] != u); |
|
- | 1785 | do_check_tree(m, u->child[0]); |
|
- | 1786 | } |
|
- | 1787 | if (u->child[1] != 0) { |
|
- | 1788 | assert(u->child[1]->parent == u); |
|
- | 1789 | assert(u->child[1] != u); |
|
- | 1790 | do_check_tree(m, u->child[1]); |
|
- | 1791 | } |
|
- | 1792 | if (u->child[0] != 0 && u->child[1] != 0) { |
|
- | 1793 | assert(chunksize(u->child[0]) < chunksize(u->child[1])); |
|
- | 1794 | } |
|
- | 1795 | } |
|
- | 1796 | u = u->fd; |
|
- | 1797 | } while (u != t); |
|
- | 1798 | assert(head != 0); |
|
- | 1799 | } |
|
- | 1800 | ||
- | 1801 | /* Check all the chunks in a treebin. */ |
|
- | 1802 | static void do_check_treebin(mstate m, bindex_t i) { |
|
- | 1803 | tbinptr* tb = treebin_at(m, i); |
|
- | 1804 | tchunkptr t = *tb; |
|
- | 1805 | int empty = (m->treemap & (1U << i)) == 0; |
|
- | 1806 | if (t == 0) |
|
- | 1807 | assert(empty); |
|
- | 1808 | if (!empty) |
|
- | 1809 | do_check_tree(m, t); |
|
- | 1810 | } |
|
- | 1811 | ||
- | 1812 | /* Check all the chunks in a smallbin. */ |
|
- | 1813 | static void do_check_smallbin(mstate m, bindex_t i) { |
|
- | 1814 | sbinptr b = smallbin_at(m, i); |
|
- | 1815 | mchunkptr p = b->bk; |
|
- | 1816 | unsigned int empty = (m->smallmap & (1U << i)) == 0; |
|
- | 1817 | if (p == b) |
|
- | 1818 | assert(empty); |
|
- | 1819 | if (!empty) { |
|
- | 1820 | for (; p != b; p = p->bk) { |
|
- | 1821 | size_t size = chunksize(p); |
|
- | 1822 | mchunkptr q; |
|
- | 1823 | /* each chunk claims to be free */ |
|
- | 1824 | do_check_free_chunk(m, p); |
|
- | 1825 | /* chunk belongs in bin */ |
|
- | 1826 | assert(small_index(size) == i); |
|
- | 1827 | assert(p->bk == b || chunksize(p->bk) == chunksize(p)); |
|
- | 1828 | /* chunk is followed by an inuse chunk */ |
|
- | 1829 | q = next_chunk(p); |
|
- | 1830 | if (q->head != FENCEPOST_HEAD) |
|
- | 1831 | do_check_inuse_chunk(m, q); |
|
- | 1832 | } |
|
- | 1833 | } |
|
- | 1834 | } |
|
- | 1835 | ||
- | 1836 | /* Find x in a bin. Used in other check functions. */ |
|
- | 1837 | static int bin_find(mstate m, mchunkptr x) { |
|
- | 1838 | size_t size = chunksize(x); |
|
- | 1839 | if (is_small(size)) { |
|
- | 1840 | bindex_t sidx = small_index(size); |
|
- | 1841 | sbinptr b = smallbin_at(m, sidx); |
|
- | 1842 | if (smallmap_is_marked(m, sidx)) { |
|
- | 1843 | mchunkptr p = b; |
|
- | 1844 | do { |
|
- | 1845 | if (p == x) |
|
- | 1846 | return 1; |
|
- | 1847 | } while ((p = p->fd) != b); |
|
- | 1848 | } |
|
- | 1849 | } |
|
- | 1850 | else { |
|
- | 1851 | bindex_t tidx; |
|
- | 1852 | compute_tree_index(size, tidx); |
|
- | 1853 | if (treemap_is_marked(m, tidx)) { |
|
- | 1854 | tchunkptr t = *treebin_at(m, tidx); |
|
- | 1855 | size_t sizebits = size << leftshift_for_tree_index(tidx); |
|
- | 1856 | while (t != 0 && chunksize(t) != size) { |
|
- | 1857 | t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; |
|
- | 1858 | sizebits <<= 1; |
|
- | 1859 | } |
|
- | 1860 | if (t != 0) { |
|
- | 1861 | tchunkptr u = t; |
|
- | 1862 | do { |
|
- | 1863 | if (u == (tchunkptr)x) |
|
- | 1864 | return 1; |
|
- | 1865 | } while ((u = u->fd) != t); |
|
- | 1866 | } |
|
- | 1867 | } |
|
- | 1868 | } |
|
- | 1869 | return 0; |
|
- | 1870 | } |
|
- | 1871 | ||
- | 1872 | /* Traverse each chunk and check it; return total */ |
|
- | 1873 | static size_t traverse_and_check(mstate m) { |
|
- | 1874 | size_t sum = 0; |
|
- | 1875 | if (is_initialized(m)) { |
|
- | 1876 | msegmentptr s = &m->seg; |
|
- | 1877 | sum += m->topsize + TOP_FOOT_SIZE; |
|
- | 1878 | while (s != 0) { |
|
- | 1879 | mchunkptr q = align_as_chunk(s->base); |
|
- | 1880 | mchunkptr lastq = 0; |
|
- | 1881 | assert(pinuse(q)); |
|
- | 1882 | while (segment_holds(s, q) && |
|
- | 1883 | q != m->top && q->head != FENCEPOST_HEAD) { |
|
- | 1884 | sum += chunksize(q); |
|
- | 1885 | if (is_inuse(q)) { |
|
- | 1886 | assert(!bin_find(m, q)); |
|
- | 1887 | do_check_inuse_chunk(m, q); |
|
- | 1888 | } |
|
- | 1889 | else { |
|
- | 1890 | assert(q == m->dv || bin_find(m, q)); |
|
- | 1891 | assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */ |
|
- | 1892 | do_check_free_chunk(m, q); |
|
- | 1893 | } |
|
- | 1894 | lastq = q; |
|
- | 1895 | q = next_chunk(q); |
|
- | 1896 | } |
|
- | 1897 | s = s->next; |
|
- | 1898 | } |
|
- | 1899 | } |
|
- | 1900 | return sum; |
|
- | 1901 | } |
|
- | 1902 | ||
- | 1903 | ||
- | 1904 | /* Check all properties of malloc_state. */ |
|
- | 1905 | static void do_check_malloc_state(mstate m) { |
|
- | 1906 | bindex_t i; |
|
- | 1907 | size_t total; |
|
- | 1908 | /* check bins */ |
|
- | 1909 | for (i = 0; i < NSMALLBINS; ++i) |
|
- | 1910 | do_check_smallbin(m, i); |
|
- | 1911 | for (i = 0; i < NTREEBINS; ++i) |
|
- | 1912 | do_check_treebin(m, i); |
|
- | 1913 | ||
- | 1914 | if (m->dvsize != 0) { /* check dv chunk */ |
|
- | 1915 | do_check_any_chunk(m, m->dv); |
|
- | 1916 | assert(m->dvsize == chunksize(m->dv)); |
|
- | 1917 | assert(m->dvsize >= MIN_CHUNK_SIZE); |
|
- | 1918 | assert(bin_find(m, m->dv) == 0); |
|
- | 1919 | } |
|
- | 1920 | ||
- | 1921 | if (m->top != 0) { /* check top chunk */ |
|
- | 1922 | do_check_top_chunk(m, m->top); |
|
- | 1923 | /*assert(m->topsize == chunksize(m->top)); redundant */ |
|
- | 1924 | assert(m->topsize > 0); |
|
- | 1925 | assert(bin_find(m, m->top) == 0); |
|
- | 1926 | } |
|
- | 1927 | ||
Line 509... | Line 1928... | ||
509 | #define ok_next(p, n) ((char*)(p) < (char*)(n)) |
1928 | total = traverse_and_check(m); |
510 | /* Check if p has inuse status */ |
1929 | assert(total <= m->footprint); |
511 | #define ok_inuse(p) is_inuse(p) |
1930 | assert(m->footprint <= m->max_footprint); |
512 | /* Check if p has its pinuse bit on */ |
1931 | } |
Line 556... | Line 1975... | ||
556 | B->fd = P;\ |
1975 | B->fd = P;\ |
557 | F->bk = P;\ |
1976 | F->bk = P;\ |
558 | P->fd = F;\ |
1977 | P->fd = F;\ |
559 | P->bk = B;\ |
1978 | P->bk = B;\ |
560 | } |
1979 | } |
- | 1980 | /* ----------------------- Operations on smallbins ----------------------- */ |
|
- | 1981 | ||
- | 1982 | /* |
|
- | 1983 | Various forms of linking and unlinking are defined as macros. Even |
|
- | 1984 | the ones for trees, which are very long but have very short typical |
|
- | 1985 | paths. This is ugly but reduces reliance on inlining support of |
|
- | 1986 | compilers. |
|
- | 1987 | */ |
|
- | 1988 | ||
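
All of the smallbin macros below manipulate one shape: a circular
doubly-linked list headed by a dummy chunk held in the mstate, so an empty
bin is a head whose fd and bk point to itself (see init_bins) and head
insertion needs no empty-list special case beyond the bitmap update. A
plain-function sketch of the splice, minus the checks the macro adds (the
helper name is hypothetical):

/* Hypothetical sketch of the core splice done by insert_small_chunk. */
static void sketch_bin_insert(mchunkptr head, mchunkptr p) {
  mchunkptr f = head->fd;   /* first chunk, or head itself when empty */
  head->fd = p;
  f->bk = p;
  p->fd = f;
  p->bk = head;
}
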
- | 1989 | /* Link a free chunk into a smallbin */ |
|
- | 1990 | #define insert_small_chunk(M, P, S) {\ |
|
- | 1991 | bindex_t I = small_index(S);\ |
|
- | 1992 | mchunkptr B = smallbin_at(M, I);\ |
|
- | 1993 | mchunkptr F = B;\ |
|
- | 1994 | assert(S >= MIN_CHUNK_SIZE);\ |
|
- | 1995 | if (!smallmap_is_marked(M, I))\ |
|
- | 1996 | mark_smallmap(M, I);\ |
|
- | 1997 | else if (RTCHECK(ok_address(M, B->fd)))\ |
|
- | 1998 | F = B->fd;\ |
|
- | 1999 | else {\ |
|
- | 2000 | CORRUPTION_ERROR_ACTION(M);\ |
|
- | 2001 | }\ |
|
- | 2002 | B->fd = P;\ |
|
- | 2003 | F->bk = P;\ |
|
- | 2004 | P->fd = F;\ |
|
- | 2005 | P->bk = B;\ |
|
- | 2006 | } |
|
Line 561... | Line 2007... | ||
561 | 2007 | ||
562 | /* Unlink a chunk from a smallbin */ |
2008 | /* Unlink a chunk from a smallbin */ |
563 | #define unlink_small_chunk(M, P, S) {\ |
2009 | #define unlink_small_chunk(M, P, S) {\ |
564 | mchunkptr F = P->fd;\ |
2010 | mchunkptr F = P->fd;\ |
565 | mchunkptr B = P->bk;\ |
2011 | mchunkptr B = P->bk;\ |
566 | bindex_t I = small_index(S);\ |
2012 | bindex_t I = small_index(S);\ |
567 | assert(P != B);\ |
2013 | assert(P != B);\ |
568 | assert(P != F);\ |
2014 | assert(P != F);\ |
- | 2015 | assert(chunksize(P) == small_index2size(I));\ |
|
569 | assert(chunksize(P) == small_index2size(I));\ |
2016 | if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \ |
570 | if (F == B)\ |
2017 | if (B == F) {\ |
- | 2018 | clear_smallmap(M, I);\ |
|
571 | clear_smallmap(M, I);\ |
2019 | }\ |
572 | else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\ |
2020 | else if (RTCHECK(B == smallbin_at(M,I) ||\ |
573 | (B == smallbin_at(M,I) || ok_address(M, B)))) {\ |
2021 | (ok_address(M, B) && B->fd == P))) {\ |
574 | F->bk = B;\ |
2022 | F->bk = B;\ |
575 | B->fd = F;\ |
2023 | B->fd = F;\ |
576 | }\ |
2024 | }\ |
577 | else {\ |
2025 | else {\ |
578 | CORRUPTION_ERROR_ACTION(M);\ |
2026 | CORRUPTION_ERROR_ACTION(M);\ |
- | 2027 | }\ |
|
- | 2028 | }\ |
|
- | 2029 | else {\ |
|
- | 2030 | CORRUPTION_ERROR_ACTION(M);\ |
|
579 | }\ |
2031 | }\ |
Line 580... | Line 2032... | ||
580 | } |
2032 | } |
581 | 2033 | ||
582 | /* Unlink the first chunk from a smallbin */ |
2034 | /* Unlink the first chunk from a smallbin */ |
583 | #define unlink_first_small_chunk(M, B, P, I) {\ |
2035 | #define unlink_first_small_chunk(M, B, P, I) {\ |
584 | mchunkptr F = P->fd;\ |
2036 | mchunkptr F = P->fd;\ |
585 | assert(P != B);\ |
2037 | assert(P != B);\ |
586 | assert(P != F);\ |
2038 | assert(P != F);\ |
587 | assert(chunksize(P) == small_index2size(I));\ |
2039 | assert(chunksize(P) == small_index2size(I));\ |
- | 2040 | if (B == F) {\ |
|
588 | if (B == F)\ |
2041 | clear_smallmap(M, I);\ |
589 | clear_smallmap(M, I);\ |
- | |
590 | else if (RTCHECK(ok_address(M, F))) {\ |
2042 | }\ |
- | 2043 | else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\ |
|
591 | B->fd = F;\ |
2044 | F->bk = B;\ |
592 | F->bk = B;\ |
2045 | B->fd = F;\ |
593 | }\ |
2046 | }\ |
594 | else {\ |
2047 | else {\ |
595 | CORRUPTION_ERROR_ACTION(M);\ |
2048 | CORRUPTION_ERROR_ACTION(M);\ |
Line 596... | Line 2049... | ||
596 | }\ |
2049 | }\ |
597 | } |
2050 | } |
598 | 2051 | ||
599 | /* Replace dv node, binning the old one */ |
2052 | /* Replace dv node, binning the old one */ |
- | 2053 | /* Used only when dvsize known to be small */ |
|
600 | /* Used only when dvsize known to be small */ |
2054 | #define replace_dv(M, P, S) {\ |
601 | #define replace_dv(M, P, S) {\ |
2055 | size_t DVS = M->dvsize;\ |
602 | size_t DVS = M->dvsize;\ |
- | |
603 | if (DVS != 0) {\ |
2056 | assert(is_small(DVS));\ |
604 | mchunkptr DV = M->dv;\ |
2057 | if (DVS != 0) {\ |
605 | assert(is_small(DVS));\ |
2058 | mchunkptr DV = M->dv;\ |
606 | insert_small_chunk(M, DV, DVS);\ |
2059 | insert_small_chunk(M, DV, DVS);\ |
607 | }\ |
2060 | }\ |
Line 608... | Line -... | ||
608 | M->dvsize = S;\ |
- | |
609 | M->dv = P;\ |
2061 | M->dvsize = S;\ |
Line 610... | Line 2062... | ||
610 | } |
2062 | M->dv = P;\ |
611 | 2063 | } |
|
612 | 2064 | ||
Line 685... | Line 2137... | ||
685 | tchunkptr XP = X->parent;\ |
2137 | tchunkptr XP = X->parent;\ |
686 | tchunkptr R;\ |
2138 | tchunkptr R;\ |
687 | if (X->bk != X) {\ |
2139 | if (X->bk != X) {\ |
688 | tchunkptr F = X->fd;\ |
2140 | tchunkptr F = X->fd;\ |
689 | R = X->bk;\ |
2141 | R = X->bk;\ |
690 | if (RTCHECK(ok_address(M, F))) {\ |
2142 | if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\ |
691 | F->bk = R;\ |
2143 | F->bk = R;\ |
692 | R->fd = F;\ |
2144 | R->fd = F;\ |
693 | }\ |
2145 | }\ |
694 | else {\ |
2146 | else {\ |
695 | CORRUPTION_ERROR_ACTION(M);\ |
2147 | CORRUPTION_ERROR_ACTION(M);\ |
Line 761... | Line 2213... | ||
761 | #define unlink_chunk(M, P, S)\ |
2213 | #define unlink_chunk(M, P, S)\ |
762 | if (is_small(S)) unlink_small_chunk(M, P, S)\ |
2214 | if (is_small(S)) unlink_small_chunk(M, P, S)\ |
763 | else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } |
2215 | else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } |
Line 764... | Line -... | ||
764 | - | ||
765 | - | ||
766 | /* -------------------------- system alloc setup ------------------------- */ |
- | |
767 | - | ||
768 | /* Operations on mflags */ |
- | |
769 | - | ||
770 | #define use_lock(M) ((M)->mflags & USE_LOCK_BIT) |
- | |
771 | #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT) |
- | |
772 | #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT) |
- | |
773 | - | ||
774 | #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT) |
- | |
775 | #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT) |
- | |
776 | #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT) |
- | |
777 | - | ||
778 | #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT) |
- | |
779 | #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT) |
- | |
780 | - | ||
781 | #define set_lock(M,L)\ |
- | |
782 | ((M)->mflags = (L)?\ |
- | |
783 | ((M)->mflags | USE_LOCK_BIT) :\ |
- | |
784 | ((M)->mflags & ~USE_LOCK_BIT)) |
- | |
785 | - | ||
786 | /* page-align a size */ |
- | |
787 | #define page_align(S)\ |
- | |
788 | (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE)) |
- | |
789 | - | ||
790 | /* granularity-align a size */ |
- | |
791 | #define granularity_align(S)\ |
- | |
792 | (((S) + (mparams.granularity - SIZE_T_ONE))\ |
- | |
793 | & ~(mparams.granularity - SIZE_T_ONE)) |
- | |
794 | - | ||
795 | - | ||
796 | /* For mmap, use granularity alignment */ |
- | |
797 | #define mmap_align(S) granularity_align(S) |
- | |
798 | - | ||
799 | /* For sys_alloc, enough padding to ensure can malloc request on success */ |
- | |
800 | #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT) |
- | |
801 | - | ||
802 | #define is_page_aligned(S)\ |
- | |
803 | (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0) |
- | |
Line 804... | Line -... | ||
804 | #define is_granularity_aligned(S)\ |
- | |
805 | (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0) |
- | |
806 | - | ||
Line 807... | Line -... | ||
807 | /* True if segment S holds address A */ |
- | |
808 | #define segment_holds(S, A)\ |
- | |
809 | ((char*)(A) >= S->base && (char*)(A) < S->base + S->size) |
- | |
810 | - | ||
811 | /* Return segment holding given address */ |
- | |
812 | static msegmentptr segment_holding(mstate m, char* addr) |
- | |
813 | { |
- | |
814 | msegmentptr sp = &m->seg; |
- | |
815 | for (;;) { |
- | |
816 | if (addr >= sp->base && addr < sp->base + sp->size) |
- | |
817 | return sp; |
- | |
818 | if ((sp = sp->next) == 0) |
- | |
819 | return 0; |
- | |
820 | } |
- | |
821 | } |
- | |
822 | - | ||
823 | /* Return true if segment contains a segment link */ |
- | |
824 | static int has_segment_link(mstate m, msegmentptr ss) |
- | |
825 | { |
- | |
826 | msegmentptr sp = &m->seg; |
- | |
827 | for (;;) { |
- | |
828 | if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size) |
- | |
829 | return 1; |
- | |
Line 830... | Line 2216... | ||
830 | if ((sp = sp->next) == 0) |
2216 | |
831 | return 0; |
2217 | |
832 | } |
2218 | |
833 | } |
2219 | |
Line 842... | Line 2228... | ||
842 | static inline int os_munmap(void* ptr, size_t size) |
2228 | static inline int os_munmap(void* ptr, size_t size) |
843 | { |
2229 | { |
844 | return (KernelFree(ptr) != 0) ? 0 : -1; |
2230 | return (KernelFree(ptr) != 0) ? 0 : -1; |
845 | } |
2231 | } |
Line 846... | Line -... | ||
846 | - | ||
Line 847... | Line 2232... | ||
847 | #define should_trim(M,s) ((s) > (M)->trim_check) |
2232 | |
848 | 2233 | ||
849 | 2234 | ||
Line 850... | Line -... | ||
850 | #define MMAP_DEFAULT(s) os_mmap(s) |
- | |
851 | #define MUNMAP_DEFAULT(a, s) os_munmap((a), (s)) |
- | |
Line 852... | Line 2235... | ||
852 | #define DIRECT_MMAP_DEFAULT(s) os_mmap(s) |
2235 | #define MMAP_DEFAULT(s) os_mmap(s) |
Line 853... | Line 2236... | ||
853 | 2236 | #define MUNMAP_DEFAULT(a, s) os_munmap((a), (s)) |
|
854 | #define internal_malloc(m, b) malloc(b) |
2237 | #define DIRECT_MMAP_DEFAULT(s) os_mmap(s) |
Line 863... | Line 2246... | ||
863 | and also allows adjustment of the returned chunk to meet alignment |
2246 | and also allows adjustment of the returned chunk to meet alignment |
864 | requirements (especially in memalign). |
2247 | requirements (especially in memalign). |
865 | */ |
2248 | */ |
Line 866... | Line 2249... | ||
866 | 2249 | ||
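
Concretely, the request below is padded so the returned region can hold the
realignment offset, the chunk overhead, and the trailing fenceposts; a worked
sketch of the rounding, assuming the 4096-byte page size set in init_mparams
and the default MALLOC_ALIGNMENT of 2*sizeof(size_t) (helper name
hypothetical):

/* Hypothetical sketch of the mmap request size computed in mmap_alloc. */
static size_t sketch_mmap_request(size_t nb) {
  size_t pad = 6 * sizeof(size_t)          /* SIX_SIZE_T_SIZES           */
             + (2 * sizeof(size_t) - 1);   /* CHUNK_ALIGN_MASK (default) */
  size_t psize = 4096;                     /* mparams.page_size here     */
  return (nb + pad + psize - 1) & ~(psize - 1);   /* mmap_align          */
}

On a 32-bit build, for example, nb == 10000 yields a 12288-byte request.
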
867 | /* Malloc using mmap */ |
2250 | /* Malloc using mmap */ |
868 | static void* mmap_alloc(mstate m, size_t nb) |
- | |
869 | { |
2251 | static void* mmap_alloc(mstate m, size_t nb) { |
- | 2252 | size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
|
- | 2253 | if (m->footprint_limit != 0) { |
|
870 | size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
2254 | size_t fp = m->footprint + mmsize; |
- | 2255 | if (fp <= m->footprint || fp > m->footprint_limit) |
|
871 | if (mmsize > nb) /* Check for wrap around 0 */ |
2256 | return 0; |
- | 2257 | } |
|
872 | { |
2258 | if (mmsize > nb) { /* Check for wrap around 0 */ |
873 | char* mm = (char*)(os_mmap(mmsize)); |
2259 | char* mm = (char*)(CALL_DIRECT_MMAP(mmsize)); |
874 | if (mm != CMFAIL) |
- | |
875 | { |
2260 | if (mm != CMFAIL) { |
876 | size_t offset = align_offset(chunk2mem(mm)); |
2261 | size_t offset = align_offset(chunk2mem(mm)); |
877 | size_t psize = mmsize - offset - MMAP_FOOT_PAD; |
2262 | size_t psize = mmsize - offset - MMAP_FOOT_PAD; |
878 | mchunkptr p = (mchunkptr)(mm + offset); |
2263 | mchunkptr p = (mchunkptr)(mm + offset); |
879 | p->prev_foot = offset; |
2264 | p->prev_foot = offset; |
Line 893... | Line 2278... | ||
893 | } |
2278 | } |
894 | return 0; |
2279 | return 0; |
895 | } |
2280 | } |
Line 896... | Line 2281... | ||
896 | 2281 | ||
897 | /* Realloc using mmap */ |
2282 | /* Realloc using mmap */ |
898 | static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) |
- | |
899 | { |
2283 | static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) { |
- | 2284 | size_t oldsize = chunksize(oldp); |
|
900 | size_t oldsize = chunksize(oldp); |
2285 | (void)flags; /* placate people compiling -Wunused */ |
901 | if (is_small(nb)) /* Can't shrink mmap regions below small size */ |
2286 | if (is_small(nb)) /* Can't shrink mmap regions below small size */ |
902 | return 0; |
2287 | return 0; |
903 | /* Keep old chunk if big enough but not too big */ |
2288 | /* Keep old chunk if big enough but not too big */ |
904 | if (oldsize >= nb + SIZE_T_SIZE && |
2289 | if (oldsize >= nb + SIZE_T_SIZE && |
905 | (oldsize - nb) <= (mparams.granularity << 1)) |
2290 | (oldsize - nb) <= (mparams.granularity << 1)) |
906 | return oldp; |
2291 | return oldp; |
907 | else |
- | |
908 | { |
2292 | else { |
909 | size_t offset = oldp->prev_foot; |
2293 | size_t offset = oldp->prev_foot; |
910 | size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; |
2294 | size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; |
911 | size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
2295 | size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); |
912 | char* cp = (char*)CALL_MREMAP((char*)oldp - offset, |
2296 | char* cp = (char*)CALL_MREMAP((char*)oldp - offset, |
913 | oldmmsize, newmmsize, 1); |
2297 | oldmmsize, newmmsize, flags); |
914 | if (cp != CMFAIL) |
- | |
915 | { |
2298 | if (cp != CMFAIL) { |
916 | mchunkptr newp = (mchunkptr)(cp + offset); |
2299 | mchunkptr newp = (mchunkptr)(cp + offset); |
917 | size_t psize = newmmsize - offset - MMAP_FOOT_PAD; |
2300 | size_t psize = newmmsize - offset - MMAP_FOOT_PAD; |
918 | newp->head = psize; |
2301 | newp->head = psize; |
919 | mark_inuse_foot(m, newp, psize); |
2302 | mark_inuse_foot(m, newp, psize); |
Line 929... | Line 2312... | ||
929 | } |
2312 | } |
930 | } |
2313 | } |
931 | return 0; |
2314 | return 0; |
932 | } |
2315 | } |
Line 933... | Line -... | ||
933 | - | ||
934 | /* ---------------------------- setting mparams -------------------------- */ |
- | |
935 | - | ||
936 | /* Initialize mparams */ |
- | |
937 | static int init_mparams(void) { |
- | |
938 | - | ||
939 | ACQUIRE_MALLOC_GLOBAL_LOCK(); |
- | |
940 | - | ||
941 | if (mparams.magic == 0) |
- | |
942 | { |
- | |
943 | size_t magic; |
- | |
944 | size_t psize; |
- | |
945 | size_t gsize; |
- | |
946 | - | ||
947 | psize = 4096; |
- | |
948 | gsize = DEFAULT_GRANULARITY; |
- | |
949 | - | ||
950 | /* Sanity-check configuration: |
- | |
951 | size_t must be unsigned and as wide as pointer type. |
- | |
952 | ints must be at least 4 bytes. |
- | |
953 | alignment must be at least 8. |
- | |
954 | Alignment, min chunk size, and page size must all be powers of 2. |
- | |
955 | */ |
- | |
956 | - | ||
957 | mparams.granularity = gsize; |
- | |
958 | mparams.page_size = psize; |
- | |
959 | mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; |
- | |
960 | mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; |
- | |
961 | mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; |
- | |
962 | - | ||
963 | /* Set up lock for main malloc area */ |
- | |
964 | gm->mflags = mparams.default_mflags; |
- | |
965 | MutexInit(&gm->lock); |
- | |
966 | - | ||
967 | magic = (size_t)(GetTimerTicks() ^ (size_t)0x55555555U); |
- | |
968 | magic |= (size_t)8U; /* ensure nonzero */ |
- | |
969 | magic &= ~(size_t)7U; /* improve chances of fault for bad values */ |
- | |
970 | mparams.magic = magic; |
- | |
971 | } |
- | |
972 | - | ||
973 | RELEASE_MALLOC_GLOBAL_LOCK(); |
- | |
974 | return 1; |
- | |
Line 975... | Line 2316... | ||
975 | } |
2316 | |
/* -------------------------- mspace management -------------------------- */

/* Initialize top chunk and its size */
static void init_top(mstate m, mchunkptr p, size_t psize) {
  /* Ensure alignment */
  /* ... (lines 2323-2330 not shown in this revision view) ... */
  chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
  m->trim_check = mparams.trim_threshold; /* reset on each update */
}
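/* Illustrative note (an addition, not in the original): trim_check is the
   running threshold that sys_trim consults via should_trim(); resetting it
   to mparams.trim_threshold on every top update means trimming is only
   attempted once the top chunk outgrows the configured threshold again.
   A sketch of the comparison, compiled out with #if 0: */
#if 0
static int top_wants_trim(mstate m) {
  return m->topsize > m->trim_check; /* what should_trim() evaluates */
}
#endif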
/* Initialize bins for a new mstate that is otherwise zeroed out */
static void init_bins(mstate m) {
  /* Establish circular links for smallbins */
  bindex_t i;
  for (i = 0; i < NSMALLBINS; ++i) {
    sbinptr bin = smallbin_at(m,i);
    bin->fd = bin->bk = bin;
  }
}
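/* Illustrative sketch (not part of the original): after init_bins each
   smallbin header is a one-element circular list, so emptiness is the
   single comparison bin->fd == bin and insertion/removal need no null
   checks. Assuming the fd/bk fields used above; compiled out with #if 0. */
#if 0
static int smallbin_is_empty(sbinptr bin) {
  return bin->fd == bin; /* sentinel points at itself when empty */
}
#endif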
/* Allocate chunk and prepend remainder with chunk in successor base. */
static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                           size_t nb) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  size_t psize = (char*)oldfirst - (char*)p;
  /* ... (lines 2351-2382 not shown in this revision view) ... */
  check_malloced_chunk(m, chunk2mem(p), nb);
  return chunk2mem(p);
}
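/* Illustrative note (an addition): prepend_alloc carves the request out of
   newbase and folds the leftover into the chunk at oldbase; the leftover
   size is pure pointer arithmetic over the two aligned chunk addresses.
   Compiled out with #if 0. */
#if 0
static size_t prepend_remainder(char* newbase, char* oldbase) {
  mchunkptr p = align_as_chunk(newbase);
  mchunkptr oldfirst = align_as_chunk(oldbase);
  return (size_t)((char*)oldfirst - (char*)p); /* == psize above */
}
#endif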
/* Add a segment to hold a new noncontiguous region */
static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
  /* Determine locations and sizes of segment, fenceposts, old top */
  char* old_top = (char*)m->top;
  msegmentptr oldsp = segment_holding(m, old_top);
  char* old_end = oldsp->base + oldsp->size;
  /* ... (lines 2393-2437 not shown in this revision view) ... */
}
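/* Illustrative sketch (not part of the original): segments form a singly
   linked list rooted at m->seg, and the lookup used throughout this file
   (segment_holding) is a linear scan over that list, along these lines.
   Compiled out with #if 0. */
#if 0
static msegmentptr find_segment_for(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;                 /* segment containing addr */
    if ((sp = sp->next) == 0)
      return 0;                  /* not in any segment */
  }
}
#endif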
/* -------------------------- System allocation -------------------------- */

/* Get memory from system using MORECORE or MMAP */
static void* sys_alloc(mstate m, size_t nb) {
  char* tbase = CMFAIL;
  size_t tsize = 0;
  flag_t mmap_flag = 0;
  size_t asize; /* allocation size */

  ensure_initialization();

  printf("%s %u bytes\n", __FUNCTION__, (unsigned)nb);

  /* Directly map large chunks, but only if already initialized */
  if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
    void* mem = mmap_alloc(m, nb);
    if (mem != 0)
      return mem;
  }

  asize = granularity_align(nb + SYS_ALLOC_PADDING);
  if (asize <= nb)
    return 0; /* wraparound */
  if (m->footprint_limit != 0) {
    size_t fp = m->footprint + asize;
    if (fp <= m->footprint || fp > m->footprint_limit)
      return 0;
  }

  /* ... (lines 2468-2485 not shown in this revision view) ...
     we can malloc nb bytes upon success, so pad with enough space for
     top_foot, plus alignment-pad to make sure we don't lose bytes if
     not on boundary, and round this up to a granularity unit.
  */

#if 0
  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
    char* br = CMFAIL;
    size_t ssize = asize; /* sbrk call size */
    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
    ACQUIRE_MALLOC_GLOBAL_LOCK();

    if (ss == 0) {  /* First time through or recovery */
      char* base = (char*)CALL_MORECORE(0);
      if (base != CMFAIL) {
        size_t fp;
        /* Adjust to end on a page boundary */
        if (!is_page_aligned(base))
          ssize += (page_align((size_t)base) - (size_t)base);
        fp = m->footprint + ssize; /* recheck limits */
        if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
            (m->footprint_limit == 0 ||
             (fp > m->footprint && fp <= m->footprint_limit)) &&
            (br = (char*)(CALL_MORECORE(ssize))) == base) {
          tbase = base;
          tsize = ssize;
        }
      }
    }
    else {
      /* Subtract out existing available top space from MORECORE request. */
      ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
      /* Use mem here only if it did continuously extend old space */
      if (ssize < HALF_MAX_SIZE_T &&
          (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
        tbase = br;
        tsize = ssize;
      }
    }

    if (tbase == CMFAIL) {    /* Cope with partial failure */
      if (br != CMFAIL) {     /* Try to use/extend the space we did get */
        if (ssize < HALF_MAX_SIZE_T &&
            ssize < nb + SYS_ALLOC_PADDING) {
          size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
          if (esize < HALF_MAX_SIZE_T) {
            char* end = (char*)CALL_MORECORE(esize);
            if (end != CMFAIL)
              ssize += esize;
            else {            /* Can't use; try to release */
              (void) CALL_MORECORE(-ssize);
              br = CMFAIL;
            }
          }
        }
      }
      if (br != CMFAIL) {     /* Use the space we did get */
        tbase = br;
        tsize = ssize;
      }
      else
        disable_contiguous(m); /* Don't try contiguous path in the future */
    }

    RELEASE_MALLOC_GLOBAL_LOCK();
  }
#endif

  if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
    char* mp = (char*)(CALL_MMAP(asize));
    if (mp != CMFAIL) {
      tbase = mp;
      tsize = asize;
      mmap_flag = USE_MMAP_BIT;
    }
  }

#if 0
  if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
    if (asize < HALF_MAX_SIZE_T) {
      char* br = CMFAIL;
      char* end = CMFAIL;
      ACQUIRE_MALLOC_GLOBAL_LOCK();
      br = (char*)(CALL_MORECORE(asize));
      end = (char*)(CALL_MORECORE(0));
      RELEASE_MALLOC_GLOBAL_LOCK();
      if (br != CMFAIL && end != CMFAIL && br < end) {
        size_t ssize = end - br;
        if (ssize > nb + TOP_FOOT_SIZE) {
          tbase = br;
          tsize = ssize;
        }
      }
    }
  }
#endif

  if (tbase != CMFAIL) {

    if ((m->footprint += tsize) > m->max_footprint)
      m->max_footprint = m->footprint;

    if (!is_initialized(m)) { /* first-time initialization */
      /* ... (lines 2589-2601 not shown in this revision view) ... */
      /* Offset top by embedded malloc_state */
      mchunkptr mn = next_chunk(mem2chunk(m));
      init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
    }

    else {
      /* Try to merge with an existing segment */
      msegmentptr sp = &m->seg;
      /* Only consider most recent segment if traversal suppressed */
      while (sp != 0 && tbase != sp->base + sp->size)
        sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
      if (sp != 0 &&
          !is_extern_segment(sp) &&
          (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
          segment_holds(sp, m->top)) { /* append */
        sp->size += tsize;
        init_top(m, m->top, m->topsize + tsize);
      }
      else {
        if (tbase < m->least_addr)
          m->least_addr = tbase;
        sp = &m->seg;
        while (sp != 0 && sp->base != tbase + tsize)
          sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
        if (sp != 0 &&
            !is_extern_segment(sp) &&
            (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
          char* oldbase = sp->base;
          sp->base = tbase;
          sp->size += tsize;
          return prepend_alloc(m, tbase, oldbase, nb);
        }
        else
          add_segment(m, tbase, tsize, mmap_flag);
      }
    }

    if (nb < m->topsize) { /* Allocate from new or extended top space */
      size_t rsize = m->topsize -= nb;
      mchunkptr p = m->top;
      mchunkptr r = m->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      /* ... (lines 2645-2650 not shown in this revision view) ... */
    }
  }

  // MALLOC_FAILURE_ACTION;
  return 0;
}

/* ----------------------- system deallocation -------------------------- */

/* Unmap and unlink any mmapped segments that don't contain used chunks */
static size_t release_unused_segments(mstate m) {
  size_t released = 0;
  int nsegs = 0;
  msegmentptr pred = &m->seg;
  msegmentptr sp = pred->next;
  while (sp != 0) {
    char* base = sp->base;
    size_t size = sp->size;
    msegmentptr next = sp->next;
    ++nsegs;
    if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
      mchunkptr p = align_as_chunk(base);
      size_t psize = chunksize(p);
      /* Can unmap if first chunk holds entire segment and not pinned */
      if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
        tchunkptr tp = (tchunkptr)p;
        assert(segment_holds(sp, (char*)sp));
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        else {
          unlink_large_chunk(m, tp);
        }
        if (CALL_MUNMAP(base, size) == 0) {
          released += size;
          m->footprint -= size;
          /* ... (lines 2686-2695 not shown in this revision view) ... */
        }
      }
    }
    if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
      break;
    pred = sp;
    sp = next;
  }
  /* Reset check counter */
  m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
                       (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
  return released;
}

static int sys_trim(mstate m, size_t pad) {
  size_t released = 0;
  ensure_initialization();
  if (pad < MAX_REQUEST && is_initialized(m)) {
    pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */

    if (m->topsize > pad) {
      /* Shrink top space in granularity-size units, keeping at least one */
      size_t unit = mparams.granularity;
      size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
                      SIZE_T_ONE) * unit;
      msegmentptr sp = segment_holding(m, (char*)m->top);

      if (!is_extern_segment(sp)) {
        if (is_mmapped_segment(sp)) {
          if (HAVE_MMAP &&
              sp->size >= extra &&
              !has_segment_link(m, sp)) { /* can't shrink if pinned */
            size_t newsize = sp->size - extra;
            (void)newsize; /* placate people compiling -Wunused-variable */
            /* Prefer mremap, fall back to munmap */
            if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
                (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
              released = extra;
            }
          }
        }
        else if (HAVE_MORECORE) {
          if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
            extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
          ACQUIRE_MALLOC_GLOBAL_LOCK();
          {
            /* Make sure end of memory is where we last set it. */
            char* old_br = (char*)(CALL_MORECORE(0));
            if (old_br == sp->base + sp->size) {
              char* rel_br = (char*)(CALL_MORECORE(-extra));
              char* new_br = (char*)(CALL_MORECORE(0));
              if (rel_br != CMFAIL && new_br < old_br)
                released = old_br - new_br;
            }
          }
          RELEASE_MALLOC_GLOBAL_LOCK();
        }
      }

      if (released != 0) {
        sp->size -= released;
        /* ... (lines 2753-2765 not shown in this revision view) ... */
      }
    }
  }

  return (released != 0)? 1 : 0;
}

/* Consolidate and bin a chunk. Differs from exported versions
   of free mainly in that the chunk need not be marked as inuse.
*/
static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
  mchunkptr next = chunk_plus_offset(p, psize);
  if (!pinuse(p)) {
    mchunkptr prev;
    size_t prevsize = p->prev_foot;
    if (is_mmapped(p)) {
      psize += prevsize + MMAP_FOOT_PAD;
      if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
        m->footprint -= psize;
      return;
    }
    prev = chunk_minus_offset(p, prevsize);
    psize += prevsize;
    p = prev;
    if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
      if (p != m->dv) {
        unlink_chunk(m, p, prevsize);
      }
      else if ((next->head & INUSE_BITS) == INUSE_BITS) {
        m->dvsize = psize;
        set_free_with_pinuse(p, psize, next);
        return;
      }
    }
    else {
      CORRUPTION_ERROR_ACTION(m);
      return;
    }
  }
  if (RTCHECK(ok_address(m, next))) {
    if (!cinuse(next)) { /* consolidate forward */
      if (next == m->top) {
        size_t tsize = m->topsize += psize;
        m->top = p;
        p->head = tsize | PINUSE_BIT;
        if (p == m->dv) {
          m->dv = 0;
          m->dvsize = 0;
        }
        return;
      }
      else if (next == m->dv) {
        size_t dsize = m->dvsize += psize;
        m->dv = p;
        set_size_and_pinuse_of_free_chunk(p, dsize);
        return;
      }
      else {
        size_t nsize = chunksize(next);
        psize += nsize;
        unlink_chunk(m, next, nsize);
        set_size_and_pinuse_of_free_chunk(p, psize);
        if (p == m->dv) {
          m->dvsize = psize;
          return;
        }
      }
    }
    else {
      set_free_with_pinuse(p, psize, next);
    }
    insert_chunk(m, p, psize);
  }
  else {
    CORRUPTION_ERROR_ACTION(m);
  }
}

/* ---------------------------- malloc --------------------------- */

/* ... (lines 2844-2913 not shown in this revision view; this region
   holds tmalloc_large) ... */

/* allocate a small request from the best fitting chunk in a treebin */
static void* tmalloc_small(mstate m, size_t nb) {
  tchunkptr t, v;
  size_t rsize;
  bindex_t i;
  binmap_t leastbit = least_bit(m->treemap);
  /* ... (lines 2921-2947 not shown in this revision view) ... */

  CORRUPTION_ERROR_ACTION(m);
  return 0;
}

/* --------------------------- memalign support -------------------------- */

static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
  if (alignment <= MALLOC_ALIGNMENT)  /* Can just use malloc */
    return internal_malloc(m, bytes);
  if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
    alignment = MIN_CHUNK_SIZE;
  if ((alignment & (alignment-SIZE_T_ONE)) != 0) { /* Ensure a power of 2 */
    size_t a = MALLOC_ALIGNMENT << 1;
    while (a < alignment) a <<= 1;
    alignment = a;
  }

  if (bytes >= MAX_REQUEST - alignment) {
    if (m != 0) { /* Test isn't needed but avoids compiler warning */
      // MALLOC_FAILURE_ACTION;
    }
  }
  else {
    size_t nb = request2size(bytes);
    size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
    char* mem = (char*)internal_malloc(m, req);
    if (mem != 0) {
      void* leader = 0;
      void* trailer = 0;
      mchunkptr p = mem2chunk(mem);

      PREACTION(m);

      if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
        /*
          Find an aligned spot inside chunk. Since we need to give
          back leading space in a chunk of at least MIN_CHUNK_SIZE, if
          the first calculation places us at a spot with less than
          MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
          We've allocated enough total room so that this is always
          possible.
        */
        char* br = (char*)mem2chunk((size_t)(((size_t)(mem + alignment -
                                                       SIZE_T_ONE)) &
                                             -alignment));
        char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
          br : br+alignment;
        mchunkptr newp = (mchunkptr)pos;
        size_t leadsize = pos - (char*)(p);
        size_t newsize = chunksize(p) - leadsize;

        if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
          newp->prev_foot = p->prev_foot + leadsize;
          newp->head = newsize;
        }
        else { /* Otherwise, give back leader, use the rest */
          set_inuse(m, newp, newsize);
          set_inuse(m, p, leadsize);
          leader = chunk2mem(p);
        }
        p = newp;
      }

      /* Give back spare room at the end */
      if (!is_mmapped(p)) {
        size_t size = chunksize(p);
        if (size > nb + MIN_CHUNK_SIZE) {
          size_t remainder_size = size - nb;
          mchunkptr remainder = chunk_plus_offset(p, nb);
          set_inuse(m, p, nb);
          set_inuse(m, remainder, remainder_size);
          trailer = chunk2mem(remainder);
        }
      }

      assert(chunksize(p) >= nb);
      assert((((size_t)(chunk2mem(p))) % alignment) == 0);
      check_inuse_chunk(m, p);
      POSTACTION(m);
      if (leader != 0) {
        internal_free(m, leader);
      }
      if (trailer != 0) {
        internal_free(m, trailer);
      }
      return chunk2mem(p);
    }
  }
  return 0;
}

void* memalign(size_t alignment, size_t bytes)
{
  return internal_memalign(gm, alignment, bytes);
}
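/* Usage note (an addition to this listing): internal_memalign rounds any
   non-power-of-two alignment up and over-allocates by alignment +
   MIN_CHUNK_SIZE so that an aligned chunk can always be split out; the
   caller only ever sees the aligned pointer. A minimal sketch, compiled
   out with #if 0: */
#if 0
static void memalign_example(void) {
  void* p = memalign(64, 100);           /* 64 is already a power of two */
  assert(((size_t)p & (size_t)63) == 0); /* payload is 64-byte aligned */
  free(p);                               /* leader/trailer were already
                                            returned to the bins */
}
#endif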
void* malloc(size_t bytes)
{
  /* ... (lines 2955-2979 not shown in this revision view) ... */

  PREACTION(gm);
  {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        /* ... (lines 2996-2998 not shown in this revision view) ... */
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          /* ... (lines 3012-3015 not shown in this revision view) ... */
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    /* ... (lines 3047-3061, the designated-victim path, not shown in
       this revision view) ... */
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      /* ... (lines 3073-3078 not shown in this revision view) ... */
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    // printf("%s %p %d\n", __FUNCTION__, mem, bytes);
    return mem;
  }
  // FAIL();
  return 0;
}

/* ---------------------------- free --------------------------- */

void free(void* mem)
{
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin. Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  // dbgprintf("%s %p\n", __FUNCTION__, mem);

  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    PREACTION(fm);
    {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) { /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          /* ... (lines 3183-3194, the treebin insertion path, not shown
             in this revision view) ... */
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }

  // LEAVE();

#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}
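/* Illustrative sketch (an addition, not part of the original file): with
   FOOTERS disabled, every free() resolves to the global state gm, so a
   round trip looks like the block below. Requests at or below
   MAX_SMALL_REQUEST take the smallbin fast path in malloc() above, while
   requests past mparams.mmap_threshold may be mmapped directly and are
   munmapped again on free. Compiled out with #if 0. */
#if 0
static void malloc_free_example(void) {
  void* p = malloc(128);        /* small: served from smallbin, dv, or top */
  void* q = malloc(512 * 1024); /* large: may be mapped via CALL_MMAP */
  free(q);                      /* mmapped chunk is returned to the system */
  free(p);                      /* small chunk consolidates with neighbors */
}
#endif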