modules/up/src/Core/gnu/malloc.c
FUNCTIONS
This source file includes the following functions.
- MALLOC_ZERO
- MALLOC_COPY
- MALLOC_ZERO
- MALLOC_COPY
- AlignPage
- makeGmListElement
- gcleanup
- findRegion
- wsbrk
- chunk2mem
- mem2chunk
- request2size
- aligned_OK
- next_chunk
- prev_chunk
- chunk_at_offset
- inuse
- prev_inuse
- chunk_is_mmapped
- set_inuse
- clear_inuse
- inuse_bit_at_offset
- set_inuse_bit_at_offset
- clear_inuse_bit_at_offset
- chunksize
- set_head_size
- set_head
- set_foot
- bin_at
- next_bin
- prev_bin
- IAV
- first
- last
- bin_index
- smallbin_index
- is_small_request
- idx2binblock
- mark_binblock
- clear_binblock
- do_check_chunk
- do_check_free_chunk
- do_check_inuse_chunk
- do_check_malloced_chunk
- check_free_chunk
- check_inuse_chunk
- check_chunk
- check_malloced_chunk
- check_free_chunk
- check_inuse_chunk
- check_chunk
- check_malloced_chunk
- frontlink
- unlink
- link_last_remainder
- mmap_chunk
- munmap_chunk
- mremap_chunk
- malloc_extend_top
- mALLOc
- fREe
- rEALLOc
- mEMALIGn
- vALLOc
- pvALLOc
- cALLOc
- cfree
- malloc_trim
- malloc_usable_size
- malloc_update_mallinfo
- malloc_stats
- mALLINFo
- mALLOPt
1 /* ---------- To make a malloc.h, start cutting here ------------ */
2
3 /*
4 A version of malloc/free/realloc written by Doug Lea and released to the
5 public domain. Send questions/comments/complaints/performance data
6 to dl@cs.oswego.edu
7
8 * VERSION 2.6.5 Wed Jun 17 15:55:16 1998 Doug Lea (dl at gee)
9
10 Note: There may be an updated version of this malloc obtainable at
11 ftp://g.oswego.edu/pub/misc/malloc.c
12 Check before installing!
13
14 Note: This version differs from 2.6.4 only by correcting a
15 statement ordering error that could cause failures only
16 when calls to this malloc are interposed with calls to
17 other memory allocators.
18
19 * Why use this malloc?
20
21 This is not the fastest, most space-conserving, most portable, or
22 most tunable malloc ever written. However it is among the fastest
23 while also being among the most space-conserving, portable and tunable.
24 Consistent balance across these factors results in a good general-purpose
25 allocator. For a high-level description, see
26 http://g.oswego.edu/dl/html/malloc.html
27
28 * Synopsis of public routines
29
30 (Much fuller descriptions are contained in the program documentation below.)
31
32 malloc(size_t n);
33 Return a pointer to a newly allocated chunk of at least n bytes, or null
34 if no space is available.
35 free(Void_t* p);
36 Release the chunk of memory pointed to by p, or no effect if p is null.
37 realloc(Void_t* p, size_t n);
38 Return a pointer to a chunk of size n that contains the same data
39 as does chunk p up to the minimum of (n, p's size) bytes, or null
40 if no space is available. The returned pointer may or may not be
41 the same as p. If p is null, equivalent to malloc. Unless the
42 #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
43 size argument of zero (re)allocates a minimum-sized chunk.
44 memalign(size_t alignment, size_t n);
45 Return a pointer to a newly allocated chunk of n bytes, aligned
46 in accord with the alignment argument, which must be a power of
47 two.
48 valloc(size_t n);
49 Equivalent to memalign(pagesize, n), where pagesize is the page
50 size of the system (or as near to this as can be figured out from
51 all the includes/defines below.)
52 pvalloc(size_t n);
53 Equivalent to valloc(minimum-page-that-holds(n)), that is,
54 round up n to nearest pagesize.
55 calloc(size_t unit, size_t quantity);
56 Returns a pointer to quantity * unit bytes, with all locations
57 set to zero.
58 cfree(Void_t* p);
59 Equivalent to free(p).
60 malloc_trim(size_t pad);
61 Release all but pad bytes of freed top-most memory back
62 to the system. Return 1 if successful, else 0.
63 malloc_usable_size(Void_t* p);
64      Report the number of usable allocated bytes associated with allocated
65 chunk p. This may or may not report more bytes than were requested,
66 due to alignment and minimum size constraints.
67 malloc_stats();
68 Prints brief summary statistics on stderr.
69 mallinfo()
70 Returns (by copy) a struct containing various summary statistics.
71 mallopt(int parameter_number, int parameter_value)
72 Changes one of the tunable parameters described below. Returns
73 1 if successful in changing the parameter, else 0.
74
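  As an illustrative sketch (not part of the original documentation),
  typical use of these routines looks like:

      void* p = malloc(100);              at least 100 usable bytes
      p = realloc(p, 200);                may move; contents preserved
      void* q = memalign(64, 50);         64-byte aligned
      struct mallinfo mi = mallinfo();    summary statistics, by copy
      free(q);
      free(p);
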
75 * Vital statistics:
76
77 Alignment: 8-byte
78 8 byte alignment is currently hardwired into the design. This
79 seems to suffice for all current machines and C compilers.
80
81 Assumed pointer representation: 4 or 8 bytes
82 Code for 8-byte pointers is untested by me but has worked
83 reliably by Wolfram Gloger, who contributed most of the
84 changes supporting this.
85
86 Assumed size_t representation: 4 or 8 bytes
87 Note that size_t is allowed to be 4 bytes even if pointers are 8.
88
89 Minimum overhead per allocated chunk: 4 or 8 bytes
90 Each malloced chunk has a hidden overhead of 4 bytes holding size
91 and status information.
92
93 Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead)
94                             8-byte ptrs:  24/32 bytes (including 4/8 overhead)
95
96 When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
97 ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
98 needed; 4 (8) for a trailing size field
99 and 8 (16) bytes for free list pointers. Thus, the minimum
100 allocatable size is 16/24/32 bytes.
101
102 Even a request for zero bytes (i.e., malloc(0)) returns a
103 pointer to something of the minimum allocatable size.
104
105 Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes
106 8-byte size_t: 2^63 - 16 bytes
107
108 It is assumed that (possibly signed) size_t bit values suffice to
109 represent chunk sizes. `Possibly signed' is due to the fact
110 that `size_t' may be defined on a system as either a signed or
111 an unsigned type. To be conservative, values that would appear
112 as negative numbers are avoided.
113 Requests for sizes with a negative sign bit will return a
114 minimum-sized chunk.
115
116 Maximum overhead wastage per allocated chunk: normally 15 bytes
117
118       Alignment demands, plus the minimum allocatable size restriction
119 make the normal worst-case wastage 15 bytes (i.e., up to 15
120 more bytes will be allocated than were requested in malloc), with
121 two exceptions:
122 1. Because requests for zero bytes allocate non-zero space,
123 the worst case wastage for a request of zero bytes is 24 bytes.
124 2. For requests >= mmap_threshold that are serviced via
125 mmap(), the worst case wastage is 8 bytes plus the remainder
126 from a system page (the minimal mmap unit); typically 4096 bytes.
127
128 * Limitations
129
130 Here are some features that are NOT currently supported
131
132 * No user-definable hooks for callbacks and the like.
133 * No automated mechanism for fully checking that all accesses
134 to malloced memory stay within their bounds.
135 * No support for compaction.
136
137 * Synopsis of compile-time options:
138
139 People have reported using previous versions of this malloc on all
140 versions of Unix, sometimes by tweaking some of the defines
141 below. It has been tested most extensively on Solaris and
142 Linux. It is also reported to work on WIN32 platforms.
143 People have also reported adapting this malloc for use in
144 stand-alone embedded systems.
145
146 The implementation is in straight, hand-tuned ANSI C. Among other
147 consequences, it uses a lot of macros. Because of this, to be at
148 all usable, this code should be compiled using an optimizing compiler
149 (for example gcc -O2) that can simplify expressions and control
150 paths.
151
152 __STD_C (default: derived from C compiler defines)
153 Nonzero if using ANSI-standard C compiler, a C++ compiler, or
154 a C compiler sufficiently close to ANSI to get away with it.
155 DEBUG (default: NOT defined)
156 Define to enable debugging. Adds fairly extensive assertion-based
157 checking to help track down memory errors, but noticeably slows down
158 execution.
159 REALLOC_ZERO_BYTES_FREES (default: NOT defined)
160 Define this if you think that realloc(p, 0) should be equivalent
161 to free(p). Otherwise, since malloc returns a unique pointer for
162 malloc(0), so does realloc(p, 0).
163 HAVE_MEMCPY (default: defined)
164 Define if you are not otherwise using ANSI STD C, but still
165 have memcpy and memset in your C library and want to use them.
166 Otherwise, simple internal versions are supplied.
167 USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
168 Define as 1 if you want the C library versions of memset and
169 memcpy called in realloc and calloc (otherwise macro versions are used).
170 At least on some platforms, the simple macro versions usually
171 outperform libc versions.
172 HAVE_MMAP (default: defined as 1)
173 Define to non-zero to optionally make malloc() use mmap() to
174 allocate very large blocks.
175 HAVE_MREMAP (default: defined as 0 unless Linux libc set)
176 Define to non-zero to optionally make realloc() use mremap() to
177 reallocate very large blocks.
178 malloc_getpagesize (default: derived from system #includes)
179 Either a constant or routine call returning the system page size.
180 HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
181 Optionally define if you are on a system with a /usr/include/malloc.h
182 that declares struct mallinfo. It is not at all necessary to
183 define this even if you do, but will ensure consistency.
184 INTERNAL_SIZE_T (default: size_t)
185 Define to a 32-bit type (probably `unsigned int') if you are on a
186 64-bit machine, yet do not want or need to allow malloc requests of
187 greater than 2^31 to be handled. This saves space, especially for
188 very small chunks.
189 INTERNAL_LINUX_C_LIB (default: NOT defined)
190 Defined only when compiled as part of Linux libc.
191 Also note that there is some odd internal name-mangling via defines
192 (for example, internally, `malloc' is named `mALLOc') needed
193 when compiling in this case. These look funny but don't otherwise
194 affect anything.
195 WIN32 (default: undefined)
196 Define this on MS win (95, nt) platforms to compile in sbrk emulation.
197 LACKS_UNISTD_H (default: undefined)
198 Define this if your system does not have a <unistd.h>.
199 MORECORE (default: sbrk)
200 The name of the routine to call to obtain more memory from the system.
201 MORECORE_FAILURE (default: -1)
202 The value returned upon failure of MORECORE.
203 MORECORE_CLEARS (default 1)
204 True (1) if the routine mapped to MORECORE zeroes out memory (which
205 holds for sbrk).
206 DEFAULT_TRIM_THRESHOLD
207 DEFAULT_TOP_PAD
208 DEFAULT_MMAP_THRESHOLD
209 DEFAULT_MMAP_MAX
210 Default values of tunable parameters (described in detail below)
211 controlling interaction with host system routines (sbrk, mmap, etc).
212 These values may also be changed dynamically via mallopt(). The
213 preset defaults are those that give best performance for typical
214 programs/systems.
215
216
217 */
218
219
220
221
222 /* Preliminaries */
223
224 #ifndef __STD_C
225 #ifdef __STDC__
226 #define __STD_C 1
227 #else
228 #if __cplusplus
229 #define __STD_C 1
230 #else
231 #define __STD_C 0
232 #endif /*__cplusplus*/
233 #endif /*__STDC__*/
234 #endif /*__STD_C*/
235
236 #ifndef Void_t
237 #if __STD_C
238 #define Void_t void
239 #else
240 #define Void_t char
241 #endif
242 #endif /*Void_t*/
243
244 #if __STD_C
245 #include <stddef.h> /* for size_t */
246 #else
247 #include <sys/types.h>
248 #endif
249
250 #ifdef __cplusplus
251 extern "C" {
252 #endif
253
254 #include <stdio.h> /* needed for malloc_stats */
255
256
257 /*
258 Compile-time options
259 */
260
261
262 /*
263 Debugging:
264
265 Because freed chunks may be overwritten with link fields, this
266 malloc will often die when freed memory is overwritten by user
267 programs. This can be very effective (albeit in an annoying way)
268 in helping track down dangling pointers.
269
270 If you compile with -DDEBUG, a number of assertion checks are
271 enabled that will catch more memory errors. You probably won't be
272 able to make much sense of the actual assertion errors, but they
273 should help you locate incorrectly overwritten memory. The
274 checking is fairly extensive, and will slow down execution
275 noticeably. Calling malloc_stats or mallinfo with DEBUG set will
276 attempt to check every non-mmapped allocated and free chunk in the
277   course of computing the summaries. (By nature, mmapped regions
278 cannot be checked very much automatically.)
279
280 Setting DEBUG may also be helpful if you are trying to modify
281 this code. The assertions in the check routines spell out in more
282 detail the assumptions and invariants underlying the algorithms.
283
284 */
285
286 #if DEBUG
287 #include <assert.h>
288 #else
289 #define assert(x) ((void)0)
290 #endif
291
292
293 /*
294 INTERNAL_SIZE_T is the word-size used for internal bookkeeping
295 of chunk sizes. On a 64-bit machine, you can reduce malloc
296 overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
297 at the expense of not being able to handle requests greater than
298 2^31. This limitation is hardly ever a concern; you are encouraged
299 to set this. However, the default version is the same as size_t.
300 */
301
302 #ifndef INTERNAL_SIZE_T
303 #define INTERNAL_SIZE_T size_t
304 #endif
305
306 /*
307 REALLOC_ZERO_BYTES_FREES should be set if a call to
308 realloc with zero bytes should be the same as a call to free.
309 Some people think it should. Otherwise, since this malloc
310 returns a unique pointer for malloc(0), so does realloc(p, 0).
311 */
312
313
314 /* #define REALLOC_ZERO_BYTES_FREES */
315
316
317 /*
318 WIN32 causes an emulation of sbrk to be compiled in
319 mmap-based options are not currently supported in WIN32.
320 */
321
322 /* #define WIN32 */
323 #ifdef WIN32
324 #define MORECORE wsbrk
325 #define HAVE_MMAP 0
326 #endif
327
328
329 /*
330 HAVE_MEMCPY should be defined if you are not otherwise using
331 ANSI STD C, but still have memcpy and memset in your C library
332 and want to use them in calloc and realloc. Otherwise simple
333 macro versions are defined here.
334
335 USE_MEMCPY should be defined as 1 if you actually want to
336 have memset and memcpy called. People report that the macro
337 versions are often enough faster than libc versions on many
338 systems that it is better to use them.
339
340 */
341
342 #define HAVE_MEMCPY
343
344 #ifndef USE_MEMCPY
345 #ifdef HAVE_MEMCPY
346 #define USE_MEMCPY 1
347 #else
348 #define USE_MEMCPY 0
349 #endif
350 #endif
351
352 #if (__STD_C || defined(HAVE_MEMCPY))
353
354 #if __STD_C
355 void* memset(void*, int, size_t);
356 void* memcpy(void*, const void*, size_t);
357 #else
358 Void_t* memset();
359 Void_t* memcpy();
360 #endif
361 #endif
362
363 #if USE_MEMCPY
364
365 /* The following macros are only invoked with (2n+1)-multiples of
366 INTERNAL_SIZE_T units, with a positive integer n. This is exploited
367 for fast inline execution when n is small. */
368
369 #define MALLOC_ZERO(charp, nbytes) \
370 do { \
371 INTERNAL_SIZE_T mzsz = (nbytes); \
372 if(mzsz <= 9*sizeof(mzsz)) { \
373 INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \
374 if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \
375 *mz++ = 0; \
376 if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \
377 *mz++ = 0; \
378 if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \
379 *mz++ = 0; }}} \
380 *mz++ = 0; \
381 *mz++ = 0; \
382 *mz = 0; \
383 } else memset((charp), 0, mzsz); \
384 } while(0)
385
386 #define MALLOC_COPY(dest,src,nbytes) \
387 do { \
388 INTERNAL_SIZE_T mcsz = (nbytes); \
389 if(mcsz <= 9*sizeof(mcsz)) { \
390 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \
391 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \
392 if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
393 *mcdst++ = *mcsrc++; \
394 if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
395 *mcdst++ = *mcsrc++; \
396 if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \
397 *mcdst++ = *mcsrc++; }}} \
398 *mcdst++ = *mcsrc++; \
399 *mcdst++ = *mcsrc++; \
400 *mcdst = *mcsrc ; \
401 } else memcpy(dest, src, mcsz); \
402 } while(0)
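
/*
  Illustrative sketch (not part of the original source): with n == 2,
  nbytes below is a (2n+1) == 5 multiple of INTERNAL_SIZE_T units, so
  MALLOC_COPY copies all five words inline without calling memcpy:

      INTERNAL_SIZE_T src[5], dst[5];
      MALLOC_COPY(dst, src, 5*sizeof(INTERNAL_SIZE_T));
*/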
403
404 #else /* !USE_MEMCPY */
405
406 /* Use Duff's device (an unrolled loop entered mid-body via switch
407    fall-through) for good zeroing/copying performance. */
407
408 #define MALLOC_ZERO(charp, nbytes) \
409 do { \
410 INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \
411 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
412 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
413 switch (mctmp) { \
414 case 0: for(;;) { *mzp++ = 0; \
415 case 7: *mzp++ = 0; \
416 case 6: *mzp++ = 0; \
417 case 5: *mzp++ = 0; \
418 case 4: *mzp++ = 0; \
419 case 3: *mzp++ = 0; \
420 case 2: *mzp++ = 0; \
421 case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \
422 } \
423 } while(0)
424
425 #define MALLOC_COPY(dest,src,nbytes) \
426 do { \
427 INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \
428 INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \
429 long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \
430 if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \
431 switch (mctmp) { \
432 case 0: for(;;) { *mcdst++ = *mcsrc++; \
433 case 7: *mcdst++ = *mcsrc++; \
434 case 6: *mcdst++ = *mcsrc++; \
435 case 5: *mcdst++ = *mcsrc++; \
436 case 4: *mcdst++ = *mcsrc++; \
437 case 3: *mcdst++ = *mcsrc++; \
438 case 2: *mcdst++ = *mcsrc++; \
439 case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \
440 } \
441 } while(0)
442
443 #endif
444
445
446 /*
447 Define HAVE_MMAP to optionally make malloc() use mmap() to
448 allocate very large blocks. These will be returned to the
449 operating system immediately after a free().
450 */
451
452 #ifndef HAVE_MMAP
453 #define HAVE_MMAP 1
454 #endif
455
456 /*
457 Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
458 large blocks. This is currently only possible on Linux with
459 kernel versions newer than 1.3.77.
460 */
461
462 #ifndef HAVE_MREMAP
463 #ifdef INTERNAL_LINUX_C_LIB
464 #define HAVE_MREMAP 1
465 #else
466 #define HAVE_MREMAP 0
467 #endif
468 #endif
469
470 #if HAVE_MMAP
471
472 #include <unistd.h>
473 #include <fcntl.h>
474 #include <sys/mman.h>
475
476 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
477 #define MAP_ANONYMOUS MAP_ANON
478 #endif
479
480 #endif /* HAVE_MMAP */
481
482 /*
483 Access to system page size. To the extent possible, this malloc
484 manages memory from the system in page-size units.
485
486 The following mechanics for getpagesize were adapted from
487 bsd/gnu getpagesize.h
488 */
489
490 #ifndef LACKS_UNISTD_H
491 # include <unistd.h>
492 #endif
493
494 #ifndef malloc_getpagesize
495 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
496 # ifndef _SC_PAGE_SIZE
497 # define _SC_PAGE_SIZE _SC_PAGESIZE
498 # endif
499 # endif
500 # ifdef _SC_PAGE_SIZE
501 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
502 # else
503 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
504 extern size_t getpagesize();
505 # define malloc_getpagesize getpagesize()
506 # else
507 # include <sys/param.h>
508 # ifdef EXEC_PAGESIZE
509 # define malloc_getpagesize EXEC_PAGESIZE
510 # else
511 # ifdef NBPG
512 # ifndef CLSIZE
513 # define malloc_getpagesize NBPG
514 # else
515 # define malloc_getpagesize (NBPG * CLSIZE)
516 # endif
517 # else
518 # ifdef NBPC
519 # define malloc_getpagesize NBPC
520 # else
521 # ifdef PAGESIZE
522 # define malloc_getpagesize PAGESIZE
523 # else
524 # define malloc_getpagesize (4096) /* just guess */
525 # endif
526 # endif
527 # endif
528 # endif
529 # endif
530 # endif
531 #endif
532
533
534
535 /*
536
537 This version of malloc supports the standard SVID/XPG mallinfo
538 routine that returns a struct containing the same kind of
539 information you can get from malloc_stats. It should work on
540 any SVID/XPG compliant system that has a /usr/include/malloc.h
541 defining struct mallinfo. (If you'd like to install such a thing
542 yourself, cut out the preliminary declarations as described above
543 and below and save them in a malloc.h file. But there's no
544 compelling reason to bother to do this.)
545
546 The main declaration needed is the mallinfo struct that is returned
547   (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
548 bunch of fields, most of which are not even meaningful in this
549   version of malloc. Some of these fields are instead filled by
550 mallinfo() with other numbers that might possibly be of interest.
551
552 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
553 /usr/include/malloc.h file that includes a declaration of struct
554 mallinfo. If so, it is included; else an SVID2/XPG2 compliant
555 version is declared below. These must be precisely the same for
556 mallinfo() to work.
557
558 */
559
560 /* #define HAVE_USR_INCLUDE_MALLOC_H */
561
562 #if HAVE_USR_INCLUDE_MALLOC_H
563 #include "/usr/include/malloc.h"
564 #else
565
566 /* SVID2/XPG mallinfo structure */
567
568 struct mallinfo {
569 int arena; /* total space allocated from system */
570 int ordblks; /* number of non-inuse chunks */
571 int smblks; /* unused -- always zero */
572 int hblks; /* number of mmapped regions */
573 int hblkhd; /* total space in mmapped regions */
574 int usmblks; /* unused -- always zero */
575 int fsmblks; /* unused -- always zero */
576 int uordblks; /* total allocated space */
577 int fordblks; /* total non-inuse space */
578 int keepcost; /* top-most, releasable (via malloc_trim) space */
579 };
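
/*
  Illustrative sketch (not part of the original source): querying the
  statistics described above.

      struct mallinfo mi = mallinfo();
      printf("arena: %d  in use: %d  free: %d  mmapped: %d\n",
             mi.arena, mi.uordblks, mi.fordblks, mi.hblkhd);
*/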
580
581 /* SVID2/XPG mallopt options */
582
583 #define M_MXFAST 1 /* UNUSED in this malloc */
584 #define M_NLBLKS 2 /* UNUSED in this malloc */
585 #define M_GRAIN 3 /* UNUSED in this malloc */
586 #define M_KEEP 4 /* UNUSED in this malloc */
587
588 #endif
589
590 /* mallopt options that actually do something */
591
592 #define M_TRIM_THRESHOLD -1
593 #define M_TOP_PAD -2
594 #define M_MMAP_THRESHOLD -3
595 #define M_MMAP_MAX -4
596
597
598
599 #ifndef DEFAULT_TRIM_THRESHOLD
600 #define DEFAULT_TRIM_THRESHOLD (128 * 1024)
601 #endif
602
603 /*
604 M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
605 to keep before releasing via malloc_trim in free().
606
607 Automatic trimming is mainly useful in long-lived programs.
608 Because trimming via sbrk can be slow on some systems, and can
609 sometimes be wasteful (in cases where programs immediately
610 afterward allocate more large chunks) the value should be high
611 enough so that your overall system performance would improve by
612       releasing this much memory.
613
614 The trim threshold and the mmap control parameters (see below)
615 can be traded off with one another. Trimming and mmapping are
616 two different ways of releasing unused memory back to the
617 system. Between these two, it is often possible to keep
618 system-level demands of a long-lived program down to a bare
619 minimum. For example, in one test suite of sessions measuring
620 the XF86 X server on Linux, using a trim threshold of 128K and a
621 mmap threshold of 192K led to near-minimal long term resource
622 consumption.
623
624 If you are using this malloc in a long-lived program, it should
625 pay to experiment with these values. As a rough guide, you
626       might set it to a value close to the average size of a process
627 (program) running on your system. Releasing this much memory
628 would allow such a process to run in memory. Generally, it's
629       worth it to tune for trimming rather than memory mapping when a
630 program undergoes phases where several large chunks are
631 allocated and released in ways that can reuse each other's
632 storage, perhaps mixed with phases where there are no such
633 chunks at all. And in well-behaved long-lived programs,
634 controlling release of large blocks via trimming versus mapping
635 is usually faster.
636
637 However, in most programs, these parameters serve mainly as
638 protection against the system-level effects of carrying around
639 massive amounts of unneeded memory. Since frequent calls to
640 sbrk, mmap, and munmap otherwise degrade performance, the default
641 parameters are set to relatively high values that serve only as
642 safeguards.
643
644 The default trim value is high enough to cause trimming only in
645 fairly extreme (by current memory consumption standards) cases.
646 It must be greater than page size to have any useful effect. To
647       disable trimming completely, you can set it to (unsigned long)(-1).
648
649
650 */
651
652
653 #ifndef DEFAULT_TOP_PAD
654 #define DEFAULT_TOP_PAD (0)
655 #endif
656
657 /*
658 M_TOP_PAD is the amount of extra `padding' space to allocate or
659 retain whenever sbrk is called. It is used in two ways internally:
660
661 * When sbrk is called to extend the top of the arena to satisfy
662 a new malloc request, this much padding is added to the sbrk
663 request.
664
665 * When malloc_trim is called automatically from free(),
666 it is used as the `pad' argument.
667
668 In both cases, the actual amount of padding is rounded
669 so that the end of the arena is always a system page boundary.
670
671 The main reason for using padding is to avoid calling sbrk so
672 often. Having even a small pad greatly reduces the likelihood
673 that nearly every malloc request during program start-up (or
674 after trimming) will invoke sbrk, which needlessly wastes
675 time.
676
677 Automatic rounding-up to page-size units is normally sufficient
678 to avoid measurable overhead, so the default is 0. However, in
679 systems where sbrk is relatively slow, it can pay to increase
680 this value, at the expense of carrying around more memory than
681 the program needs.
682
683 */
684
685
686 #ifndef DEFAULT_MMAP_THRESHOLD
687 #define DEFAULT_MMAP_THRESHOLD (128 * 1024)
688 #endif
689
690 /*
691
692 M_MMAP_THRESHOLD is the request size threshold for using mmap()
693 to service a request. Requests of at least this size that cannot
694 be allocated using already-existing space will be serviced via mmap.
695 (If enough normal freed space already exists it is used instead.)
696
697 Using mmap segregates relatively large chunks of memory so that
698 they can be individually obtained and released from the host
699 system. A request serviced through mmap is never reused by any
700 other request (at least not directly; the system may just so
701 happen to remap successive requests to the same locations).
702
703 Segregating space in this way has the benefit that mmapped space
704 can ALWAYS be individually released back to the system, which
705 helps keep the system level memory demands of a long-lived
706 program low. Mapped memory can never become `locked' between
707 other chunks, as can happen with normally allocated chunks, which
708       means that even trimming via malloc_trim would not release them.
709
710 However, it has the disadvantages that:
711
712 1. The space cannot be reclaimed, consolidated, and then
713 used to service later requests, as happens with normal chunks.
714 2. It can lead to more wastage because of mmap page alignment
715 requirements
716 3. It causes malloc performance to be more dependent on host
717 system memory management support routines which may vary in
718 implementation quality and may impose arbitrary
719 limitations. Generally, servicing a request via normal
720 malloc steps is faster than going through a system's mmap.
721
722 All together, these considerations should lead you to use mmap
723 only for relatively large requests.
724
725
726 */
727
728
729
730 #ifndef DEFAULT_MMAP_MAX
731 #if HAVE_MMAP
732 #define DEFAULT_MMAP_MAX (64)
733 #else
734 #define DEFAULT_MMAP_MAX (0)
735 #endif
736 #endif
737
738 /*
739 M_MMAP_MAX is the maximum number of requests to simultaneously
740 service using mmap. This parameter exists because:
741
742 1. Some systems have a limited number of internal tables for
743 use by mmap.
744 2. In most systems, overreliance on mmap can degrade overall
745 performance.
746 3. If a program allocates many large regions, it is probably
747 better off using normal sbrk-based allocation routines that
748 can reclaim and reallocate normal heap memory. Using a
749 small value allows transition into this mode after the
750 first few allocations.
751
752 Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
753 the default value is 0, and attempts to set it to non-zero values
754 in mallopt will fail.
755 */
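
/*
  Illustrative sketch (not part of the original source): each of the
  four tunables above can be changed at run time via mallopt(), which
  returns 1 on success and 0 on failure.

      mallopt(M_TRIM_THRESHOLD, 64*1024);    trim more aggressively
      mallopt(M_TOP_PAD,        32*1024);    pad each sbrk request
      mallopt(M_MMAP_THRESHOLD, 192*1024);   mmap only larger requests
      mallopt(M_MMAP_MAX,       16);         at most 16 mmapped chunks
*/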
756
757
758
759
760 /*
761
762 Special defines for linux libc
763
764 Except when compiled using these special defines for Linux libc
765 using weak aliases, this malloc is NOT designed to work in
766 multithreaded applications. No semaphores or other concurrency
767 control are provided to ensure that multiple malloc or free calls
768   don't run at the same time, which could be disastrous. A single
769 semaphore could be used across malloc, realloc, and free (which is
770 essentially the effect of the linux weak alias approach). It would
771 be hard to obtain finer granularity.
772
773 */
774
775
776 #ifdef INTERNAL_LINUX_C_LIB
777
778 #if __STD_C
779
780 Void_t * __default_morecore_init (ptrdiff_t);
781 Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
782
783 #else
784
785 Void_t * __default_morecore_init ();
786 Void_t *(*__morecore)() = __default_morecore_init;
787
788 #endif
789
790 #define MORECORE (*__morecore)
791 #define MORECORE_FAILURE 0
792 #define MORECORE_CLEARS 1
793
794 #else /* INTERNAL_LINUX_C_LIB */
795 /*
796 #if __STD_C
797 extern Void_t* sbrk(ptrdiff_t);
798 #else
799 extern Void_t* sbrk();
800 #endif
801 */
802 #ifndef MORECORE
803 #define MORECORE sbrk
804 #endif
805
806 #ifndef MORECORE_FAILURE
807 #define MORECORE_FAILURE -1
808 #endif
809
810 #ifndef MORECORE_CLEARS
811 #define MORECORE_CLEARS 1
812 #endif
813
814 #endif /* INTERNAL_LINUX_C_LIB */
815
816 #if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
817
818 #define cALLOc __libc_calloc
819 #define fREe __libc_free
820 #define mALLOc __libc_malloc
821 #define mEMALIGn __libc_memalign
822 #define rEALLOc __libc_realloc
823 #define vALLOc __libc_valloc
824 #define pvALLOc __libc_pvalloc
825 #define mALLINFo __libc_mallinfo
826 #define mALLOPt __libc_mallopt
827
828 #pragma weak calloc = __libc_calloc
829 #pragma weak free = __libc_free
830 #pragma weak cfree = __libc_free
831 #pragma weak malloc = __libc_malloc
832 #pragma weak memalign = __libc_memalign
833 #pragma weak realloc = __libc_realloc
834 #pragma weak valloc = __libc_valloc
835 #pragma weak pvalloc = __libc_pvalloc
836 #pragma weak mallinfo = __libc_mallinfo
837 #pragma weak mallopt = __libc_mallopt
838
839 #else
840
841
842 #define cALLOc calloc
843 #define fREe free
844 #define mALLOc malloc
845 #define mEMALIGn memalign
846 #define rEALLOc realloc
847 #define vALLOc valloc
848 #define pvALLOc pvalloc
849 #define mALLINFo mallinfo
850 #define mALLOPt mallopt
851
852 #endif
853
854 /* Public routines */
855
856 #if __STD_C
857
858 Void_t* mALLOc(size_t);
859 void fREe(Void_t*);
860 Void_t* rEALLOc(Void_t*, size_t);
861 Void_t* mEMALIGn(size_t, size_t);
862 Void_t* vALLOc(size_t);
863 Void_t* pvALLOc(size_t);
864 Void_t* cALLOc(size_t, size_t);
865 void cfree(Void_t*);
866 int malloc_trim(size_t);
867 size_t malloc_usable_size(Void_t*);
868 void malloc_stats();
869 int mALLOPt(int, int);
870 struct mallinfo mALLINFo(void);
871 #else
872 Void_t* mALLOc();
873 void fREe();
874 Void_t* rEALLOc();
875 Void_t* mEMALIGn();
876 Void_t* vALLOc();
877 Void_t* pvALLOc();
878 Void_t* cALLOc();
879 void cfree();
880 int malloc_trim();
881 size_t malloc_usable_size();
882 void malloc_stats();
883 int mALLOPt();
884 struct mallinfo mALLINFo();
885 #endif
886
887
888 #ifdef __cplusplus
889 }; /* end of extern "C" */
890 #endif
891
892 /* ---------- To make a malloc.h, end cutting here ------------ */
893
894
895 /*
896 Emulation of sbrk for WIN32
897 All code within the ifdef WIN32 is untested by me.
898 */
899
900
901 #ifdef WIN32
902
903 #define AlignPage(add) (((add) + (malloc_getpagesize-1)) & \
904                         ~(malloc_getpagesize-1))
905
906 /* reserve 64MB to ensure large contiguous space */
907 #define RESERVED_SIZE (1024*1024*64)
908 #define NEXT_SIZE (2048*1024)
909 #define TOP_MEMORY ((unsigned long)2*1024*1024*1024)
910
911 struct GmListElement;
912 typedef struct GmListElement GmListElement;
913
914 struct GmListElement
915 {
916 GmListElement* next;
917 void* base;
918 };
919
920 static GmListElement* head = 0;
921 static unsigned int gNextAddress = 0;
922 static unsigned int gAddressBase = 0;
923 static unsigned int gAllocatedSize = 0;
924
925 static
926 GmListElement* makeGmListElement (void* bas)
927 {
928 GmListElement* this;
929 this = (GmListElement*)(void*)LocalAlloc (0, sizeof (GmListElement));
930 ASSERT (this);
931 if (this)
932 {
933 this->base = bas;
934 this->next = head;
935 head = this;
936 }
937 return this;
938 }
939
940 void gcleanup ()
941 {
942 BOOL rval;
943 ASSERT ( (head == NULL) || (head->base == (void*)gAddressBase));
944 if (gAddressBase && (gNextAddress - gAddressBase))
945 {
946 rval = VirtualFree ((void*)gAddressBase,
947 gNextAddress - gAddressBase,
948 MEM_DECOMMIT);
949 ASSERT (rval);
950 }
951 while (head)
952 {
953 GmListElement* next = head->next;
954 rval = VirtualFree (head->base, 0, MEM_RELEASE);
955 ASSERT (rval);
956 LocalFree (head);
957 head = next;
958 }
959 }
960
961 static
962 void* findRegion (void* start_address, unsigned long size)
963 {
964 MEMORY_BASIC_INFORMATION info;
965 while ((unsigned long)start_address < TOP_MEMORY)
966 {
967 VirtualQuery (start_address, &info, sizeof (info));
968 if (info.State != MEM_FREE)
969 start_address = (char*)info.BaseAddress + info.RegionSize;
970 else if (info.RegionSize >= size)
971 return start_address;
972 else
973 start_address = (char*)info.BaseAddress + info.RegionSize;
974 }
975 return NULL;
976
977 }
978
979
980 void* wsbrk (long size)
981 {
982 void* tmp;
983 if (size > 0)
984 {
985 if (gAddressBase == 0)
986 {
987 gAllocatedSize = max (RESERVED_SIZE, AlignPage (size));
988 gNextAddress = gAddressBase =
989 (unsigned int)VirtualAlloc (NULL, gAllocatedSize,
990 MEM_RESERVE, PAGE_NOACCESS);
991 } else if (AlignPage (gNextAddress + size) > (gAddressBase +
992 gAllocatedSize))
993 {
994 long new_size = max (NEXT_SIZE, AlignPage (size));
995 void* new_address = (void*)(gAddressBase+gAllocatedSize);
996 do
997 {
998 new_address = findRegion (new_address, new_size);
999
1000 if (new_address == 0)
1001 return (void*)-1;
1002
1003 gAddressBase = gNextAddress =
1004 (unsigned int)VirtualAlloc (new_address, new_size,
1005 MEM_RESERVE, PAGE_NOACCESS);
1006 // repeat in case of race condition
1007 // The region that we found has been snagged
1008 // by another thread
1009 }
1010 while (gAddressBase == 0);
1011
1012 ASSERT (new_address == (void*)gAddressBase);
1013
1014 gAllocatedSize = new_size;
1015
1016 if (!makeGmListElement ((void*)gAddressBase))
1017 return (void*)-1;
1018 }
1019 if ((size + gNextAddress) > AlignPage (gNextAddress))
1020 {
1021 void* res;
1022 res = VirtualAlloc ((void*)AlignPage (gNextAddress),
1023 (size + gNextAddress -
1024 AlignPage (gNextAddress)),
1025 MEM_COMMIT, PAGE_READWRITE);
1026 if (res == 0)
1027 return (void*)-1;
1028 }
1029 tmp = (void*)gNextAddress;
1030 gNextAddress = (unsigned int)tmp + size;
1031 return tmp;
1032 }
1033 else if (size < 0)
1034 {
1035 unsigned int alignedGoal = AlignPage (gNextAddress + size);
1036 /* Trim by releasing the virtual memory */
1037 if (alignedGoal >= gAddressBase)
1038 {
1039 VirtualFree ((void*)alignedGoal, gNextAddress - alignedGoal,
1040 MEM_DECOMMIT);
1041 gNextAddress = gNextAddress + size;
1042 return (void*)gNextAddress;
1043 }
1044 else
1045 {
1046 VirtualFree ((void*)gAddressBase, gNextAddress - gAddressBase,
1047 MEM_DECOMMIT);
1048 gNextAddress = gAddressBase;
1049 return (void*)-1;
1050 }
1051 }
1052 else
1053 {
1054 return (void*)gNextAddress;
1055 }
1056 }
1057
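/*
  Illustrative sketch (not part of the original source): wsbrk mirrors
  the sbrk interface it emulates.

      void* brk0 = wsbrk(0);            current break, no change
      char* p = (char*)wsbrk(4096);     extend by 4096 bytes
      wsbrk(-4096);                     release those bytes again

  Like sbrk, it returns (void*)-1 (MORECORE_FAILURE) when memory
  cannot be obtained.
*/
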
1058 #endif
1059
1060
1061
1062 /*
1063 Type declarations
1064 */
1065
1066
1067 struct malloc_chunk
1068 {
1069 INTERNAL_SIZE_T prev_size; /* Size of previous chunk (if free). */
1070 INTERNAL_SIZE_T size; /* Size in bytes, including overhead. */
1071 struct malloc_chunk* fd; /* double links -- used only if free. */
1072 struct malloc_chunk* bk;
1073 };
1074
1075 typedef struct malloc_chunk* mchunkptr;
1076
1077 /*
1078
1079 malloc_chunk details:
1080
1081 (The following includes lightly edited explanations by Colin Plumb.)
1082
1083 Chunks of memory are maintained using a `boundary tag' method as
1084 described in e.g., Knuth or Standish. (See the paper by Paul
1085 Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
1086 survey of such techniques.) Sizes of free chunks are stored both
1087 in the front of each chunk and at the end. This makes
1088 consolidating fragmented chunks into bigger chunks very fast. The
1089 size fields also hold bits representing whether chunks are free or
1090 in use.
1091
1092 An allocated chunk looks like this:
1093
1094
1095 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1096 | Size of previous chunk, if allocated | |
1097 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1098 | Size of chunk, in bytes |P|
1099 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1100 | User data starts here... .
1101 . .
1102 . (malloc_usable_space() bytes) .
1103 . |
1104 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1105 | Size of chunk |
1106 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1107
1108
1109 Where "chunk" is the front of the chunk for the purpose of most of
1110 the malloc code, but "mem" is the pointer that is returned to the
1111 user. "Nextchunk" is the beginning of the next contiguous chunk.
1112
1113     Chunks always begin on even word boundaries, so the mem portion
1114 (which is returned to the user) is also on an even word boundary, and
1115 thus double-word aligned.
1116
1117 Free chunks are stored in circular doubly-linked lists, and look like this:
1118
1119 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1120 | Size of previous chunk |
1121 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1122 `head:' | Size of chunk, in bytes |P|
1123 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1124 | Forward pointer to next chunk in list |
1125 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1126 | Back pointer to previous chunk in list |
1127 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1128 | Unused space (may be 0 bytes long) .
1129 . .
1130 . |
1131 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1132 `foot:' | Size of chunk, in bytes |
1133 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1134
1135 The P (PREV_INUSE) bit, stored in the unused low-order bit of the
1136 chunk size (which is always a multiple of two words), is an in-use
1137 bit for the *previous* chunk. If that bit is *clear*, then the
1138 word before the current chunk size contains the previous chunk
1139 size, and can be used to find the front of the previous chunk.
1140 (The very first chunk allocated always has this bit set,
1141 preventing access to non-existent (or non-owned) memory.)
1142
1143 Note that the `foot' of the current chunk is actually represented
1144 as the prev_size of the NEXT chunk. (This makes it easier to
1145 deal with alignments etc).
1146
1147 The two exceptions to all this are
1148
1149 1. The special chunk `top', which doesn't bother using the
1150 trailing size field since there is no
1151 next contiguous chunk that would have to index off it. (After
1152 initialization, `top' is forced to always exist. If it would
1153 become less than MINSIZE bytes long, it is replenished via
1154 malloc_extend_top.)
1155
1156 2. Chunks allocated via mmap, which have the second-lowest-order
1157 bit (IS_MMAPPED) set in their size fields. Because they are
1158 never merged or traversed from any other chunk, they have no
1159 foot size or inuse information.
1160
1161 Available chunks are kept in any of several places (all declared below):
1162
1163 * `av': An array of chunks serving as bin headers for consolidated
1164 chunks. Each bin is doubly linked. The bins are approximately
1165 proportionally (log) spaced. There are a lot of these bins
1166 (128). This may look excessive, but works very well in
1167 practice. All procedures maintain the invariant that no
1168 consolidated chunk physically borders another one. Chunks in
1169 bins are kept in size order, with ties going to the
1170 approximately least recently used chunk.
1171
1172 The chunks in each bin are maintained in decreasing sorted order by
1173 size. This is irrelevant for the small bins, which all contain
1174 the same-sized chunks, but facilitates best-fit allocation for
1175 larger chunks. (These lists are just sequential. Keeping them in
1176 order almost never requires enough traversal to warrant using
1177 fancier ordered data structures.) Chunks of the same size are
1178 linked with the most recently freed at the front, and allocations
1179 are taken from the back. This results in LRU or FIFO allocation
1180 order, which tends to give each chunk an equal opportunity to be
1181 consolidated with adjacent freed chunks, resulting in larger free
1182 chunks and less fragmentation.
1183
1184 * `top': The top-most available chunk (i.e., the one bordering the
1185 end of available memory) is treated specially. It is never
1186 included in any bin, is used only if no other chunk is
1187 available, and is released back to the system if it is very
1188 large (see M_TRIM_THRESHOLD).
1189
1190 * `last_remainder': A bin holding only the remainder of the
1191 most recently split (non-top) chunk. This bin is checked
1192 before other non-fitting chunks, so as to provide better
1193 locality for runs of sequentially allocated chunks.
1194
1195 * Implicitly, through the host system's memory mapping tables.
1196 If supported, requests greater than a threshold are usually
1197 serviced via calls to mmap, and then later released via munmap.
1198
1199 */
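
/*
  Illustrative sketch (not part of the original source): navigating
  boundary tags with the macros defined below.

      mchunkptr p = mem2chunk(mem);     header from a user pointer
      mchunkptr nxt = next_chunk(p);    next contiguous chunk
      if (!prev_inuse(p))               previous chunk is free, so its
        p = prev_chunk(p);              trailing prev_size is valid
*/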
1200
1201
1202
1203
1204
1205
1206 /* sizes, alignments */
1207
1208 #define SIZE_SZ (sizeof(INTERNAL_SIZE_T))
1209 #define MALLOC_ALIGNMENT (SIZE_SZ + SIZE_SZ)
1210 #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
1211 #define MINSIZE (sizeof(struct malloc_chunk))
1212
1213 /* conversion from malloc headers to user pointers, and back */
1214
1215 #define chunk2mem(p) ((Void_t*)((char*)(p) + 2*SIZE_SZ))
1216 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
1217
1218 /* pad request bytes into a usable size */
1219
1220 #define request2size(req) \
1221 (((long)((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) < \
1222 (long)(MINSIZE + MALLOC_ALIGN_MASK)) ? MINSIZE : \
1223 (((req) + (SIZE_SZ + MALLOC_ALIGN_MASK)) & ~(MALLOC_ALIGN_MASK)))
1224
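/*
  For example (illustrative, assuming a 4-byte INTERNAL_SIZE_T, so
  SIZE_SZ == 4, MALLOC_ALIGN_MASK == 7 and MINSIZE == 16):

      request2size(25) == (25 + 4 + 7) & ~7 == 32
      request2size(0)  == MINSIZE == 16    (since 0 + 4 + 7 < 16 + 7)
*/
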
1225 /* Check if m has acceptable alignment */
1226
1227 #define aligned_OK(m) (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)
1228
1229
1230
1231
1232 /*
1233 Physical chunk operations
1234 */
1235
1236
1237 /* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */
1238
1239 #define PREV_INUSE 0x1
1240
1241 /* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */
1242
1243 #define IS_MMAPPED 0x2
1244
1245 /* Bits to mask off when extracting size */
1246
1247 #define SIZE_BITS (PREV_INUSE|IS_MMAPPED)
1248
1249
1250 /* Ptr to next physical malloc_chunk. */
1251
1252 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))
1253
1254 /* Ptr to previous physical malloc_chunk */
1255
1256 #define prev_chunk(p)\
1257 ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))
1258
1259
1260 /* Treat space at ptr + offset as a chunk */
1261
1262 #define chunk_at_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1263
1264
1265
1266
1267 /*
1268 Dealing with use bits
1269 */
1270
1271 /* extract p's inuse bit */
1272
1273 #define inuse(p)\
1274 ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)
1275
1276 /* extract inuse bit of previous chunk */
1277
1278 #define prev_inuse(p) ((p)->size & PREV_INUSE)
1279
1280 /* check for mmap()'ed chunk */
1281
1282 #define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)
1283
1284 /* set/clear chunk as in use without otherwise disturbing */
1285
1286 #define set_inuse(p)\
1287 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE
1288
1289 #define clear_inuse(p)\
1290 ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)
1291
1292 /* check/set/clear inuse bits in known places */
1293
1294 #define inuse_bit_at_offset(p, s)\
1295 (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)
1296
1297 #define set_inuse_bit_at_offset(p, s)\
1298 (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)
1299
1300 #define clear_inuse_bit_at_offset(p, s)\
1301 (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
1302
1303
1304
1305
1306 /*
1307 Dealing with size fields
1308 */
1309
1310 /* Get size, ignoring use bits */
1311
1312 #define chunksize(p) ((p)->size & ~(SIZE_BITS))
1313
1314 /* Set size at head, without disturbing its use bit */
1315
1316 #define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))
1317
1318 /* Set size/use ignoring previous bits in header */
1319
1320 #define set_head(p, s) ((p)->size = (s))
1321
1322 /* Set size at footer (only when chunk is not in use) */
1323
1324 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
1325
1326
1327
1328
1329
1330 /*
1331 Bins
1332
1333 The bins, `av_' are an array of pairs of pointers serving as the
1334 heads of (initially empty) doubly-linked lists of chunks, laid out
1335 in a way so that each pair can be treated as if it were in a
1336 malloc_chunk. (This way, the fd/bk offsets for linking bin heads
1337 and chunks are the same).
1338
1339 Bins for sizes < 512 bytes contain chunks of all the same size, spaced
1340 8 bytes apart. Larger bins are approximately logarithmically
1341 spaced. (See the table below.) The `av_' array is never mentioned
1342 directly in the code, but instead via bin access macros.
1343
1344 Bin layout:
1345
1346 64 bins of size 8
1347 32 bins of size 64
1348 16 bins of size 512
1349 8 bins of size 4096
1350 4 bins of size 32768
1351 2 bins of size 262144
1352 1 bin of size what's left
1353
1354 There is actually a little bit of slop in the numbers in bin_index
1355 for the sake of speed. This makes no difference elsewhere.
1356
1357 The special chunks `top' and `last_remainder' get their own bins,
1358 (this is implemented via yet more trickery with the av_ array),
1359 although `top' is never properly linked to its bin since it is
1360 always handled specially.
1361
1362 */
1363
1364 #define NAV 128 /* number of bins */
1365
1366 typedef struct malloc_chunk* mbinptr;
1367
1368 /* access macros */
1369
1370 #define bin_at(i) ((mbinptr)((char*)&(av_[2*(i) + 2]) - 2*SIZE_SZ))
1371 #define next_bin(b) ((mbinptr)((char*)(b) + 2 * sizeof(mbinptr)))
1372 #define prev_bin(b) ((mbinptr)((char*)(b) - 2 * sizeof(mbinptr)))
1373
1374 /*
1375 The first 2 bins are never indexed. The corresponding av_ cells are instead
1376 used for bookkeeping. This is not to save space, but to simplify
1377 indexing, maintain locality, and avoid some initialization tests.
1378 */
1379
1380 #define top (bin_at(0)->fd) /* The topmost chunk */
1381 #define last_remainder (bin_at(1)) /* remainder from last split */
1382
1383
1384 /*
1385 Because top initially points to its own bin with initial
1386 zero size, thus forcing extension on the first malloc request,
1387 we avoid having any special code in malloc to check whether
1388    it even exists yet. But we still need to check in malloc_extend_top.
1389 */
1390
1391 #define initial_top ((mchunkptr)(bin_at(0)))
1392
1393 /* Helper macro to initialize bins */
1394
1395 #define IAV(i) bin_at(i), bin_at(i)
1396
1397 static mbinptr av_[NAV * 2 + 2] = {
1398 0, 0,
1399 IAV(0), IAV(1), IAV(2), IAV(3), IAV(4), IAV(5), IAV(6), IAV(7),
1400 IAV(8), IAV(9), IAV(10), IAV(11), IAV(12), IAV(13), IAV(14), IAV(15),
1401 IAV(16), IAV(17), IAV(18), IAV(19), IAV(20), IAV(21), IAV(22), IAV(23),
1402 IAV(24), IAV(25), IAV(26), IAV(27), IAV(28), IAV(29), IAV(30), IAV(31),
1403 IAV(32), IAV(33), IAV(34), IAV(35), IAV(36), IAV(37), IAV(38), IAV(39),
1404 IAV(40), IAV(41), IAV(42), IAV(43), IAV(44), IAV(45), IAV(46), IAV(47),
1405 IAV(48), IAV(49), IAV(50), IAV(51), IAV(52), IAV(53), IAV(54), IAV(55),
1406 IAV(56), IAV(57), IAV(58), IAV(59), IAV(60), IAV(61), IAV(62), IAV(63),
1407 IAV(64), IAV(65), IAV(66), IAV(67), IAV(68), IAV(69), IAV(70), IAV(71),
1408 IAV(72), IAV(73), IAV(74), IAV(75), IAV(76), IAV(77), IAV(78), IAV(79),
1409 IAV(80), IAV(81), IAV(82), IAV(83), IAV(84), IAV(85), IAV(86), IAV(87),
1410 IAV(88), IAV(89), IAV(90), IAV(91), IAV(92), IAV(93), IAV(94), IAV(95),
1411 IAV(96), IAV(97), IAV(98), IAV(99), IAV(100), IAV(101), IAV(102), IAV(103),
1412 IAV(104), IAV(105), IAV(106), IAV(107), IAV(108), IAV(109), IAV(110), IAV(111),
1413 IAV(112), IAV(113), IAV(114), IAV(115), IAV(116), IAV(117), IAV(118), IAV(119),
1414 IAV(120), IAV(121), IAV(122), IAV(123), IAV(124), IAV(125), IAV(126), IAV(127)
1415 };
1416
1417
1418
1419 /* field-extraction macros */
1420
1421 #define first(b) ((b)->fd)
1422 #define last(b) ((b)->bk)
1423
1424 /*
1425 Indexing into bins
1426 */
1427
1428 #define bin_index(sz) \
1429 (((((unsigned long)(sz)) >> 9) == 0) ? (((unsigned long)(sz)) >> 3): \
1430 ((((unsigned long)(sz)) >> 9) <= 4) ? 56 + (((unsigned long)(sz)) >> 6): \
1431 ((((unsigned long)(sz)) >> 9) <= 20) ? 91 + (((unsigned long)(sz)) >> 9): \
1432 ((((unsigned long)(sz)) >> 9) <= 84) ? 110 + (((unsigned long)(sz)) >> 12): \
1433 ((((unsigned long)(sz)) >> 9) <= 340) ? 119 + (((unsigned long)(sz)) >> 15): \
1434 ((((unsigned long)(sz)) >> 9) <= 1364) ? 124 + (((unsigned long)(sz)) >> 18): \
1435 126)
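
/*
  For example (illustrative):
      bin_index(48): 48 >> 9 == 0, so 48 >> 3 == 6 (a small bin)
      bin_index(1024): 1024 >> 9 == 2 <= 4, so 56 + (1024 >> 6) == 72
      bin_index(16384): 16384 >> 9 == 32 <= 84, so 110 + (16384 >> 12) == 114
*/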
1436 /*
1437 bins for chunks < 512 are all spaced 8 bytes apart, and hold
1438 identically sized chunks. This is exploited in malloc.
1439 */
1440
1441 #define MAX_SMALLBIN 63
1442 #define MAX_SMALLBIN_SIZE 512
1443 #define SMALLBIN_WIDTH 8
1444
1445 #define smallbin_index(sz) (((unsigned long)(sz)) >> 3)
1446
1447 /*
1448 Requests are `small' if both the corresponding and the next bin are small
1449 */
1450
1451 #define is_small_request(nb) (nb < MAX_SMALLBIN_SIZE - SMALLBIN_WIDTH)
1452
1453
1454
1455 /*
1456 To help compensate for the large number of bins, a one-level index
1457 structure is used for bin-by-bin searching. `binblocks' is a
1458 one-word bitvector recording whether groups of BINBLOCKWIDTH bins
1459 have any (possibly) non-empty bins, so they can be skipped over
1460    all at once during traversals. The bits are NOT always
1461 cleared as soon as all bins in a block are empty, but instead only
1462 when all are noticed to be empty during traversal in malloc.
1463 */
1464
1465 #define BINBLOCKWIDTH 4 /* bins per block */
1466
1467 #define binblocks (bin_at(0)->size) /* bitvector of nonempty blocks */
1468
1469 /* bin<->block macros */
1470
1471 #define idx2binblock(ix) ((unsigned)1 << (ix / BINBLOCKWIDTH))
1472 #define mark_binblock(ii) (binblocks |= idx2binblock(ii))
1473 #define clear_binblock(ii) (binblocks &= ~(idx2binblock(ii)))
1474
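/*
  For example (illustrative): with BINBLOCKWIDTH == 4, bins 8..11 all
  map to idx2binblock(8) == ((unsigned)1 << (8/4)) == 0x4, so
  mark_binblock(9) sets that bit and clear_binblock(9) clears it.
*/
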
1475
1476
1477
1478
1479 /* Other static bookkeeping data */
1480
1481 /* variables holding tunable values */
1482
1483 static unsigned long trim_threshold = DEFAULT_TRIM_THRESHOLD;
1484 static unsigned long top_pad = DEFAULT_TOP_PAD;
1485 static unsigned int n_mmaps_max = DEFAULT_MMAP_MAX;
1486 static unsigned long mmap_threshold = DEFAULT_MMAP_THRESHOLD;
1487
1488 /* The first value returned from sbrk */
1489 static char* sbrk_base = (char*)(-1);
1490
1491 /* The maximum memory obtained from system via sbrk */
1492 static unsigned long max_sbrked_mem = 0;
1493
1494 /* The maximum via either sbrk or mmap */
1495 static unsigned long max_total_mem = 0;
1496
1497 /* internal working copy of mallinfo */
1498 static struct mallinfo current_mallinfo = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
1499
1500 /* The total memory obtained from system via sbrk */
1501 #define sbrked_mem (current_mallinfo.arena)
1502
1503 /* Tracking mmaps */
1504
1505 static unsigned int n_mmaps = 0;
1506 static unsigned int max_n_mmaps = 0;
1507 static unsigned long mmapped_mem = 0;
1508 static unsigned long max_mmapped_mem = 0;
1509
1510
1511
1512 /*
1513 Debugging support
1514 */
1515
1516 #if DEBUG
1517
1518
1519 /*
1520 These routines make a number of assertions about the states
1521 of data structures that should be true at all times. If any
1522 are not true, it's very likely that a user program has somehow
1523 trashed memory. (It's also possible that there is a coding error
1524 in malloc. In which case, please report it!)
1525 */
1526
1527 #if __STD_C
1528 static void do_check_chunk(mchunkptr p)
1529 #else
1530 static void do_check_chunk(p) mchunkptr p;
1531 #endif
1532 {
1533 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1534
1535 /* No checkable chunk is mmapped */
1536 assert(!chunk_is_mmapped(p));
1537
1538 /* Check for legal address ... */
1539 assert((char*)p >= sbrk_base);
1540 if (p != top)
1541 assert((char*)p + sz <= (char*)top);
1542 else
1543 assert((char*)p + sz <= sbrk_base + sbrked_mem);
1544
1545 }
1546
1547
1548 #if __STD_C
1549 static void do_check_free_chunk(mchunkptr p)
1550 #else
1551 static void do_check_free_chunk(p) mchunkptr p;
1552 #endif
1553 {
1554 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1555 mchunkptr next = chunk_at_offset(p, sz);
1556
1557 do_check_chunk(p);
1558
1559 /* Check whether it claims to be free ... */
1560 assert(!inuse(p));
1561
1562 /* Unless a special marker, must have OK fields */
1563 if ((long)sz >= (long)MINSIZE)
1564 {
1565 assert((sz & MALLOC_ALIGN_MASK) == 0);
1566 assert(aligned_OK(chunk2mem(p)));
1567 /* ... matching footer field */
1568 assert(next->prev_size == sz);
1569 /* ... and is fully consolidated */
1570 assert(prev_inuse(p));
1571 assert (next == top || inuse(next));
1572
1573 /* ... and has minimally sane links */
1574 assert(p->fd->bk == p);
1575 assert(p->bk->fd == p);
1576 }
1577 else /* markers are always of size SIZE_SZ */
1578 assert(sz == SIZE_SZ);
1579 }
1580
1581 #if __STD_C
1582 static void do_check_inuse_chunk(mchunkptr p)
1583 #else
1584 static void do_check_inuse_chunk(p) mchunkptr p;
1585 #endif
1586 {
1587 mchunkptr next = next_chunk(p);
1588 do_check_chunk(p);
1589
1590 /* Check whether it claims to be in use ... */
1591 assert(inuse(p));
1592
1593 /* ... and is surrounded by OK chunks.
1594 Since more things can be checked with free chunks than inuse ones,
1595 if an inuse chunk borders them and debug is on, it's worth doing them.
1596 */
1597 if (!prev_inuse(p))
1598 {
1599 mchunkptr prv = prev_chunk(p);
1600 assert(next_chunk(prv) == p);
1601 do_check_free_chunk(prv);
1602 }
1603 if (next == top)
1604 {
1605 assert(prev_inuse(next));
1606 assert(chunksize(next) >= MINSIZE);
1607 }
1608 else if (!inuse(next))
1609 do_check_free_chunk(next);
1610
1611 }
1612
1613 #if __STD_C
1614 static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
1615 #else
1616 static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
1617 #endif
1618 {
1619 INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
1620 long room = sz - s;
1621
1622 do_check_inuse_chunk(p);
1623
1624 /* Legal size ... */
1625 assert((long)sz >= (long)MINSIZE);
1626 assert((sz & MALLOC_ALIGN_MASK) == 0);
1627 assert(room >= 0);
1628 assert(room < (long)MINSIZE);
1629
1630 /* ... and alignment */
1631 assert(aligned_OK(chunk2mem(p)));
1632
1633
1634 /* ... and was allocated at front of an available chunk */
1635 assert(prev_inuse(p));
1636
1637 }
1638
1639
1640 #define check_free_chunk(P) do_check_free_chunk(P)
1641 #define check_inuse_chunk(P) do_check_inuse_chunk(P)
1642 #define check_chunk(P) do_check_chunk(P)
1643 #define check_malloced_chunk(P,N) do_check_malloced_chunk(P,N)
1644 #else
1645 #define check_free_chunk(P)
1646 #define check_inuse_chunk(P)
1647 #define check_chunk(P)
1648 #define check_malloced_chunk(P,N)
1649 #endif
1650
1651
1652
1653 /*
1654 Macro-based internal utilities
1655 */
1656
1657
1658 /*
1659 Linking chunks in bin lists.
1660 Call these only with variables, not arbitrary expressions, as arguments.
1661 */
1662
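/*
  The reason: as macros, these may evaluate each argument more than
  once.  A hypothetical misuse (not code from this file):

    unlink(next_chunk(p), bck, fwd);    wrong: next_chunk(p) would be
                                        re-evaluated on each use of P

    mchunkptr t = next_chunk(p);
    unlink(t, bck, fwd);                OK: t is a variable
*/
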
1663 /*
1664 Place chunk p of size s in its bin, in size order,
1665 putting it ahead of others of same size.
1666 */
1667
1668
1669 #define frontlink(P, S, IDX, BK, FD) \
1670 { \
1671 if (S < MAX_SMALLBIN_SIZE) \
1672 { \
1673 IDX = smallbin_index(S); \
1674 mark_binblock(IDX); \
1675 BK = bin_at(IDX); \
1676 FD = BK->fd; \
1677 P->bk = BK; \
1678 P->fd = FD; \
1679 FD->bk = BK->fd = P; \
1680 } \
1681 else \
1682 { \
1683 IDX = bin_index(S); \
1684 BK = bin_at(IDX); \
1685 FD = BK->fd; \
1686 if (FD == BK) mark_binblock(IDX); \
1687 else \
1688 { \
1689 while (FD != BK && S < chunksize(FD)) FD = FD->fd; \
1690 BK = FD->bk; \
1691 } \
1692 P->bk = BK; \
1693 P->fd = FD; \
1694 FD->bk = BK->fd = P; \
1695 } \
1696 }
1697
1698
1699 /* take a chunk off a list */
1700
1701 #define unlink(P, BK, FD) \
1702 { \
1703 BK = P->bk; \
1704 FD = P->fd; \
1705 FD->bk = BK; \
1706 BK->fd = FD; \
1707 } \
1708
1709 /* Place p as the last remainder */
1710
1711 #define link_last_remainder(P) \
1712 { \
1713 last_remainder->fd = last_remainder->bk = P; \
1714 P->fd = P->bk = last_remainder; \
1715 }
1716
1717 /* Clear the last_remainder bin */
1718
1719 #define clear_last_remainder \
1720 (last_remainder->fd = last_remainder->bk = last_remainder)
1721
1722
1723
1724
1725
1726
1727 /* Routines dealing with mmap(). */
1728
1729 #if HAVE_MMAP
1730
1731 #if __STD_C
1732 static mchunkptr mmap_chunk(size_t size)
1733 #else
1734 static mchunkptr mmap_chunk(size) size_t size;
1735 #endif
1736 {
1737 size_t page_mask = malloc_getpagesize - 1;
1738 mchunkptr p;
1739
1740 #ifndef MAP_ANONYMOUS
1741 static int fd = -1;
1742 #endif
1743
1744 if(n_mmaps >= n_mmaps_max) return 0; /* too many regions */
1745
1746 /* For mmapped chunks, the overhead is one SIZE_SZ unit larger, because
1747 * there is no following chunk whose prev_size field could be used.
1748 */
1749 size = (size + SIZE_SZ + page_mask) & ~page_mask;
1750
1751 #ifdef MAP_ANONYMOUS
1752 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE,
1753 MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
1754 #else /* !MAP_ANONYMOUS */
1755 if (fd < 0)
1756 {
1757 fd = open("/dev/zero", O_RDWR);
1758 if(fd < 0) return 0;
1759 }
1760 p = (mchunkptr)mmap(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
1761 #endif
1762
1763 if(p == (mchunkptr)-1) return 0;
1764
1765 n_mmaps++;
1766 if (n_mmaps > max_n_mmaps) max_n_mmaps = n_mmaps;
1767
1768   /* We demand that eight bytes into a page be 8-byte aligned. */
1769 assert(aligned_OK(chunk2mem(p)));
1770
1771 /* The offset to the start of the mmapped region is stored
1772 * in the prev_size field of the chunk; normally it is zero,
1773 * but that can be changed in memalign().
1774 */
1775 p->prev_size = 0;
1776 set_head(p, size|IS_MMAPPED);
1777
1778 mmapped_mem += size;
1779 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1780 max_mmapped_mem = mmapped_mem;
1781 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1782 max_total_mem = mmapped_mem + sbrked_mem;
1783 return p;
1784 }
1785
1786 #if __STD_C
1787 static void munmap_chunk(mchunkptr p)
1788 #else
1789 static void munmap_chunk(p) mchunkptr p;
1790 #endif
1791 {
1792 INTERNAL_SIZE_T size = chunksize(p);
1793 int ret;
1794
1795 assert (chunk_is_mmapped(p));
1796 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1797 assert((n_mmaps > 0));
1798 assert(((p->prev_size + size) & (malloc_getpagesize-1)) == 0);
1799
1800 n_mmaps--;
1801 mmapped_mem -= (size + p->prev_size);
1802
1803 ret = munmap((char *)p - p->prev_size, size + p->prev_size);
1804
1805 /* munmap returns non-zero on failure */
1806 assert(ret == 0);
1807 }
1808
1809 #if HAVE_MREMAP
1810
1811 #if __STD_C
1812 static mchunkptr mremap_chunk(mchunkptr p, size_t new_size)
1813 #else
1814 static mchunkptr mremap_chunk(p, new_size) mchunkptr p; size_t new_size;
1815 #endif
1816 {
1817 size_t page_mask = malloc_getpagesize - 1;
1818 INTERNAL_SIZE_T offset = p->prev_size;
1819 INTERNAL_SIZE_T size = chunksize(p);
1820 char *cp;
1821
1822 assert (chunk_is_mmapped(p));
1823 assert(! ((char*)p >= sbrk_base && (char*)p < sbrk_base + sbrked_mem));
1824 assert((n_mmaps > 0));
1825 assert(((size + offset) & (malloc_getpagesize-1)) == 0);
1826
1827 /* Note the extra SIZE_SZ overhead as in mmap_chunk(). */
1828 new_size = (new_size + offset + SIZE_SZ + page_mask) & ~page_mask;
1829
1830 cp = (char *)mremap((char *)p - offset, size + offset, new_size, 1);
1831
1832 if (cp == (char *)-1) return 0;
1833
1834 p = (mchunkptr)(cp + offset);
1835
1836 assert(aligned_OK(chunk2mem(p)));
1837
1838 assert((p->prev_size == offset));
1839 set_head(p, (new_size - offset)|IS_MMAPPED);
1840
1841 mmapped_mem -= size + offset;
1842 mmapped_mem += new_size;
1843 if ((unsigned long)mmapped_mem > (unsigned long)max_mmapped_mem)
1844 max_mmapped_mem = mmapped_mem;
1845 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1846 max_total_mem = mmapped_mem + sbrked_mem;
1847 return p;
1848 }
1849
1850 #endif /* HAVE_MREMAP */
1851
1852 #endif /* HAVE_MMAP */
1853
1854
1855
1856
1857 /*
1858 Extend the top-most chunk by obtaining memory from system.
1859 Main interface to sbrk (but see also malloc_trim).
1860 */
1861
1862 #if __STD_C
1863 static void malloc_extend_top(INTERNAL_SIZE_T nb)
1864 #else
1865 static void malloc_extend_top(nb) INTERNAL_SIZE_T nb;
1866 #endif
1867 {
1868 char* brk; /* return value from sbrk */
1869 INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of sbrked space */
1870 INTERNAL_SIZE_T correction; /* bytes for 2nd sbrk call */
1871 char* new_brk; /* return of 2nd sbrk call */
1872 INTERNAL_SIZE_T top_size; /* new size of top chunk */
1873
1874 mchunkptr old_top = top; /* Record state of old top */
1875 INTERNAL_SIZE_T old_top_size = chunksize(old_top);
1876 char* old_end = (char*)(chunk_at_offset(old_top, old_top_size));
1877
1878 /* Pad request with top_pad plus minimal overhead */
1879
1880 INTERNAL_SIZE_T sbrk_size = nb + top_pad + MINSIZE;
1881 unsigned long pagesz = malloc_getpagesize;
1882
1883 /* If not the first time through, round to preserve page boundary */
1884 /* Otherwise, we need to correct to a page size below anyway. */
1885   /* (We also correct below if there was an intervening foreign sbrk call.) */
1886
1887 if (sbrk_base != (char*)(-1))
1888 sbrk_size = (sbrk_size + (pagesz - 1)) & ~(pagesz - 1);
1889
1890 brk = (char*)(MORECORE (sbrk_size));
1891
1892 /* Fail if sbrk failed or if a foreign sbrk call killed our space */
1893 if (brk == (char*)(MORECORE_FAILURE) ||
1894 (brk < old_end && old_top != initial_top))
1895 return;
1896
1897 sbrked_mem += sbrk_size;
1898
1899 if (brk == old_end) /* can just add bytes to current top */
1900 {
1901 top_size = sbrk_size + old_top_size;
1902 set_head(top, top_size | PREV_INUSE);
1903 }
1904 else
1905 {
1906 if (sbrk_base == (char*)(-1)) /* First time through. Record base */
1907 sbrk_base = brk;
1908 else /* Someone else called sbrk(). Count those bytes as sbrked_mem. */
1909 sbrked_mem += brk - (char*)old_end;
1910
1911 /* Guarantee alignment of first new chunk made from this space */
1912 front_misalign = (unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK;
1913 if (front_misalign > 0)
1914 {
1915 correction = (MALLOC_ALIGNMENT) - front_misalign;
1916 brk += correction;
1917 }
1918 else
1919 correction = 0;
1920
1921 /* Guarantee the next brk will be at a page boundary */
1922 correction += pagesz - ((unsigned long)(brk + sbrk_size) & (pagesz - 1));
1923
1924 /* Allocate correction */
1925 new_brk = (char*)(MORECORE (correction));
1926 if (new_brk == (char*)(MORECORE_FAILURE)) return;
1927
1928 sbrked_mem += correction;
1929
1930 top = (mchunkptr)brk;
1931 top_size = new_brk - brk + correction;
1932 set_head(top, top_size | PREV_INUSE);
1933
1934 if (old_top != initial_top)
1935 {
1936
1937 /* There must have been an intervening foreign sbrk call. */
1938 /* A double fencepost is necessary to prevent consolidation */
1939
1940 /* If not enough space to do this, then user did something very wrong */
1941 if (old_top_size < MINSIZE)
1942 {
1943 set_head(top, PREV_INUSE); /* will force null return from malloc */
1944 return;
1945 }
1946
1947 /* Also keep size a multiple of MALLOC_ALIGNMENT */
1948 old_top_size = (old_top_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
1949 set_head_size(old_top, old_top_size);
1950 chunk_at_offset(old_top, old_top_size )->size =
1951 SIZE_SZ|PREV_INUSE;
1952 chunk_at_offset(old_top, old_top_size + SIZE_SZ)->size =
1953 SIZE_SZ|PREV_INUSE;
1954 /* If possible, release the rest. */
1955 if (old_top_size >= MINSIZE)
1956 fREe(chunk2mem(old_top));
1957 }
1958 }
1959
1960 if ((unsigned long)sbrked_mem > (unsigned long)max_sbrked_mem)
1961 max_sbrked_mem = sbrked_mem;
1962 if ((unsigned long)(mmapped_mem + sbrked_mem) > (unsigned long)max_total_mem)
1963 max_total_mem = mmapped_mem + sbrked_mem;
1964
1965 /* We always land on a page boundary */
1966 assert(((unsigned long)((char*)top + top_size) & (pagesz - 1)) == 0);
1967 }
1968
1969
1970
1971
1972 /* Main public routines */
1973
1974
1975 /*
1976   Malloc Algorithm:
1977
1978 The requested size is first converted into a usable form, `nb'.
1979 This currently means to add 4 bytes overhead plus possibly more to
1980 obtain 8-byte alignment and/or to obtain a size of at least
1981 MINSIZE (currently 16 bytes), the smallest allocatable size.
1982 (All fits are considered `exact' if they are within MINSIZE bytes.)
1983
1984   From there, the first of the following steps to succeed is taken:
1985
1986 1. The bin corresponding to the request size is scanned, and if
1987 a chunk of exactly the right size is found, it is taken.
1988
1989 2. The most recently remaindered chunk is used if it is big
1990 enough. This is a form of (roving) first fit, used only in
1991 the absence of exact fits. Runs of consecutive requests use
1992 the remainder of the chunk used for the previous such request
1993 whenever possible. This limited use of a first-fit style
1994 allocation strategy tends to give contiguous chunks
1995 coextensive lifetimes, which improves locality and can reduce
1996 fragmentation in the long run.
1997
1998 3. Other bins are scanned in increasing size order, using a
1999 chunk big enough to fulfill the request, and splitting off
2000 any remainder. This search is strictly by best-fit; i.e.,
2001 the smallest (with ties going to approximately the least
2002 recently used) chunk that fits is selected.
2003
2004 4. If large enough, the chunk bordering the end of memory
2005 (`top') is split off. (This use of `top' is in accord with
2006 the best-fit search rule. In effect, `top' is treated as
2007 larger (and thus less well fitting) than any other available
2008 chunk since it can be extended to be as large as necessary
2009        (up to system limitations).)
2010
2011 5. If the request size meets the mmap threshold and the
2012 system supports mmap, and there are few enough currently
2013 allocated mmapped regions, and a call to mmap succeeds,
2014 the request is allocated via direct memory mapping.
2015
2016 6. Otherwise, the top of memory is extended by
2017 obtaining more space from the system (normally using sbrk,
2018 but definable to anything else via the MORECORE macro).
2019 Memory is gathered from the system (in system page-sized
2020 units) in a way that allows chunks obtained across different
2021 sbrk calls to be consolidated, but does not require
2022 contiguous memory. Thus, it should be safe to intersperse
2023 mallocs with other sbrk calls.
2024
2025
2026   All allocations are made from the `lowest' part of any found
2027 chunk. (The implementation invariant is that prev_inuse is
2028 always true of any allocated chunk; i.e., that each allocated
2029 chunk borders either a previously allocated and still in-use chunk,
2030 or the base of its memory arena.)
2031
2032 */
2033
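/*
  A worked example of the conversion described above, assuming 4-byte
  INTERNAL_SIZE_T (so 4 bytes of overhead) and MINSIZE of 16; exact
  values follow from request2size:

    malloc(1)   ->  nb = 16    (1 + 4 = 5, raised to MINSIZE)
    malloc(10)  ->  nb = 16    (10 + 4 = 14, rounded up to 16)
    malloc(20)  ->  nb = 24    (20 + 4 = 24, already 8-byte aligned)
    malloc(100) ->  nb = 104   (100 + 4 = 104, already 8-byte aligned)
*/
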
2034 #if __STD_C
2035 Void_t* mALLOc(size_t bytes)
2036 #else
2037 Void_t* mALLOc(bytes) size_t bytes;
2038 #endif
2039 {
2040 mchunkptr victim; /* inspected/selected chunk */
2041 INTERNAL_SIZE_T victim_size; /* its size */
2042 int idx; /* index for bin traversal */
2043 mbinptr bin; /* associated bin */
2044 mchunkptr remainder; /* remainder from a split */
2045 long remainder_size; /* its size */
2046 int remainder_index; /* its bin index */
2047 unsigned long block; /* block traverser bit */
2048 int startidx; /* first bin of a traversed block */
2049 mchunkptr fwd; /* misc temp for linking */
2050 mchunkptr bck; /* misc temp for linking */
2051 mbinptr q; /* misc temp */
2052
2053 INTERNAL_SIZE_T nb = request2size(bytes); /* padded request size; */
2054
2055 /* Check for exact match in a bin */
2056
2057 if (is_small_request(nb)) /* Faster version for small requests */
2058 {
2059 idx = smallbin_index(nb);
2060
2061 /* No traversal or size check necessary for small bins. */
2062
2063 q = bin_at(idx);
2064 victim = last(q);
2065
2066 /* Also scan the next one, since it would have a remainder < MINSIZE */
2067 if (victim == q)
2068 {
2069 q = next_bin(q);
2070 victim = last(q);
2071 }
2072 if (victim != q)
2073 {
2074 victim_size = chunksize(victim);
2075 unlink(victim, bck, fwd);
2076 set_inuse_bit_at_offset(victim, victim_size);
2077 check_malloced_chunk(victim, nb);
2078 return chunk2mem(victim);
2079 }
2080
2081 idx += 2; /* Set for bin scan below. We've already scanned 2 bins. */
2082
2083 }
2084 else
2085 {
2086 idx = bin_index(nb);
2087 bin = bin_at(idx);
2088
2089 for (victim = last(bin); victim != bin; victim = victim->bk)
2090 {
2091 victim_size = chunksize(victim);
2092 remainder_size = victim_size - nb;
2093
2094 if (remainder_size >= (long)MINSIZE) /* too big */
2095 {
2096 --idx; /* adjust to rescan below after checking last remainder */
2097 break;
2098 }
2099
2100 else if (remainder_size >= 0) /* exact fit */
2101 {
2102 unlink(victim, bck, fwd);
2103 set_inuse_bit_at_offset(victim, victim_size);
2104 check_malloced_chunk(victim, nb);
2105 return chunk2mem(victim);
2106 }
2107 }
2108
2109 ++idx;
2110
2111 }
2112
2113 /* Try to use the last split-off remainder */
2114
2115 if ( (victim = last_remainder->fd) != last_remainder)
2116 {
2117 victim_size = chunksize(victim);
2118 remainder_size = victim_size - nb;
2119
2120 if (remainder_size >= (long)MINSIZE) /* re-split */
2121 {
2122 remainder = chunk_at_offset(victim, nb);
2123 set_head(victim, nb | PREV_INUSE);
2124 link_last_remainder(remainder);
2125 set_head(remainder, remainder_size | PREV_INUSE);
2126 set_foot(remainder, remainder_size);
2127 check_malloced_chunk(victim, nb);
2128 return chunk2mem(victim);
2129 }
2130
2131 clear_last_remainder;
2132
2133 if (remainder_size >= 0) /* exhaust */
2134 {
2135 set_inuse_bit_at_offset(victim, victim_size);
2136 check_malloced_chunk(victim, nb);
2137 return chunk2mem(victim);
2138 }
2139
2140 /* Else place in bin */
2141
2142 frontlink(victim, victim_size, remainder_index, bck, fwd);
2143 }
2144
2145 /*
2146 If there are any possibly nonempty big-enough blocks,
2147 search for best fitting chunk by scanning bins in blockwidth units.
2148 */
2149
2150 if ( (block = idx2binblock(idx)) <= binblocks)
2151 {
2152
2153 /* Get to the first marked block */
2154
2155 if ( (block & binblocks) == 0)
2156 {
2157 /* force to an even block boundary */
2158 idx = (idx & ~(BINBLOCKWIDTH - 1)) + BINBLOCKWIDTH;
2159 block <<= 1;
2160 while ((block & binblocks) == 0)
2161 {
2162 idx += BINBLOCKWIDTH;
2163 block <<= 1;
2164 }
2165 }
2166
2167 /* For each possibly nonempty block ... */
2168 for (;;)
2169 {
2170 startidx = idx; /* (track incomplete blocks) */
2171 q = bin = bin_at(idx);
2172
2173 /* For each bin in this block ... */
2174 do
2175 {
2176 /* Find and use first big enough chunk ... */
2177
2178 for (victim = last(bin); victim != bin; victim = victim->bk)
2179 {
2180 victim_size = chunksize(victim);
2181 remainder_size = victim_size - nb;
2182
2183 if (remainder_size >= (long)MINSIZE) /* split */
2184 {
2185 remainder = chunk_at_offset(victim, nb);
2186 set_head(victim, nb | PREV_INUSE);
2187 unlink(victim, bck, fwd);
2188 link_last_remainder(remainder);
2189 set_head(remainder, remainder_size | PREV_INUSE);
2190 set_foot(remainder, remainder_size);
2191 check_malloced_chunk(victim, nb);
2192 return chunk2mem(victim);
2193 }
2194
2195 else if (remainder_size >= 0) /* take */
2196 {
2197 set_inuse_bit_at_offset(victim, victim_size);
2198 unlink(victim, bck, fwd);
2199 check_malloced_chunk(victim, nb);
2200 return chunk2mem(victim);
2201 }
2202
2203 }
2204
2205 bin = next_bin(bin);
2206
2207 } while ((++idx & (BINBLOCKWIDTH - 1)) != 0);
2208
2209 /* Clear out the block bit. */
2210
2211 do /* Possibly backtrack to try to clear a partial block */
2212 {
2213 if ((startidx & (BINBLOCKWIDTH - 1)) == 0)
2214 {
2215 binblocks &= ~block;
2216 break;
2217 }
2218 --startidx;
2219 q = prev_bin(q);
2220 } while (first(q) == q);
2221
2222 /* Get to the next possibly nonempty block */
2223
2224 if ( (block <<= 1) <= binblocks && (block != 0) )
2225 {
2226 while ((block & binblocks) == 0)
2227 {
2228 idx += BINBLOCKWIDTH;
2229 block <<= 1;
2230 }
2231 }
2232 else
2233 break;
2234 }
2235 }
2236
2237
2238 /* Try to use top chunk */
2239
2240 /* Require that there be a remainder, ensuring top always exists */
2241 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2242 {
2243
2244 #if HAVE_MMAP
2245 /* If big and would otherwise need to extend, try to use mmap instead */
2246 if ((unsigned long)nb >= (unsigned long)mmap_threshold &&
2247 (victim = mmap_chunk(nb)) != 0)
2248 return chunk2mem(victim);
2249 #endif
2250
2251 /* Try to extend */
2252 malloc_extend_top(nb);
2253 if ( (remainder_size = chunksize(top) - nb) < (long)MINSIZE)
2254 return 0; /* propagate failure */
2255 }
2256
2257 victim = top;
2258 set_head(victim, nb | PREV_INUSE);
2259 top = chunk_at_offset(victim, nb);
2260 set_head(top, remainder_size | PREV_INUSE);
2261 check_malloced_chunk(victim, nb);
2262 return chunk2mem(victim);
2263
2264 }
2265
2266
2267
2268
2269 /*
2270
2271   free() algorithm:
2272
2273 cases:
2274
2275 1. free(0) has no effect.
2276
2277     2. If the chunk was allocated via mmap, it is released via munmap().
2278
2279 3. If a returned chunk borders the current high end of memory,
2280 it is consolidated into the top, and if the total unused
2281 topmost memory exceeds the trim threshold, malloc_trim is
2282 called.
2283
2284 4. Other chunks are consolidated as they arrive, and
2285 placed in corresponding bins. (This includes the case of
2286 consolidating with the current `last_remainder').
2287
2288 */
2289
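/*
  For illustration, a hypothetical sequence over three adjacent chunks
  (obtained from consecutive mallocs with no intervening activity):

    char* a = malloc(100);
    char* b = malloc(100);
    char* c = malloc(100);

    free(a);     case 4: binned; both neighbors still in use
    free(c);     case 3 or 4, depending on whether c borders top
    free(b);     consolidates backward with a's chunk and forward
                 with c's, leaving one large free chunk
*/
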
2290
2291 #if __STD_C
2292 void fREe(Void_t* mem)
2293 #else
2294 void fREe(mem) Void_t* mem;
2295 #endif
2296 {
2297 mchunkptr p; /* chunk corresponding to mem */
2298 INTERNAL_SIZE_T hd; /* its head field */
2299 INTERNAL_SIZE_T sz; /* its size */
2300 int idx; /* its bin index */
2301 mchunkptr next; /* next contiguous chunk */
2302 INTERNAL_SIZE_T nextsz; /* its size */
2303 INTERNAL_SIZE_T prevsz; /* size of previous contiguous chunk */
2304 mchunkptr bck; /* misc temp for linking */
2305 mchunkptr fwd; /* misc temp for linking */
2306 int islr; /* track whether merging with last_remainder */
2307
2308 if (mem == 0) /* free(0) has no effect */
2309 return;
2310
2311 p = mem2chunk(mem);
2312 hd = p->size;
2313
2314 #if HAVE_MMAP
2315 if (hd & IS_MMAPPED) /* release mmapped memory. */
2316 {
2317 munmap_chunk(p);
2318 return;
2319 }
2320 #endif
2321
2322 check_inuse_chunk(p);
2323
2324 sz = hd & ~PREV_INUSE;
2325 next = chunk_at_offset(p, sz);
2326 nextsz = chunksize(next);
2327
2328 if (next == top) /* merge with top */
2329 {
2330 sz += nextsz;
2331
2332 if (!(hd & PREV_INUSE)) /* consolidate backward */
2333 {
2334 prevsz = p->prev_size;
2335 p = chunk_at_offset(p, -prevsz);
2336 sz += prevsz;
2337 unlink(p, bck, fwd);
2338 }
2339
2340 set_head(p, sz | PREV_INUSE);
2341 top = p;
2342 if ((unsigned long)(sz) >= (unsigned long)trim_threshold)
2343 malloc_trim(top_pad);
2344 return;
2345 }
2346
2347 set_head(next, nextsz); /* clear inuse bit */
2348
2349 islr = 0;
2350
2351 if (!(hd & PREV_INUSE)) /* consolidate backward */
2352 {
2353 prevsz = p->prev_size;
2354 p = chunk_at_offset(p, -prevsz);
2355 sz += prevsz;
2356
2357 if (p->fd == last_remainder) /* keep as last_remainder */
2358 islr = 1;
2359 else
2360 unlink(p, bck, fwd);
2361 }
2362
2363 if (!(inuse_bit_at_offset(next, nextsz))) /* consolidate forward */
2364 {
2365 sz += nextsz;
2366
2367 if (!islr && next->fd == last_remainder) /* re-insert last_remainder */
2368 {
2369 islr = 1;
2370 link_last_remainder(p);
2371 }
2372 else
2373 unlink(next, bck, fwd);
2374 }
2375
2376
2377 set_head(p, sz | PREV_INUSE);
2378 set_foot(p, sz);
2379 if (!islr)
2380 frontlink(p, sz, idx, bck, fwd);
2381 }
2382
2383
2384
2385
2386
2387 /*
2388
2389 Realloc algorithm:
2390
2391 Chunks that were obtained via mmap cannot be extended or shrunk
2392 unless HAVE_MREMAP is defined, in which case mremap is used.
2393 Otherwise, if their reallocation is for additional space, they are
2394 copied. If for less, they are just left alone.
2395
2396 Otherwise, if the reallocation is for additional space, and the
2397 chunk can be extended, it is, else a malloc-copy-free sequence is
2398 taken. There are several different ways that a chunk could be
2399 extended. All are tried:
2400
2401     * Extending forward into following adjacent free chunk.
2402     * Shifting backwards, joining preceding adjacent space.
2403     * Both shifting backwards and extending forward.
2404     * Extending into newly sbrked space.
2405
2406 Unless the #define REALLOC_ZERO_BYTES_FREES is set, realloc with a
2407 size argument of zero (re)allocates a minimum-sized chunk.
2408
2409 If the reallocation is for less space, and the new request is for
2410 a `small' (<512 bytes) size, then the newly unused space is lopped
2411 off and freed.
2412
2413 The old unix realloc convention of allowing the last-free'd chunk
2414 to be used as an argument to realloc is no longer supported.
2415 I don't know of any programs still relying on this feature,
2416 and allowing it would also allow too many other incorrect
2417 usages of realloc to be sensible.
2418
2419
2420 */
2421
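/*
  Since failure is propagated (null is returned and the old chunk is
  left untouched), callers should keep the old pointer until success
  is known.  A minimal usage sketch:

    void* tmp = realloc(p, new_bytes);
    if (tmp != 0)
      p = tmp;          success: p may or may not have moved
    else
      ...               failure: p is still valid and unchanged
*/
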
2422
2423 #if __STD_C
2424 Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
2425 #else
2426 Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
2427 #endif
2428 {
2429 INTERNAL_SIZE_T nb; /* padded request size */
2430
2431 mchunkptr oldp; /* chunk corresponding to oldmem */
2432 INTERNAL_SIZE_T oldsize; /* its size */
2433
2434 mchunkptr newp; /* chunk to return */
2435 INTERNAL_SIZE_T newsize; /* its size */
2436 Void_t* newmem; /* corresponding user mem */
2437
2438 mchunkptr next; /* next contiguous chunk after oldp */
2439 INTERNAL_SIZE_T nextsize; /* its size */
2440
2441 mchunkptr prev; /* previous contiguous chunk before oldp */
2442 INTERNAL_SIZE_T prevsize; /* its size */
2443
2444 mchunkptr remainder; /* holds split off extra space from newp */
2445 INTERNAL_SIZE_T remainder_size; /* its size */
2446
2447 mchunkptr bck; /* misc temp for linking */
2448 mchunkptr fwd; /* misc temp for linking */
2449
2450 #ifdef REALLOC_ZERO_BYTES_FREES
2451 if (bytes == 0) { fREe(oldmem); return 0; }
2452 #endif
2453
2454
2455 /* realloc of null is supposed to be same as malloc */
2456 if (oldmem == 0) return mALLOc(bytes);
2457
2458 newp = oldp = mem2chunk(oldmem);
2459 newsize = oldsize = chunksize(oldp);
2460
2461
2462 nb = request2size(bytes);
2463
2464 #if HAVE_MMAP
2465 if (chunk_is_mmapped(oldp))
2466 {
2467 #if HAVE_MREMAP
2468 newp = mremap_chunk(oldp, nb);
2469 if(newp) return chunk2mem(newp);
2470 #endif
2471 /* Note the extra SIZE_SZ overhead. */
2472 if(oldsize - SIZE_SZ >= nb) return oldmem; /* do nothing */
2473 /* Must alloc, copy, free. */
2474 newmem = mALLOc(bytes);
2475 if (newmem == 0) return 0; /* propagate failure */
2476 MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ);
2477 munmap_chunk(oldp);
2478 return newmem;
2479 }
2480 #endif
2481
2482 check_inuse_chunk(oldp);
2483
2484 if ((long)(oldsize) < (long)(nb))
2485 {
2486
2487 /* Try expanding forward */
2488
2489 next = chunk_at_offset(oldp, oldsize);
2490 if (next == top || !inuse(next))
2491 {
2492 nextsize = chunksize(next);
2493
2494 /* Forward into top only if a remainder */
2495 if (next == top)
2496 {
2497 if ((long)(nextsize + newsize) >= (long)(nb + MINSIZE))
2498 {
2499 newsize += nextsize;
2500 top = chunk_at_offset(oldp, nb);
2501 set_head(top, (newsize - nb) | PREV_INUSE);
2502 set_head_size(oldp, nb);
2503 return chunk2mem(oldp);
2504 }
2505 }
2506
2507 /* Forward into next chunk */
2508 else if (((long)(nextsize + newsize) >= (long)(nb)))
2509 {
2510 unlink(next, bck, fwd);
2511 newsize += nextsize;
2512 goto split;
2513 }
2514 }
2515 else
2516 {
2517 next = 0;
2518 nextsize = 0;
2519 }
2520
2521 /* Try shifting backwards. */
2522
2523 if (!prev_inuse(oldp))
2524 {
2525 prev = prev_chunk(oldp);
2526 prevsize = chunksize(prev);
2527
2528 /* try forward + backward first to save a later consolidation */
2529
2530 if (next != 0)
2531 {
2532 /* into top */
2533 if (next == top)
2534 {
2535 if ((long)(nextsize + prevsize + newsize) >= (long)(nb + MINSIZE))
2536 {
2537 unlink(prev, bck, fwd);
2538 newp = prev;
2539 newsize += prevsize + nextsize;
2540 newmem = chunk2mem(newp);
2541 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2542 top = chunk_at_offset(newp, nb);
2543 set_head(top, (newsize - nb) | PREV_INUSE);
2544 set_head_size(newp, nb);
2545 return newmem;
2546 }
2547 }
2548
2549 /* into next chunk */
2550 else if (((long)(nextsize + prevsize + newsize) >= (long)(nb)))
2551 {
2552 unlink(next, bck, fwd);
2553 unlink(prev, bck, fwd);
2554 newp = prev;
2555 newsize += nextsize + prevsize;
2556 newmem = chunk2mem(newp);
2557 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2558 goto split;
2559 }
2560 }
2561
2562 /* backward only */
2563 if (prev != 0 && (long)(prevsize + newsize) >= (long)nb)
2564 {
2565 unlink(prev, bck, fwd);
2566 newp = prev;
2567 newsize += prevsize;
2568 newmem = chunk2mem(newp);
2569 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2570 goto split;
2571 }
2572 }
2573
2574 /* Must allocate */
2575
2576 newmem = mALLOc (bytes);
2577
2578 if (newmem == 0) /* propagate failure */
2579 return 0;
2580
2581 /* Avoid copy if newp is next chunk after oldp. */
2582 /* (This can only happen when new chunk is sbrk'ed.) */
2583
2584 if ( (newp = mem2chunk(newmem)) == next_chunk(oldp))
2585 {
2586 newsize += chunksize(newp);
2587 newp = oldp;
2588 goto split;
2589 }
2590
2591 /* Otherwise copy, free, and exit */
2592 MALLOC_COPY(newmem, oldmem, oldsize - SIZE_SZ);
2593 fREe(oldmem);
2594 return newmem;
2595 }
2596
2597
2598 split: /* split off extra room in old or expanded chunk */
2599
2600 if (newsize - nb >= MINSIZE) /* split off remainder */
2601 {
2602 remainder = chunk_at_offset(newp, nb);
2603 remainder_size = newsize - nb;
2604 set_head_size(newp, nb);
2605 set_head(remainder, remainder_size | PREV_INUSE);
2606 set_inuse_bit_at_offset(remainder, remainder_size);
2607 fREe(chunk2mem(remainder)); /* let free() deal with it */
2608 }
2609 else
2610 {
2611 set_head_size(newp, newsize);
2612 set_inuse_bit_at_offset(newp, newsize);
2613 }
2614
2615 check_inuse_chunk(newp);
2616 return chunk2mem(newp);
2617 }
2618
2619
2620
2621
2622 /*
2623
2624 memalign algorithm:
2625
2626 memalign requests more than enough space from malloc, finds a spot
2627 within that chunk that meets the alignment request, and then
2628 possibly frees the leading and trailing space.
2629
2630 The alignment argument must be a power of two. This property is not
2631 checked by memalign, so misuse may result in random runtime errors.
2632
2633 8-byte alignment is guaranteed by normal malloc calls, so don't
2634 bother calling memalign with an argument of 8 or less.
2635
2636 Overreliance on memalign is a sure way to fragment space.
2637
2638 */
2639
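/*
  A minimal usage sketch (the 64-byte alignment is an arbitrary
  power-of-two example):

    void* p = memalign(64, 1000);
    if (p != 0)
      assert(((unsigned long)p % 64) == 0);
*/
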
2640
2641 #if __STD_C
2642 Void_t* mEMALIGn(size_t alignment, size_t bytes)
2643 #else
2644 Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
2645 #endif
2646 {
2647 INTERNAL_SIZE_T nb; /* padded request size */
2648 char* m; /* memory returned by malloc call */
2649 mchunkptr p; /* corresponding chunk */
2650 char* brk; /* alignment point within p */
2651 mchunkptr newp; /* chunk to return */
2652 INTERNAL_SIZE_T newsize; /* its size */
2653   INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
2654 mchunkptr remainder; /* spare room at end to split off */
2655 long remainder_size; /* its size */
2656
2657 /* If need less alignment than we give anyway, just relay to malloc */
2658
2659 if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);
2660
2661 /* Otherwise, ensure that it is at least a minimum chunk size */
2662
2663 if (alignment < MINSIZE) alignment = MINSIZE;
2664
2665 /* Call malloc with worst case padding to hit alignment. */
2666
2667 nb = request2size(bytes);
2668 m = (char*)(mALLOc(nb + alignment + MINSIZE));
2669
2670 if (m == 0) return 0; /* propagate failure */
2671
2672 p = mem2chunk(m);
2673
2674 if ((((unsigned long)(m)) % alignment) == 0) /* aligned */
2675 {
2676 #if HAVE_MMAP
2677 if(chunk_is_mmapped(p))
2678 return chunk2mem(p); /* nothing more to do */
2679 #endif
2680 }
2681 else /* misaligned */
2682 {
2683 /*
2684 Find an aligned spot inside chunk.
2685 Since we need to give back leading space in a chunk of at
2686 least MINSIZE, if the first calculation places us at
2687 a spot with less than MINSIZE leader, we can move to the
2688 next aligned spot -- we've allocated enough total room so that
2689 this is always possible.
2690 */
2691
2692 brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) & -alignment);
2693 if ((long)(brk - (char*)(p)) < MINSIZE) brk = brk + alignment;
2694
2695 newp = (mchunkptr)brk;
2696 leadsize = brk - (char*)(p);
2697 newsize = chunksize(p) - leadsize;
2698
2699 #if HAVE_MMAP
2700 if(chunk_is_mmapped(p))
2701 {
2702 newp->prev_size = p->prev_size + leadsize;
2703 set_head(newp, newsize|IS_MMAPPED);
2704 return chunk2mem(newp);
2705 }
2706 #endif
2707
2708 /* give back leader, use the rest */
2709
2710 set_head(newp, newsize | PREV_INUSE);
2711 set_inuse_bit_at_offset(newp, newsize);
2712 set_head_size(p, leadsize);
2713 fREe(chunk2mem(p));
2714 p = newp;
2715
2716 assert (newsize >= nb && (((unsigned long)(chunk2mem(p))) % alignment) == 0);
2717 }
2718
2719 /* Also give back spare room at the end */
2720
2721 remainder_size = chunksize(p) - nb;
2722
2723 if (remainder_size >= (long)MINSIZE)
2724 {
2725 remainder = chunk_at_offset(p, nb);
2726 set_head(remainder, remainder_size | PREV_INUSE);
2727 set_head_size(p, nb);
2728 fREe(chunk2mem(remainder));
2729 }
2730
2731 check_inuse_chunk(p);
2732 return chunk2mem(p);
2733
2734 }
2735
2736
2737
2738
2739 /*
2740 valloc just invokes memalign with alignment argument equal
2741 to the page size of the system (or as near to this as can
2742 be figured out from all the includes/defines above.)
2743 */
2744
2745 #if __STD_C
2746 Void_t* vALLOc(size_t bytes)
2747 #else
2748 Void_t* vALLOc(bytes) size_t bytes;
2749 #endif
2750 {
2751 return mEMALIGn (malloc_getpagesize, bytes);
2752 }
2753
2754 /*
2755 pvalloc just invokes valloc for the nearest pagesize
2756   that will accommodate the request
2757 */
2758
2759
2760 #if __STD_C
2761 Void_t* pvALLOc(size_t bytes)
2762 #else
2763 Void_t* pvALLOc(bytes) size_t bytes;
2764 #endif
2765 {
2766 size_t pagesize = malloc_getpagesize;
2767 return mEMALIGn (pagesize, (bytes + pagesize - 1) & ~(pagesize - 1));
2768 }
2769
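/*
  For example, assuming a 4096-byte page (the actual value comes from
  malloc_getpagesize):

    pvalloc(1)     ->  memalign(4096, 4096)
    pvalloc(5000)  ->  memalign(4096, 8192)

  since (bytes + pagesize - 1) & ~(pagesize - 1) rounds the request up
  to the next multiple of the page size.
*/
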
2770 /*
2771
2772 calloc calls malloc, then zeroes out the allocated chunk.
2773
2774 */
2775
2776 #if __STD_C
2777 Void_t* cALLOc(size_t n, size_t elem_size)
2778 #else
2779 Void_t* cALLOc(n, elem_size) size_t n; size_t elem_size;
2780 #endif
2781 {
2782 mchunkptr p;
2783 INTERNAL_SIZE_T csz;
2784
2785 INTERNAL_SIZE_T sz = n * elem_size;
2786
2787   /* check if malloc_extend_top was called, in which case we don't need to clear */
2788 #if MORECORE_CLEARS
2789 mchunkptr oldtop = top;
2790 INTERNAL_SIZE_T oldtopsize = chunksize(top);
2791 #endif
2792 Void_t* mem = mALLOc (sz);
2793
2794 if (mem == 0)
2795 return 0;
2796 else
2797 {
2798 p = mem2chunk(mem);
2799
2800     /* Two optional cases in which clearing is not necessary */
2801
2802
2803 #if HAVE_MMAP
2804 if (chunk_is_mmapped(p)) return mem;
2805 #endif
2806
2807 csz = chunksize(p);
2808
2809 #if MORECORE_CLEARS
2810 if (p == oldtop && csz > oldtopsize)
2811 {
2812 /* clear only the bytes from non-freshly-sbrked memory */
2813 csz = oldtopsize;
2814 }
2815 #endif
2816
2817 MALLOC_ZERO(mem, csz - SIZE_SZ);
2818 return mem;
2819 }
2820 }
2821
2822 /*
2823
2824 cfree just calls free. It is needed/defined on some systems
2825 that pair it with calloc, presumably for odd historical reasons.
2826
2827 */
2828
2829 #if !defined(INTERNAL_LINUX_C_LIB) || !defined(__ELF__)
2830 #if __STD_C
2831 void cfree(Void_t *mem)
2832 #else
2833 void cfree(mem) Void_t *mem;
2834 #endif
2835 {
2836 free(mem);
2837 }
2838 #endif
2839
2840
2841
2842 /*
2843
2844 Malloc_trim gives memory back to the system (via negative
2845 arguments to sbrk) if there is unused memory at the `high' end of
2846 the malloc pool. You can call this after freeing large blocks of
2847 memory to potentially reduce the system-level memory requirements
2848 of a program. However, it cannot guarantee to reduce memory. Under
2849 some allocation patterns, some large free blocks of memory will be
2850 locked between two used chunks, so they cannot be given back to
2851 the system.
2852
2853 The `pad' argument to malloc_trim represents the amount of free
2854 trailing space to leave untrimmed. If this argument is zero,
2855 only the minimum amount of memory to maintain internal data
2856 structures will be left (one page or less). Non-zero arguments
2857 can be supplied to maintain enough trailing space to service
2858 future expected allocations without having to re-obtain memory
2859 from the system.
2860
2861 Malloc_trim returns 1 if it actually released any memory, else 0.
2862
2863 */
2864
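/*
  A usage sketch (sizes are arbitrary): after releasing a large
  working set, return trailing space to the system while keeping 64K
  of slack for future allocations:

    free(big_buffer);
    if (malloc_trim(64 * 1024))
      ...  some trailing memory was actually returned via sbrk  ...
*/
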
2865 #if __STD_C
2866 int malloc_trim(size_t pad)
2867 #else
2868 int malloc_trim(pad) size_t pad;
2869 #endif
2870 {
2871 long top_size; /* Amount of top-most memory */
2872 long extra; /* Amount to release */
2873 char* current_brk; /* address returned by pre-check sbrk call */
2874 char* new_brk; /* address returned by negative sbrk call */
2875
2876 unsigned long pagesz = malloc_getpagesize;
2877
2878 top_size = chunksize(top);
2879 extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
2880
2881 if (extra < (long)pagesz) /* Not enough memory to release */
2882 return 0;
2883
2884 else
2885 {
2886 /* Test to make sure no one else called sbrk */
2887 current_brk = (char*)(MORECORE (0));
2888 if (current_brk != (char*)(top) + top_size)
2889 return 0; /* Apparently we don't own memory; must fail */
2890
2891 else
2892 {
2893 new_brk = (char*)(MORECORE (-extra));
2894
2895 if (new_brk == (char*)(MORECORE_FAILURE)) /* sbrk failed? */
2896 {
2897 /* Try to figure out what we have */
2898 current_brk = (char*)(MORECORE (0));
2899 top_size = current_brk - (char*)top;
2900 if (top_size >= (long)MINSIZE) /* if not, we are very very dead! */
2901 {
2902 sbrked_mem = current_brk - sbrk_base;
2903 set_head(top, top_size | PREV_INUSE);
2904 }
2905 check_chunk(top);
2906 return 0;
2907 }
2908
2909 else
2910 {
2911 /* Success. Adjust top accordingly. */
2912 set_head(top, (top_size - extra) | PREV_INUSE);
2913 sbrked_mem -= extra;
2914 check_chunk(top);
2915 return 1;
2916 }
2917 }
2918 }
2919 }
2920
2921
2922
2923 /*
2924 malloc_usable_size:
2925
2926 This routine tells you how many bytes you can actually use in an
2927 allocated chunk, which may be more than you requested (although
2928 often not). You can use this many bytes without worrying about
2929 overwriting other allocated objects. Not a particularly great
2930 programming practice, but still sometimes useful.
2931
2932 */
2933
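/*
  A minimal sketch:

    char* p = malloc(100);
    size_t n = malloc_usable_size(p);    n >= 100 whenever p != 0

  All n bytes starting at p may be used, though relying on more than
  the 100 requested is not portable.
*/
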
2934 #if __STD_C
2935 size_t malloc_usable_size(Void_t* mem)
2936 #else
2937 size_t malloc_usable_size(mem) Void_t* mem;
2938 #endif
2939 {
2940 mchunkptr p;
2941 if (mem == 0)
2942 return 0;
2943 else
2944 {
2945 p = mem2chunk(mem);
2946 if(!chunk_is_mmapped(p))
2947 {
2948 if (!inuse(p)) return 0;
2949 check_inuse_chunk(p);
2950 return chunksize(p) - SIZE_SZ;
2951 }
2952 return chunksize(p) - 2*SIZE_SZ;
2953 }
2954 }
2955
2956
2957
2958
2959 /* Utility to update current_mallinfo for malloc_stats and mallinfo() */
2960
2961 static void malloc_update_mallinfo()
2962 {
2963 int i;
2964 mbinptr b;
2965 mchunkptr p;
2966 #if DEBUG
2967 mchunkptr q;
2968 #endif
2969
2970 INTERNAL_SIZE_T avail = chunksize(top);
2971 int navail = ((long)(avail) >= (long)MINSIZE)? 1 : 0;
2972
2973 for (i = 1; i < NAV; ++i)
2974 {
2975 b = bin_at(i);
2976 for (p = last(b); p != b; p = p->bk)
2977 {
2978 #if DEBUG
2979 check_free_chunk(p);
2980 for (q = next_chunk(p);
2981 q < top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
2982 q = next_chunk(q))
2983 check_inuse_chunk(q);
2984 #endif
2985 avail += chunksize(p);
2986 navail++;
2987 }
2988 }
2989
2990 current_mallinfo.ordblks = navail;
2991 current_mallinfo.uordblks = sbrked_mem - avail;
2992 current_mallinfo.fordblks = avail;
2993 current_mallinfo.hblks = n_mmaps;
2994 current_mallinfo.hblkhd = mmapped_mem;
2995 current_mallinfo.keepcost = chunksize(top);
2996
2997 }
2998
2999
3000
3001 /*
3002
3003 malloc_stats:
3004
3005   Prints on stderr the amount of space obtained from the system (both
3006 via sbrk and mmap), the maximum amount (which may be more than
3007 current if malloc_trim and/or munmap got called), the maximum
3008 number of simultaneous mmap regions used, and the current number
3009 of bytes allocated via malloc (or realloc, etc) but not yet
3010 freed. (Note that this is the number of bytes allocated, not the
3011 number requested. It will be larger than the number requested
3012 because of alignment and bookkeeping overhead.)
3013
3014 */
3015
3016 void malloc_stats()
3017 {
3018 malloc_update_mallinfo();
3019 fprintf(stderr, "max system bytes = %10u\n",
3020 (unsigned int)(max_total_mem));
3021 fprintf(stderr, "system bytes = %10u\n",
3022 (unsigned int)(sbrked_mem + mmapped_mem));
3023 fprintf(stderr, "in use bytes = %10u\n",
3024 (unsigned int)(current_mallinfo.uordblks + mmapped_mem));
3025 #if HAVE_MMAP
3026 fprintf(stderr, "max mmap regions = %10u\n",
3027 (unsigned int)max_n_mmaps);
3028 #endif
3029 }
3030
3031 /*
3032   mallinfo returns a copy of the updated current mallinfo.
3033 */
3034
3035 struct mallinfo mALLINFo()
3036 {
3037 malloc_update_mallinfo();
3038 return current_mallinfo;
3039 }
3040
3041
3042
3043
3044 /*
3045 mallopt:
3046
3047 mallopt is the general SVID/XPG interface to tunable parameters.
3048 The format is to provide a (parameter-number, parameter-value) pair.
3049 mallopt then sets the corresponding parameter to the argument
3050 value if it can (i.e., so long as the value is meaningful),
3051 and returns 1 if successful else 0.
3052
3053 See descriptions of tunable parameters above.
3054
3055 */
3056
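/*
  Usage sketch (the values are arbitrary examples):

    mallopt(M_TRIM_THRESHOLD, 256 * 1024);   trim when top exceeds 256K
    mallopt(M_TOP_PAD, 64 * 1024);           pad each sbrk request by 64K
    mallopt(M_MMAP_MAX, 0);                  never use mmap for requests
*/
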
3057 #if __STD_C
3058 int mALLOPt(int param_number, int value)
3059 #else
3060 int mALLOPt(param_number, value) int param_number; int value;
3061 #endif
3062 {
3063 switch(param_number)
3064 {
3065 case M_TRIM_THRESHOLD:
3066 trim_threshold = value; return 1;
3067 case M_TOP_PAD:
3068 top_pad = value; return 1;
3069 case M_MMAP_THRESHOLD:
3070 mmap_threshold = value; return 1;
3071 case M_MMAP_MAX:
3072 #if HAVE_MMAP
3073 n_mmaps_max = value; return 1;
3074 #else
3075 if (value != 0) return 0; else n_mmaps_max = value; return 1;
3076 #endif
3077
3078 default:
3079 return 0;
3080 }
3081 }
3082
3083 /*
3084
3085 History:
3086
3087 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
3088 * Fixed ordering problem with boundary-stamping
3089
3090 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
3091 * Added pvalloc, as recommended by H.J. Liu
3092 * Added 64bit pointer support mainly from Wolfram Gloger
3093 * Added anonymously donated WIN32 sbrk emulation
3094 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
3095 * malloc_extend_top: fix mask error that caused wastage after
3096 foreign sbrks
3097 * Add linux mremap support code from HJ Liu
3098
3099 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
3100 * Integrated most documentation with the code.
3101 * Add support for mmap, with help from
3102 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3103 * Use last_remainder in more cases.
3104 * Pack bins using idea from colin@nyx10.cs.du.edu
3105      * Use ordered bins instead of best-fit threshold
3106 * Eliminate block-local decls to simplify tracing and debugging.
3107 * Support another case of realloc via move into top
3108      * Fix error occurring when initial sbrk_base not word-aligned.
3109 * Rely on page size for units instead of SBRK_UNIT to
3110 avoid surprises about sbrk alignment conventions.
3111 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
3112 (raymond@es.ele.tue.nl) for the suggestion.
3113 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
3114 * More precautions for cases where other routines call sbrk,
3115 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
3116 * Added macros etc., allowing use in linux libc from
3117 H.J. Lu (hjl@gnu.ai.mit.edu)
3118 * Inverted this history list
3119
3120 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
3121 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
3122 * Removed all preallocation code since under current scheme
3123 the work required to undo bad preallocations exceeds
3124 the work saved in good cases for most test programs.
3125 * No longer use return list or unconsolidated bins since
3126 no scheme using them consistently outperforms those that don't
3127 given above changes.
3128 * Use best fit for very large chunks to prevent some worst-cases.
3129 * Added some support for debugging
3130
3131 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
3132 * Removed footers when chunks are in use. Thanks to
3133 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
3134
3135 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
3136 * Added malloc_trim, with help from Wolfram Gloger
3137 (wmglo@Dent.MED.Uni-Muenchen.DE).
3138
3139 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
3140
3141 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
3142 * realloc: try to expand in both directions
3143 * malloc: swap order of clean-bin strategy;
3144 * realloc: only conditionally expand backwards
3145 * Try not to scavenge used bins
3146 * Use bin counts as a guide to preallocation
3147 * Occasionally bin return list chunks in first scan
3148 * Add a few optimizations from colin@nyx10.cs.du.edu
3149
3150 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
3151 * faster bin computation & slightly different binning
3152 * merged all consolidations to one part of malloc proper
3153 (eliminating old malloc_find_space & malloc_clean_bin)
3154 * Scan 2 returns chunks (not just 1)
3155 * Propagate failure in realloc if malloc returns 0
3156 * Add stuff to allow compilation on non-ANSI compilers
3157 from kpv@research.att.com
3158
3159 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
3160 * removed potential for odd address access in prev_chunk
3161 * removed dependency on getpagesize.h
3162 * misc cosmetics and a bit more internal documentation
3163 * anticosmetics: mangled names in macros to evade debugger strangeness
3164 * tested on sparc, hp-700, dec-mips, rs6000
3165 with gcc & native cc (hp, dec only) allowing
3166 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
3167
3168 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
3169 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
3170 structure of old version, but most details differ.)
3171
3172 */
3173
3174