| /* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 4 -*- |
| * vim: set ts=8 sts=4 et sw=4 tw=99: |
| * This Source Code Form is subject to the terms of the Mozilla Public |
| * License, v. 2.0. If a copy of the MPL was not distributed with this |
| * file, You can obtain one at http://mozilla.org/MPL/2.0/. */ |
| |
| /* |
| * This code implements an incremental mark-and-sweep garbage collector, with |
| * most sweeping carried out in the background on a parallel thread. |
| * |
| * Full vs. zone GC |
| * ---------------- |
| * |
| * The collector can collect all zones at once, or a subset. These types of |
| * collection are referred to as a full GC and a zone GC respectively. |
| * |
| * The atoms zone is only collected in a full GC since objects in any zone may |
| * have pointers to atoms, and these are not recorded in the cross compartment |
| * pointer map. Also, the atoms zone is not collected if any thread has an |
| * AutoKeepAtoms instance on the stack, or there are any exclusive threads using |
| * the runtime. |
| * |
| * It is possible for an incremental collection that started out as a full GC to |
| * become a zone GC if new zones are created during the course of the |
| * collection. |
| * |
| * Incremental collection |
| * ---------------------- |
| * |
| * For a collection to be carried out incrementally the following conditions |
| * must be met: |
| * - the collection must be run by calling js::GCSlice() rather than js::GC() |
| * - the GC mode must have been set to JSGC_MODE_INCREMENTAL with |
| * JS_SetGCParameter() |
| * - no thread may have an AutoKeepAtoms instance on the stack |
| * |
| * The last condition is an engine-internal mechanism to ensure that incremental |
| * collection is not carried out without the correct barriers being implemented. |
| * For more information see 'Incremental marking' below. |
| * |
| * If the collection is not incremental, all foreground activity happens inside |
| * a single call to GC() or GCSlice(). However, the collection is not complete |
| * until the background sweeping activity has finished. |
| * |
| * An incremental collection proceeds as a series of slices, interleaved with |
| * mutator activity, i.e. running JavaScript code. Slices are limited by a time |
| * budget. The slice finishes as soon as possible after the requested time has |
| * passed. |
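| * |
| * As a rough sketch of how an embedding might drive this (the names and |
| * signatures below are illustrative, not the exact JSAPI): |
| * |
| *   js::GCSlice(rt, reason, 10);                // first slice, ~10 ms budget |
| *   while (rt->gc.isIncrementalGCInProgress()) { |
| *       RunSomeJavaScript();                    // mutator runs between slices |
| *       js::GCSlice(rt, reason, 10);            // another bounded slice |
| *   } |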
| * |
| * Collector states |
| * ---------------- |
| * |
| * The collector proceeds through the following states, the current state being |
| * held in JSRuntime::gcIncrementalState: |
| * |
| * - MARK_ROOTS - marks the stack and other roots |
| * - MARK - incrementally marks reachable things |
| * - SWEEP - sweeps zones in groups and continues marking unswept zones |
| * |
| * The MARK_ROOTS activity always takes place in the first slice. The next two |
| * states can take place over one or more slices. |
| * |
| * In other words an incremental collection proceeds like this: |
| * |
| * Slice 1: MARK_ROOTS: Roots pushed onto the mark stack. |
| * MARK: The mark stack is processed by popping an element, |
| * marking it, and pushing its children. |
| * |
| * ... JS code runs ... |
| * |
| * Slice 2: MARK: More mark stack processing. |
| * |
| * ... JS code runs ... |
| * |
| * Slice n-1: MARK: More mark stack processing. |
| * |
| * ... JS code runs ... |
| * |
| * Slice n: MARK: Mark stack is completely drained. |
| * SWEEP: Select first group of zones to sweep and sweep them. |
| * |
| * ... JS code runs ... |
| * |
| * Slice n+1: SWEEP: Mark objects in unswept zones that were newly |
| * identified as alive (see below). Then sweep more zone |
| * groups. |
| * |
| * ... JS code runs ... |
| * |
| * Slice n+2: SWEEP: Mark objects in unswept zones that were newly |
| * identified as alive. Then sweep more zone groups. |
| * |
| * ... JS code runs ... |
| * |
| * Slice m: SWEEP: Sweeping is finished, and background sweeping |
| * is started on the helper thread. |
| * |
| * ... JS code runs, remaining sweeping done on background thread ... |
| * |
| * When background sweeping finishes the GC is complete. |
| * |
| * Incremental marking |
| * ------------------- |
| * |
| * Incremental collection requires close collaboration with the mutator (i.e., |
| * JS code) to guarantee correctness. |
| * |
| * - During an incremental GC, if a memory location (except a root) is written |
| * to, then the value it previously held must be marked. Write barriers |
| * ensure this. |
| * |
| * - Any object that is allocated during incremental GC must start out marked. |
| * |
| * - Roots are marked in the first slice and hence don't need write barriers. |
| * Roots are things like the C stack and the VM stack. |
| * |
| * The problem that write barriers solve is that between slices the mutator can |
| * change the object graph. We must ensure that it cannot do this in such a way |
| * that makes us fail to mark a reachable object (marking an unreachable object |
| * is tolerable). |
| * |
| * We use a snapshot-at-the-beginning algorithm to do this. This means that we |
| * promise to mark at least everything that is reachable at the beginning of |
| * collection. To implement it we mark the old contents of every non-root memory |
| * location written to by the mutator while the collection is in progress, using |
| * write barriers. This is described in gc/Barrier.h. |
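| * |
| * As an illustrative sketch (the real barriers are implemented by the wrapper |
| * classes in gc/Barrier.h; the helper names here are made up), a pre-write |
| * barrier conceptually does the following: |
| * |
| *   void writeBarrierPre(JSObject** slot, JSObject* newValue) { |
| *       if (isIncrementalGCInProgress()) |
| *           markFromBarrier(*slot);   // mark the old value before it is lost |
| *       *slot = newValue;             // only then perform the actual write |
| *   } |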
| * |
| * Incremental sweeping |
| * -------------------- |
| * |
| * Sweeping is difficult to do incrementally because object finalizers must be |
| * run at the start of sweeping, before any mutator code runs. The reason is |
| * that some objects use their finalizers to remove themselves from caches. If |
| * mutator code were allowed to run after the start of sweeping, it could observe |
| * the state of the cache and create a new reference to an object that was just |
| * about to be destroyed. |
| * |
| * Sweeping all finalizable objects in one go would introduce long pauses, so |
| * instead sweeping is broken up into groups of zones. Zones which are not yet |
| * being swept are still marked, so the issue above does not apply. |
| * |
| * The order of sweeping is restricted by cross compartment pointers - for |
| * example say that object |a| from zone A points to object |b| in zone B and |
| * neither object was marked when we transitioned to the SWEEP phase. Imagine we |
| * sweep B first and then return to the mutator. It's possible that the mutator |
| * could cause |a| to become alive through a read barrier (perhaps it was a |
| * shape that was accessed via a shape table). Then we would need to mark |b|, |
| * which |a| points to, but |b| has already been swept. |
| * |
| * So if there is such a pointer then marking of zone B must not finish before |
| * marking of zone A. Pointers which form a cycle between zones therefore |
| * restrict those zones to being swept at the same time, and these are found |
| * using Tarjan's algorithm for finding the strongly connected components of a |
| * graph. |
| * |
| * GC things without finalizers, and things with finalizers that are able to run |
| * in the background, are swept on the background thread. This accounts for most |
| * of the sweeping work. |
| * |
| * Reset |
| * ----- |
| * |
| * During incremental collection it is possible, although unlikely, for |
| * conditions to change such that incremental collection is no longer safe. In |
| * this case, the collection is 'reset' by ResetIncrementalGC(). If we are in |
| * the mark state, this just stops marking, but if we have started sweeping |
| * already, we continue until we have swept the current zone group. Following a |
| * reset, a new non-incremental collection is started. |
| * |
| * Compacting GC |
| * ------------- |
| * |
| * Compacting GC happens at the end of a major GC as part of the last slice. |
| * There are three parts: |
| * |
| * - Arenas are selected for compaction. |
| * - The contents of those arenas are moved to new arenas. |
| * - All references to moved things are updated. |
| */ |
| |
| #include "jsgcinlines.h" |
| |
| #include "mozilla/ArrayUtils.h" |
| #include "mozilla/DebugOnly.h" |
| #include "mozilla/MacroForEach.h" |
| #include "mozilla/MemoryReporting.h" |
| #include "mozilla/Move.h" |
| |
| #include <ctype.h> |
| #include <string.h> |
| #if defined(STARBOARD) |
| #elif !defined(XP_WIN) |
| # include <sys/mman.h> |
| # include <unistd.h> |
| #endif |
| |
| #include "jsapi.h" |
| #include "jsatom.h" |
| #include "jscntxt.h" |
| #include "jscompartment.h" |
| #include "jsfriendapi.h" |
| #include "jsobj.h" |
| #include "jsprf.h" |
| #include "jsscript.h" |
| #include "jstypes.h" |
| #include "jsutil.h" |
| #include "jswatchpoint.h" |
| #include "jsweakmap.h" |
| #ifdef XP_WIN |
| # include "jswin.h" |
| #endif |
| |
| #include "gc/FindSCCs.h" |
| #include "gc/GCInternals.h" |
| #include "gc/GCTrace.h" |
| #include "gc/Marking.h" |
| #include "gc/Memory.h" |
| #include "jit/BaselineJIT.h" |
| #include "jit/IonCode.h" |
| #include "jit/JitcodeMap.h" |
| #include "js/SliceBudget.h" |
| #include "proxy/DeadObjectProxy.h" |
| #include "vm/Debugger.h" |
| #include "vm/ProxyObject.h" |
| #include "vm/Shape.h" |
| #include "vm/String.h" |
| #include "vm/Symbol.h" |
| #include "vm/Time.h" |
| #include "vm/TraceLogging.h" |
| #include "vm/WrapperObject.h" |
| |
| #include "jsobjinlines.h" |
| #include "jsscriptinlines.h" |
| |
| #include "vm/Stack-inl.h" |
| #include "vm/String-inl.h" |
| |
| using namespace js; |
| using namespace js::gc; |
| |
| using mozilla::ArrayLength; |
| using mozilla::Maybe; |
| using mozilla::Swap; |
| |
| using JS::AutoGCRooter; |
| |
| /* Perform a full GC every 20 seconds (value in microseconds) if MaybeGC is called. */ |
| static const uint64_t GC_IDLE_FULL_SPAN = 20 * 1000 * 1000; |
| |
| /* Increase the IGC marking slice time if we are in highFrequencyGC mode. */ |
| static const int IGC_MARK_SLICE_MULTIPLIER = 2; |
| |
| const AllocKind gc::slotsToThingKind[] = { |
| /* 0 */ AllocKind::OBJECT0, AllocKind::OBJECT2, AllocKind::OBJECT2, AllocKind::OBJECT4, |
| /* 4 */ AllocKind::OBJECT4, AllocKind::OBJECT8, AllocKind::OBJECT8, AllocKind::OBJECT8, |
| /* 8 */ AllocKind::OBJECT8, AllocKind::OBJECT12, AllocKind::OBJECT12, AllocKind::OBJECT12, |
| /* 12 */ AllocKind::OBJECT12, AllocKind::OBJECT16, AllocKind::OBJECT16, AllocKind::OBJECT16, |
| /* 16 */ AllocKind::OBJECT16 |
| }; |
| |
| static_assert(JS_ARRAY_LENGTH(slotsToThingKind) == SLOTS_TO_THING_KIND_LIMIT, |
| "We have defined a slot count for each kind."); |
| |
| // Assert that SortedArenaList::MinThingSize is <= the real minimum thing size. |
| #define CHECK_MIN_THING_SIZE_INNER(x_) \ |
| static_assert(x_ >= SortedArenaList::MinThingSize, \ |
| #x_ " is less than SortedArenaList::MinThingSize!"); |
| #define CHECK_MIN_THING_SIZE(...) { __VA_ARGS__ }; /* Define the array. */ \ |
| MOZ_FOR_EACH(CHECK_MIN_THING_SIZE_INNER, (), (__VA_ARGS__ UINT32_MAX)) |
| |
| const uint32_t Arena::ThingSizes[] = CHECK_MIN_THING_SIZE( |
| sizeof(JSFunction), /* AllocKind::FUNCTION */ |
| sizeof(FunctionExtended), /* AllocKind::FUNCTION_EXTENDED */ |
| sizeof(JSObject_Slots0), /* AllocKind::OBJECT0 */ |
| sizeof(JSObject_Slots0), /* AllocKind::OBJECT0_BACKGROUND */ |
| sizeof(JSObject_Slots2), /* AllocKind::OBJECT2 */ |
| sizeof(JSObject_Slots2), /* AllocKind::OBJECT2_BACKGROUND */ |
| sizeof(JSObject_Slots4), /* AllocKind::OBJECT4 */ |
| sizeof(JSObject_Slots4), /* AllocKind::OBJECT4_BACKGROUND */ |
| sizeof(JSObject_Slots8), /* AllocKind::OBJECT8 */ |
| sizeof(JSObject_Slots8), /* AllocKind::OBJECT8_BACKGROUND */ |
| sizeof(JSObject_Slots12), /* AllocKind::OBJECT12 */ |
| sizeof(JSObject_Slots12), /* AllocKind::OBJECT12_BACKGROUND */ |
| sizeof(JSObject_Slots16), /* AllocKind::OBJECT16 */ |
| sizeof(JSObject_Slots16), /* AllocKind::OBJECT16_BACKGROUND */ |
| sizeof(JSScript), /* AllocKind::SCRIPT */ |
| sizeof(LazyScript), /* AllocKind::LAZY_SCRIPT */ |
| sizeof(Shape), /* AllocKind::SHAPE */ |
| sizeof(AccessorShape), /* AllocKind::ACCESSOR_SHAPE */ |
| sizeof(BaseShape), /* AllocKind::BASE_SHAPE */ |
| sizeof(ObjectGroup), /* AllocKind::OBJECT_GROUP */ |
| sizeof(JSFatInlineString), /* AllocKind::FAT_INLINE_STRING */ |
| sizeof(JSString), /* AllocKind::STRING */ |
| sizeof(JSExternalString), /* AllocKind::EXTERNAL_STRING */ |
| sizeof(JS::Symbol), /* AllocKind::SYMBOL */ |
| sizeof(jit::JitCode), /* AllocKind::JITCODE */ |
| ); |
| |
| #undef CHECK_MIN_THING_SIZE_INNER |
| #undef CHECK_MIN_THING_SIZE |
| |
| #define OFFSET(type) uint32_t(sizeof(ArenaHeader) + (ArenaSize - sizeof(ArenaHeader)) % sizeof(type)) |
| |
| const uint32_t Arena::FirstThingOffsets[] = { |
| OFFSET(JSFunction), /* AllocKind::FUNCTION */ |
| OFFSET(FunctionExtended), /* AllocKind::FUNCTION_EXTENDED */ |
| OFFSET(JSObject_Slots0), /* AllocKind::OBJECT0 */ |
| OFFSET(JSObject_Slots0), /* AllocKind::OBJECT0_BACKGROUND */ |
| OFFSET(JSObject_Slots2), /* AllocKind::OBJECT2 */ |
| OFFSET(JSObject_Slots2), /* AllocKind::OBJECT2_BACKGROUND */ |
| OFFSET(JSObject_Slots4), /* AllocKind::OBJECT4 */ |
| OFFSET(JSObject_Slots4), /* AllocKind::OBJECT4_BACKGROUND */ |
| OFFSET(JSObject_Slots8), /* AllocKind::OBJECT8 */ |
| OFFSET(JSObject_Slots8), /* AllocKind::OBJECT8_BACKGROUND */ |
| OFFSET(JSObject_Slots12), /* AllocKind::OBJECT12 */ |
| OFFSET(JSObject_Slots12), /* AllocKind::OBJECT12_BACKGROUND */ |
| OFFSET(JSObject_Slots16), /* AllocKind::OBJECT16 */ |
| OFFSET(JSObject_Slots16), /* AllocKind::OBJECT16_BACKGROUND */ |
| OFFSET(JSScript), /* AllocKind::SCRIPT */ |
| OFFSET(LazyScript), /* AllocKind::LAZY_SCRIPT */ |
| OFFSET(Shape), /* AllocKind::SHAPE */ |
| OFFSET(AccessorShape), /* AllocKind::ACCESSOR_SHAPE */ |
| OFFSET(BaseShape), /* AllocKind::BASE_SHAPE */ |
| OFFSET(ObjectGroup), /* AllocKind::OBJECT_GROUP */ |
| OFFSET(JSFatInlineString), /* AllocKind::FAT_INLINE_STRING */ |
| OFFSET(JSString), /* AllocKind::STRING */ |
| OFFSET(JSExternalString), /* AllocKind::EXTERNAL_STRING */ |
| OFFSET(JS::Symbol), /* AllocKind::SYMBOL */ |
| OFFSET(jit::JitCode), /* AllocKind::JITCODE */ |
| }; |
| |
| #undef OFFSET |
| |
| struct js::gc::FinalizePhase |
| { |
| size_t length; |
| const AllocKind* kinds; |
| gcstats::Phase statsPhase; |
| }; |
| |
| #define PHASE(x, p) { ArrayLength(x), x, p } |
| |
| /* |
| * Finalization order for incrementally swept things. |
| */ |
| |
| static const AllocKind IncrementalPhaseStrings[] = { |
| AllocKind::EXTERNAL_STRING |
| }; |
| |
| static const AllocKind IncrementalPhaseScripts[] = { |
| AllocKind::SCRIPT, |
| AllocKind::LAZY_SCRIPT |
| }; |
| |
| static const AllocKind IncrementalPhaseJitCode[] = { |
| AllocKind::JITCODE |
| }; |
| |
| static const FinalizePhase IncrementalFinalizePhases[] = { |
| PHASE(IncrementalPhaseStrings, gcstats::PHASE_SWEEP_STRING), |
| PHASE(IncrementalPhaseScripts, gcstats::PHASE_SWEEP_SCRIPT), |
| PHASE(IncrementalPhaseJitCode, gcstats::PHASE_SWEEP_JITCODE) |
| }; |
| |
| /* |
| * Finalization order for things swept in the background. |
| */ |
| |
| static const AllocKind BackgroundPhaseObjects[] = { |
| AllocKind::FUNCTION, |
| AllocKind::FUNCTION_EXTENDED, |
| AllocKind::OBJECT0_BACKGROUND, |
| AllocKind::OBJECT2_BACKGROUND, |
| AllocKind::OBJECT4_BACKGROUND, |
| AllocKind::OBJECT8_BACKGROUND, |
| AllocKind::OBJECT12_BACKGROUND, |
| AllocKind::OBJECT16_BACKGROUND |
| }; |
| |
| static const AllocKind BackgroundPhaseStringsAndSymbols[] = { |
| AllocKind::FAT_INLINE_STRING, |
| AllocKind::STRING, |
| AllocKind::SYMBOL |
| }; |
| |
| static const AllocKind BackgroundPhaseShapes[] = { |
| AllocKind::SHAPE, |
| AllocKind::ACCESSOR_SHAPE, |
| AllocKind::BASE_SHAPE, |
| AllocKind::OBJECT_GROUP |
| }; |
| |
| static const FinalizePhase BackgroundFinalizePhases[] = { |
| PHASE(BackgroundPhaseObjects, gcstats::PHASE_SWEEP_OBJECT), |
| PHASE(BackgroundPhaseStringsAndSymbols, gcstats::PHASE_SWEEP_STRING), |
| PHASE(BackgroundPhaseShapes, gcstats::PHASE_SWEEP_SHAPE) |
| }; |
| |
| #undef PHASE |
| |
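| // Specialization for JSObject: objects are stored in arenas of several |
| // different AllocKinds, so this just casts the current cell without |
| // assuming a particular object kind. |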
| template<> |
| JSObject* |
| ArenaCellIterImpl::get<JSObject>() const |
| { |
| MOZ_ASSERT(!done()); |
| return reinterpret_cast<JSObject*>(getCell()); |
| } |
| |
| #ifdef DEBUG |
| void |
| ArenaHeader::checkSynchronizedWithFreeList() const |
| { |
| /* |
| * The free list must not be accessed while its real head is still stored |
| * in FreeLists and has not yet been synchronized with this one. |
| */ |
| MOZ_ASSERT(allocated()); |
| |
| /* |
| * We can be called from the background finalization thread when the free |
| * list in the zone can mutate at any moment. We cannot do any |
| * checks in this case. |
| */ |
| if (IsBackgroundFinalized(getAllocKind()) && zone->runtimeFromAnyThread()->gc.onBackgroundThread()) |
| return; |
| |
| FreeSpan firstSpan = firstFreeSpan.decompact(arenaAddress()); |
| if (firstSpan.isEmpty()) |
| return; |
| const FreeList* freeList = zone->arenas.getFreeList(getAllocKind()); |
| if (freeList->isEmpty() || firstSpan.arenaAddress() != freeList->arenaAddress()) |
| return; |
| |
| /* |
| * Here this arena has free things, FreeList::lists[thingKind] is not |
| * empty and also points to this arena. Thus they must be the same. |
| */ |
| MOZ_ASSERT(freeList->isSameNonEmptySpan(firstSpan)); |
| } |
| #endif |
| |
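| // Clear every mark bit covering this arena in the chunk's mark bitmap. |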
| void |
| ArenaHeader::unmarkAll() |
| { |
| uintptr_t* word = chunk()->bitmap.arenaBits(this); |
| memset(word, 0, ArenaBitmapWords * sizeof(uintptr_t)); |
| } |
| |
| /* static */ void |
| Arena::staticAsserts() |
| { |
| static_assert(JS_ARRAY_LENGTH(ThingSizes) == size_t(AllocKind::LIMIT), |
| "We haven't defined all thing sizes."); |
| static_assert(JS_ARRAY_LENGTH(FirstThingOffsets) == size_t(AllocKind::LIMIT), |
| "We haven't defined all offsets."); |
| } |
| |
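| // Reset the arena's free list to a single span covering every thing slot, |
| // i.e. record the whole arena as containing no live things. |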
| void |
| Arena::setAsFullyUnused(AllocKind thingKind) |
| { |
| FreeSpan fullSpan; |
| size_t thingSize = Arena::thingSize(thingKind); |
| fullSpan.initFinal(thingsStart(thingKind), thingsEnd() - thingSize, thingSize); |
| aheader.setFirstFreeSpan(&fullSpan); |
| } |
| |
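| // Sweep this arena: finalize every unmarked thing of type T, rebuild the |
| // arena's free-span list to cover the newly freed cells, and return the |
| // number of things that remain marked (and therefore alive). |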
| template<typename T> |
| inline size_t |
| Arena::finalize(FreeOp* fop, AllocKind thingKind, size_t thingSize) |
| { |
| /* Enforce requirements on size of T. */ |
| MOZ_ASSERT(thingSize % CellSize == 0); |
| MOZ_ASSERT(thingSize <= 255); |
| |
| MOZ_ASSERT(aheader.allocated()); |
| MOZ_ASSERT(thingKind == aheader.getAllocKind()); |
| MOZ_ASSERT(thingSize == aheader.getThingSize()); |
| MOZ_ASSERT(!aheader.hasDelayedMarking); |
| MOZ_ASSERT(!aheader.markOverflow); |
| MOZ_ASSERT(!aheader.allocatedDuringIncremental); |
| |
| uintptr_t firstThing = thingsStart(thingKind); |
| uintptr_t firstThingOrSuccessorOfLastMarkedThing = firstThing; |
| uintptr_t lastThing = thingsEnd() - thingSize; |
| |
| FreeSpan newListHead; |
| FreeSpan* newListTail = &newListHead; |
| size_t nmarked = 0; |
| |
| if (MOZ_UNLIKELY(MemProfiler::enabled())) { |
| for (ArenaCellIterUnderFinalize i(&aheader); !i.done(); i.next()) { |
| T* t = i.get<T>(); |
| if (t->asTenured().isMarked()) |
| MemProfiler::MarkTenured(reinterpret_cast<void*>(t)); |
| } |
| } |
| |
| for (ArenaCellIterUnderFinalize i(&aheader); !i.done(); i.next()) { |
| T* t = i.get<T>(); |
| if (t->asTenured().isMarked()) { |
| uintptr_t thing = reinterpret_cast<uintptr_t>(t); |
| if (thing != firstThingOrSuccessorOfLastMarkedThing) { |
| // We just finished passing over one or more free things, |
| // so record a new FreeSpan. |
| newListTail->initBoundsUnchecked(firstThingOrSuccessorOfLastMarkedThing, |
| thing - thingSize); |
| newListTail = newListTail->nextSpanUnchecked(); |
| } |
| firstThingOrSuccessorOfLastMarkedThing = thing + thingSize; |
| nmarked++; |
| } else { |
| t->finalize(fop); |
| JS_POISON(t, JS_SWEPT_TENURED_PATTERN, thingSize); |
| TraceTenuredFinalize(t); |
| } |
| } |
| |
| if (nmarked == 0) { |
| // Do nothing. The caller will update the arena header appropriately. |
| MOZ_ASSERT(newListTail == &newListHead); |
| JS_EXTRA_POISON(data, JS_SWEPT_TENURED_PATTERN, sizeof(data)); |
| return nmarked; |
| } |
| |
| MOZ_ASSERT(firstThingOrSuccessorOfLastMarkedThing != firstThing); |
| uintptr_t lastMarkedThing = firstThingOrSuccessorOfLastMarkedThing - thingSize; |
| if (lastThing == lastMarkedThing) { |
| // If the last thing was marked, we will have already set the bounds of |
| // the final span, and we just need to terminate the list. |
| newListTail->initAsEmpty(); |
| } else { |
| // Otherwise, end the list with a span that covers the final stretch of free things. |
| newListTail->initFinal(firstThingOrSuccessorOfLastMarkedThing, lastThing, thingSize); |
| } |
| |
| #ifdef DEBUG |
| size_t nfree = 0; |
| for (const FreeSpan* span = &newListHead; !span->isEmpty(); span = span->nextSpan()) |
| nfree += span->length(thingSize); |
| MOZ_ASSERT(nfree + nmarked == thingsPerArena(thingSize)); |
| #endif |
| aheader.setFirstFreeSpan(&newListHead); |
| return nmarked; |
| } |
| |
| // Finalize arenas from src list, releasing empty arenas if keepArenas wasn't |
| // specified and inserting the others into the appropriate destination size |
| // bins. |
| template<typename T> |
| static inline bool |
| FinalizeTypedArenas(FreeOp* fop, |
| ArenaHeader** src, |
| SortedArenaList& dest, |
| AllocKind thingKind, |
| SliceBudget& budget, |
| ArenaLists::KeepArenasEnum keepArenas) |
| { |
| // When operating in the foreground, take the lock at the top. |
| Maybe<AutoLockGC> maybeLock; |
| if (!fop->onBackgroundThread()) |
| maybeLock.emplace(fop->runtime()); |
| |
| // During background sweeping, free arenas are released later on in |
| // sweepBackgroundThings(). |
| MOZ_ASSERT_IF(fop->onBackgroundThread(), keepArenas == ArenaLists::KEEP_ARENAS); |
| |
| size_t thingSize = Arena::thingSize(thingKind); |
| size_t thingsPerArena = Arena::thingsPerArena(thingSize); |
| |
| while (ArenaHeader* aheader = *src) { |
| *src = aheader->next; |
| size_t nmarked = aheader->getArena()->finalize<T>(fop, thingKind, thingSize); |
| size_t nfree = thingsPerArena - nmarked; |
| |
| if (nmarked) |
| dest.insertAt(aheader, nfree); |
| else if (keepArenas == ArenaLists::KEEP_ARENAS) |
| aheader->chunk()->recycleArena(aheader, dest, thingKind, thingsPerArena); |
| else |
| fop->runtime()->gc.releaseArena(aheader, maybeLock.ref()); |
| |
| budget.step(thingsPerArena); |
| if (budget.isOverBudget()) |
| return false; |
| } |
| |
| return true; |
| } |
| |
| /* |
| * Finalize the arenas on the |*src| list, dispatching on |thingKind|. Arenas |
| * that still contain live things are inserted into |dest|, sorted by free |
| * cell count; empty arenas are recycled or released according to |keepArenas|. |
| * Returns false if the slice budget was exhausted before the whole list was |
| * processed. |
| */ |
| static bool |
| FinalizeArenas(FreeOp* fop, |
| ArenaHeader** src, |
| SortedArenaList& dest, |
| AllocKind thingKind, |
| SliceBudget& budget, |
| ArenaLists::KeepArenasEnum keepArenas) |
| { |
| switch (thingKind) { |
| case AllocKind::FUNCTION: |
| case AllocKind::FUNCTION_EXTENDED: |
| case AllocKind::OBJECT0: |
| case AllocKind::OBJECT0_BACKGROUND: |
| case AllocKind::OBJECT2: |
| case AllocKind::OBJECT2_BACKGROUND: |
| case AllocKind::OBJECT4: |
| case AllocKind::OBJECT4_BACKGROUND: |
| case AllocKind::OBJECT8: |
| case AllocKind::OBJECT8_BACKGROUND: |
| case AllocKind::OBJECT12: |
| case AllocKind::OBJECT12_BACKGROUND: |
| case AllocKind::OBJECT16: |
| case AllocKind::OBJECT16_BACKGROUND: |
| return FinalizeTypedArenas<JSObject>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::SCRIPT: |
| return FinalizeTypedArenas<JSScript>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::LAZY_SCRIPT: |
| return FinalizeTypedArenas<LazyScript>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::SHAPE: |
| return FinalizeTypedArenas<Shape>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::ACCESSOR_SHAPE: |
| return FinalizeTypedArenas<AccessorShape>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::BASE_SHAPE: |
| return FinalizeTypedArenas<BaseShape>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::OBJECT_GROUP: |
| return FinalizeTypedArenas<ObjectGroup>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::STRING: |
| return FinalizeTypedArenas<JSString>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::FAT_INLINE_STRING: |
| return FinalizeTypedArenas<JSFatInlineString>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::EXTERNAL_STRING: |
| return FinalizeTypedArenas<JSExternalString>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::SYMBOL: |
| return FinalizeTypedArenas<JS::Symbol>(fop, src, dest, thingKind, budget, keepArenas); |
| case AllocKind::JITCODE: |
| return FinalizeTypedArenas<jit::JitCode>(fop, src, dest, thingKind, budget, keepArenas); |
| default: |
| MOZ_CRASH("Invalid alloc kind"); |
| } |
| } |
| |
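| // Remove and return the chunk at the head of the pool, or return nullptr if |
| // the pool is empty. |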
| Chunk* |
| ChunkPool::pop() |
| { |
| MOZ_ASSERT(bool(head_) == bool(count_)); |
| if (!count_) |
| return nullptr; |
| return remove(head_); |
| } |
| |
| void |
| ChunkPool::push(Chunk* chunk) |
| { |
| MOZ_ASSERT(!chunk->info.next); |
| MOZ_ASSERT(!chunk->info.prev); |
| |
| chunk->info.age = 0; |
| chunk->info.next = head_; |
| if (head_) |
| head_->info.prev = chunk; |
| head_ = chunk; |
| ++count_; |
| |
| MOZ_ASSERT(verify()); |
| } |
| |
| Chunk* |
| ChunkPool::remove(Chunk* chunk) |
| { |
| MOZ_ASSERT(count_ > 0); |
| MOZ_ASSERT(contains(chunk)); |
| |
| if (head_ == chunk) |
| head_ = chunk->info.next; |
| if (chunk->info.prev) |
| chunk->info.prev->info.next = chunk->info.next; |
| if (chunk->info.next) |
| chunk->info.next->info.prev = chunk->info.prev; |
| chunk->info.next = chunk->info.prev = nullptr; |
| --count_; |
| |
| MOZ_ASSERT(verify()); |
| return chunk; |
| } |
| |
| #ifdef DEBUG |
| bool |
| ChunkPool::contains(Chunk* chunk) const |
| { |
| verify(); |
| for (Chunk* cursor = head_; cursor; cursor = cursor->info.next) { |
| if (cursor == chunk) |
| return true; |
| } |
| return false; |
| } |
| |
| bool |
| ChunkPool::verify() const |
| { |
| MOZ_ASSERT(bool(head_) == bool(count_)); |
| uint32_t count = 0; |
| for (Chunk* cursor = head_; cursor; cursor = cursor->info.next, ++count) { |
| MOZ_ASSERT_IF(cursor->info.prev, cursor->info.prev->info.next == cursor); |
| MOZ_ASSERT_IF(cursor->info.next, cursor->info.next->info.prev == cursor); |
| } |
| MOZ_ASSERT(count_ == count); |
| return true; |
| } |
| #endif |
| |
| void |
| ChunkPool::Iter::next() |
| { |
| MOZ_ASSERT(!done()); |
| current_ = current_->info.next; |
| } |
| |
| ChunkPool |
| GCRuntime::expireEmptyChunkPool(bool shrinkBuffers, const AutoLockGC& lock) |
| { |
| /* |
| * Return old empty chunks to the system while preserving the order of |
| * other chunks in the list. This way, if the GC runs several times |
| * without emptying the list, the older chunks will stay at the tail |
| * and are more likely to reach the max age. |
| */ |
| MOZ_ASSERT(emptyChunks(lock).verify()); |
| ChunkPool expired; |
| unsigned freeChunkCount = 0; |
| for (ChunkPool::Iter iter(emptyChunks(lock)); !iter.done();) { |
| Chunk* chunk = iter.get(); |
| iter.next(); |
| |
| MOZ_ASSERT(chunk->unused()); |
| MOZ_ASSERT(!fullChunks(lock).contains(chunk)); |
| MOZ_ASSERT(!availableChunks(lock).contains(chunk)); |
| if (freeChunkCount >= tunables.maxEmptyChunkCount() || |
| (freeChunkCount >= tunables.minEmptyChunkCount(lock) && |
| (shrinkBuffers || chunk->info.age == MAX_EMPTY_CHUNK_AGE))) |
| { |
| emptyChunks(lock).remove(chunk); |
| prepareToFreeChunk(chunk->info); |
| expired.push(chunk); |
| } else { |
| /* Keep the chunk but increase its age. */ |
| ++freeChunkCount; |
| ++chunk->info.age; |
| } |
| } |
| MOZ_ASSERT(expired.verify()); |
| MOZ_ASSERT(emptyChunks(lock).verify()); |
| MOZ_ASSERT(emptyChunks(lock).count() <= tunables.maxEmptyChunkCount()); |
| MOZ_ASSERT_IF(shrinkBuffers, emptyChunks(lock).count() <= tunables.minEmptyChunkCount(lock)); |
| return expired; |
| } |
| |
| static void |
| FreeChunkPool(JSRuntime* rt, ChunkPool& pool) |
| { |
| for (ChunkPool::Iter iter(pool); !iter.done();) { |
| Chunk* chunk = iter.get(); |
| iter.next(); |
| pool.remove(chunk); |
| MOZ_ASSERT(!chunk->info.numArenasFreeCommitted); |
| UnmapPages(static_cast<void*>(chunk), ChunkSize); |
| } |
| MOZ_ASSERT(pool.count() == 0); |
| } |
| |
| void |
| GCRuntime::freeEmptyChunks(JSRuntime* rt, const AutoLockGC& lock) |
| { |
| FreeChunkPool(rt, emptyChunks(lock)); |
| } |
| |
| /* static */ Chunk* |
| Chunk::allocate(JSRuntime* rt) |
| { |
| Chunk* chunk = static_cast<Chunk*>(MapAlignedPages(ChunkSize, ChunkSize)); |
| if (!chunk) |
| return nullptr; |
| chunk->init(rt); |
| rt->gc.stats.count(gcstats::STAT_NEW_CHUNK); |
| return chunk; |
| } |
| |
| inline void |
| GCRuntime::prepareToFreeChunk(ChunkInfo& info) |
| { |
| MOZ_ASSERT(numArenasFreeCommitted >= info.numArenasFreeCommitted); |
| numArenasFreeCommitted -= info.numArenasFreeCommitted; |
| stats.count(gcstats::STAT_DESTROY_CHUNK); |
| #ifdef DEBUG |
| /* |
| * Let FreeChunkPool detect a missing prepareToFreeChunk call before it |
| * frees the chunk. |
| */ |
| info.numArenasFreeCommitted = 0; |
| #endif |
| } |
| |
| void |
| Chunk::decommitAllArenas(JSRuntime* rt) |
| { |
| decommittedArenas.clear(true); |
| MarkPagesUnused(&arenas[0], ArenasPerChunk * ArenaSize); |
| |
| info.freeArenasHead = nullptr; |
| info.lastDecommittedArenaOffset = 0; |
| info.numArenasFree = ArenasPerChunk; |
| info.numArenasFreeCommitted = 0; |
| } |
| |
| void |
| Chunk::init(JSRuntime* rt) |
| { |
| JS_POISON(this, JS_FRESH_TENURED_PATTERN, ChunkSize); |
| |
| /* |
| * We clear the bitmap to guard against JS::GCThingIsMarkedGray being called |
| * on uninitialized data, which would happen before the first GC cycle. |
| */ |
| bitmap.clear(); |
| |
| /* |
| * Decommit the arenas. We do this after poisoning so that if the OS does |
| * not have to recycle the pages, we still get the benefit of poisoning. |
| */ |
| decommitAllArenas(rt); |
| |
| /* Initialize the chunk info. */ |
| info.init(); |
| new (&info.trailer) ChunkTrailer(rt); |
| |
| /* The rest of the info fields are initialized in pickChunk. */ |
| } |
| |
| /* |
| * Search for and return the next decommitted Arena. Our goal is to keep |
| * lastDecommittedArenaOffset "close" to a free arena. We do this by setting |
| * it to the most recently freed arena when we free, and forcing it to |
| * the last alloc + 1 when we allocate. |
| */ |
| uint32_t |
| Chunk::findDecommittedArenaOffset() |
| { |
| /* Note: lastDecommittedArenaOffset can be past the end of the list. */ |
| for (unsigned i = info.lastDecommittedArenaOffset; i < ArenasPerChunk; i++) |
| if (decommittedArenas.get(i)) |
| return i; |
| for (unsigned i = 0; i < info.lastDecommittedArenaOffset; i++) |
| if (decommittedArenas.get(i)) |
| return i; |
| MOZ_CRASH("No decommitted arenas found."); |
| } |
| |
| ArenaHeader* |
| Chunk::fetchNextDecommittedArena() |
| { |
| MOZ_ASSERT(info.numArenasFreeCommitted == 0); |
| MOZ_ASSERT(info.numArenasFree > 0); |
| |
| unsigned offset = findDecommittedArenaOffset(); |
| info.lastDecommittedArenaOffset = offset + 1; |
| --info.numArenasFree; |
| decommittedArenas.unset(offset); |
| |
| Arena* arena = &arenas[offset]; |
| MarkPagesInUse(arena, ArenaSize); |
| arena->aheader.setAsNotAllocated(); |
| |
| return &arena->aheader; |
| } |
| |
| inline void |
| GCRuntime::updateOnFreeArenaAlloc(const ChunkInfo& info) |
| { |
| MOZ_ASSERT(info.numArenasFreeCommitted <= numArenasFreeCommitted); |
| --numArenasFreeCommitted; |
| } |
| |
| inline ArenaHeader* |
| Chunk::fetchNextFreeArena(JSRuntime* rt) |
| { |
| MOZ_ASSERT(info.numArenasFreeCommitted > 0); |
| MOZ_ASSERT(info.numArenasFreeCommitted <= info.numArenasFree); |
| |
| ArenaHeader* aheader = info.freeArenasHead; |
| info.freeArenasHead = aheader->next; |
| --info.numArenasFreeCommitted; |
| --info.numArenasFree; |
| rt->gc.updateOnFreeArenaAlloc(info); |
| |
| return aheader; |
| } |
| |
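| // Hand out an arena from this chunk, preferring an already-committed free |
| // arena and falling back to recommitting a decommitted one, then move the |
| // chunk to the full-chunks list if this was its last free arena. |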
| ArenaHeader* |
| Chunk::allocateArena(JSRuntime* rt, Zone* zone, AllocKind thingKind, const AutoLockGC& lock) |
| { |
| ArenaHeader* aheader = info.numArenasFreeCommitted > 0 |
| ? fetchNextFreeArena(rt) |
| : fetchNextDecommittedArena(); |
| aheader->init(zone, thingKind); |
| updateChunkListAfterAlloc(rt, lock); |
| return aheader; |
| } |
| |
| inline void |
| GCRuntime::updateOnArenaFree(const ChunkInfo& info) |
| { |
| ++numArenasFreeCommitted; |
| } |
| |
| void |
| Chunk::addArenaToFreeList(JSRuntime* rt, ArenaHeader* aheader) |
| { |
| MOZ_ASSERT(!aheader->allocated()); |
| aheader->next = info.freeArenasHead; |
| info.freeArenasHead = aheader; |
| ++info.numArenasFreeCommitted; |
| ++info.numArenasFree; |
| rt->gc.updateOnArenaFree(info); |
| } |
| |
| void |
| Chunk::addArenaToDecommittedList(JSRuntime* rt, const ArenaHeader* aheader) |
| { |
| ++info.numArenasFree; |
| decommittedArenas.set(Chunk::arenaIndex(aheader->arenaAddress())); |
| } |
| |
| void |
| Chunk::recycleArena(ArenaHeader* aheader, SortedArenaList& dest, AllocKind thingKind, |
| size_t thingsPerArena) |
| { |
| aheader->getArena()->setAsFullyUnused(thingKind); |
| dest.insertAt(aheader, thingsPerArena); |
| } |
| |
| void |
| Chunk::releaseArena(JSRuntime* rt, ArenaHeader* aheader, const AutoLockGC& lock) |
| { |
| MOZ_ASSERT(aheader->allocated()); |
| MOZ_ASSERT(!aheader->hasDelayedMarking); |
| |
| aheader->setAsNotAllocated(); |
| addArenaToFreeList(rt, aheader); |
| updateChunkListAfterFree(rt, lock); |
| } |
| |
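| // Decommit a single committed free arena, dropping the GC lock around the |
| // system call. Returns false (and puts the arena back on the free list) if |
| // the pages could not be marked as unused. |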
| bool |
| Chunk::decommitOneFreeArena(JSRuntime* rt, AutoLockGC& lock) |
| { |
| MOZ_ASSERT(info.numArenasFreeCommitted > 0); |
| ArenaHeader* aheader = fetchNextFreeArena(rt); |
| updateChunkListAfterAlloc(rt, lock); |
| |
| bool ok; |
| { |
| AutoUnlockGC unlock(lock); |
| ok = MarkPagesUnused(aheader->getArena(), ArenaSize); |
| } |
| |
| if (ok) |
| addArenaToDecommittedList(rt, aheader); |
| else |
| addArenaToFreeList(rt, aheader); |
| updateChunkListAfterFree(rt, lock); |
| |
| return ok; |
| } |
| |
| void |
| Chunk::decommitAllArenasWithoutUnlocking(const AutoLockGC& lock) |
| { |
| for (size_t i = 0; i < ArenasPerChunk; ++i) { |
| if (decommittedArenas.get(i) || arenas[i].aheader.allocated()) |
| continue; |
| |
| if (MarkPagesUnused(&arenas[i], ArenaSize)) { |
| info.numArenasFreeCommitted--; |
| decommittedArenas.set(i); |
| } |
| } |
| } |
| |
| void |
| Chunk::updateChunkListAfterAlloc(JSRuntime* rt, const AutoLockGC& lock) |
| { |
| if (MOZ_UNLIKELY(!hasAvailableArenas())) { |
| rt->gc.availableChunks(lock).remove(this); |
| rt->gc.fullChunks(lock).push(this); |
| } |
| } |
| |
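| // After freeing an arena, move this chunk onto the list that matches its new |
| // state: from the full list to the available list when its first arena is |
| // freed, or from the available list to the empty list once no arenas remain |
| // in use. |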
| void |
| Chunk::updateChunkListAfterFree(JSRuntime* rt, const AutoLockGC& lock) |
| { |
| if (info.numArenasFree == 1) { |
| rt->gc.fullChunks(lock).remove(this); |
| rt->gc.availableChunks(lock).push(this); |
| } else if (!unused()) { |
| MOZ_ASSERT(!rt->gc.fullChunks(lock).contains(this)); |
| MOZ_ASSERT(rt->gc.availableChunks(lock).contains(this)); |
| MOZ_ASSERT(!rt->gc.emptyChunks(lock).contains(this)); |
| } else { |
| MOZ_ASSERT(unused()); |
| rt->gc.availableChunks(lock).remove(this); |
| decommitAllArenas(rt); |
| rt->gc.emptyChunks(lock).push(this); |
| } |
| } |
| |
| inline bool |
| GCRuntime::wantBackgroundAllocation(const AutoLockGC& lock) const |
| { |
| // To minimize memory waste, we do not want to run the background chunk |
| // allocation if we already have some empty chunks or when the runtime has |
| // a small heap size (and therefore likely has a small growth rate). |
| return allocTask.enabled() && |
| emptyChunks(lock).count() < tunables.minEmptyChunkCount(lock) && |
| (fullChunks(lock).count() + availableChunks(lock).count()) >= 4; |
| } |
| |
| void |
| GCRuntime::startBackgroundAllocTaskIfIdle() |
| { |
| AutoLockHelperThreadState helperLock; |
| if (allocTask.isRunning()) |
| return; |
| |
| // Join the previous invocation of the task. This will return immediately |
| // if the thread has never been started. |
| allocTask.joinWithLockHeld(); |
| allocTask.startWithLockHeld(); |
| } |
| |
| Chunk* |
| GCRuntime::pickChunk(const AutoLockGC& lock, |
| AutoMaybeStartBackgroundAllocation& maybeStartBackgroundAllocation) |
| { |
| if (availableChunks(lock).count()) |
| return availableChunks(lock).head(); |
| |
| Chunk* chunk = emptyChunks(lock).pop(); |
| if (!chunk) { |
| chunk = Chunk::allocate(rt); |
| if (!chunk) |
| return nullptr; |
| MOZ_ASSERT(chunk->info.numArenasFreeCommitted == 0); |
| } |
| |
| MOZ_ASSERT(chunk->unused()); |
| MOZ_ASSERT(!fullChunks(lock).contains(chunk)); |
| |
| if (wantBackgroundAllocation(lock)) |
| maybeStartBackgroundAllocation.tryToStartBackgroundAllocation(rt); |
| |
| chunkAllocationSinceLastGC = true; |
| |
| availableChunks(lock).push(chunk); |
| |
| return chunk; |
| } |
| |
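| // Allocate an arena from |chunk| for |zone|, failing if that would push the |
| // heap past its configured maximum size, and possibly triggering an |
| // incremental zone GC if the zone is approaching its own threshold. |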
| ArenaHeader* |
| GCRuntime::allocateArena(Chunk* chunk, Zone* zone, AllocKind thingKind, const AutoLockGC& lock) |
| { |
| MOZ_ASSERT(chunk->hasAvailableArenas()); |
| |
| // Fail the allocation if we are over our heap size limits. |
| if (!rt->isHeapMinorCollecting() && |
| !isHeapCompacting() && |
| usage.gcBytes() >= tunables.gcMaxBytes()) |
| { |
| return nullptr; |
| } |
| |
| ArenaHeader* aheader = chunk->allocateArena(rt, zone, thingKind, lock); |
| zone->usage.addGCArena(); |
| |
| // Trigger an incremental slice if needed. |
| if (!rt->isHeapMinorCollecting() && !isHeapCompacting()) |
| maybeAllocTriggerZoneGC(zone, lock); |
| |
| return aheader; |
| } |
| |
| void |
| GCRuntime::releaseArena(ArenaHeader* aheader, const AutoLockGC& lock) |
| { |
| aheader->zone->usage.removeGCArena(); |
| if (isBackgroundSweeping()) |
| aheader->zone->threshold.updateForRemovedArena(tunables); |
| return aheader->chunk()->releaseArena(rt, aheader, lock); |
| } |
| |
| GCRuntime::GCRuntime(JSRuntime* rt) : |
| rt(rt), |
| systemZone(nullptr), |
| nursery(rt), |
| storeBuffer(rt, nursery), |
| stats(rt), |
| marker(rt), |
| usage(nullptr), |
| mMemProfiler(rt), |
| maxMallocBytes(0), |
| nextCellUniqueId_(LargestTaggedNullCellPointer + 1), // Ensure disjoint from null tagged pointers. |
| numArenasFreeCommitted(0), |
| verifyPreData(nullptr), |
| chunkAllocationSinceLastGC(false), |
| nextFullGCTime(0), |
| lastGCTime(PRMJ_Now()), |
| mode(JSGC_MODE_INCREMENTAL), |
| numActiveZoneIters(0), |
| decommitThreshold(32 * 1024 * 1024), |
| cleanUpEverything(false), |
| grayBufferState(GCRuntime::GrayBufferState::Unused), |
| grayBitsValid(false), |
| majorGCTriggerReason(JS::gcreason::NO_REASON), |
| minorGCTriggerReason(JS::gcreason::NO_REASON), |
| fullGCForAtomsRequested_(false), |
| minorGCNumber(0), |
| majorGCNumber(0), |
| jitReleaseNumber(0), |
| number(0), |
| startNumber(0), |
| isFull(false), |
| #ifdef DEBUG |
| disableStrictProxyCheckingCount(0), |
| #endif |
| incrementalState(gc::NO_INCREMENTAL), |
| lastMarkSlice(false), |
| sweepOnBackgroundThread(false), |
| foundBlackGrayEdges(false), |
| freeLifoAlloc(JSRuntime::TEMP_LIFO_ALLOC_PRIMARY_CHUNK_SIZE), |
| zoneGroupIndex(0), |
| zoneGroups(nullptr), |
| currentZoneGroup(nullptr), |
| sweepZone(nullptr), |
| sweepKindIndex(0), |
| abortSweepAfterCurrentGroup(false), |
| arenasAllocatedDuringSweep(nullptr), |
| startedCompacting(false), |
| relocatedArenasToRelease(nullptr), |
| #ifdef JS_GC_MARKING_VALIDATION |
| markingValidator(nullptr), |
| #endif |
| interFrameGC(false), |
| defaultTimeBudget_(SliceBudget::UnlimitedTimeBudget), |
| incrementalAllowed(true), |
| generationalDisabled(0), |
| compactingEnabled(true), |
| compactingDisabledCount(0), |
| manipulatingDeadZones(false), |
| objectsMarkedInDeadZones(0), |
| poked(false), |
| #ifdef JS_GC_ZEAL |
| zealMode(0), |
| zealFrequency(0), |
| nextScheduled(0), |
| deterministicOnly(false), |
| incrementalLimit(0), |
| #endif |
| validate(true), |
| fullCompartmentChecks(false), |
| mallocBytesUntilGC(0), |
| mallocGCTriggered(false), |
| alwaysPreserveCode(false), |
| #ifdef DEBUG |
| inUnsafeRegion(0), |
| noGCOrAllocationCheck(0), |
| #endif |
| lock(nullptr), |
| allocTask(rt, emptyChunks_), |
| helperState(rt) |
| { |
| setGCMode(JSGC_MODE_GLOBAL); |
| } |
| |
| #ifdef JS_GC_ZEAL |
| |
| void |
| GCRuntime::getZeal(uint8_t* zeal, uint32_t* frequency, uint32_t* scheduled) |
| { |
| *zeal = zealMode; |
| *frequency = zealFrequency; |
| *scheduled = nextScheduled; |
| } |
| |
| const char* gc::ZealModeHelpText = |
| " Specifies how zealous the garbage collector should be. Values for level:\n" |
| " 0: Normal amount of collection\n" |
| " 1: Collect when roots are added or removed\n" |
| " 2: Collect when every N allocations (default: 100)\n" |
| " 3: Collect when the window paints (browser only)\n" |
| " 4: Verify pre write barriers between instructions\n" |
| " 5: Verify pre write barriers between paints\n" |
| " 6: Verify stack rooting\n" |
| " 7: Collect the nursery every N nursery allocations\n" |
| " 8: Incremental GC in two slices: 1) mark roots 2) finish collection\n" |
| " 9: Incremental GC in two slices: 1) mark all 2) new marking and finish\n" |
| " 10: Incremental GC in multiple slices\n" |
| " 11: unused\n" |
| " 12: unused\n" |
| " 13: Check internal hashtables on minor GC\n" |
| " 14: Perform a shrinking collection every N allocations\n"; |
| |
| void |
| GCRuntime::setZeal(uint8_t zeal, uint32_t frequency) |
| { |
| if (verifyPreData) |
| VerifyBarriers(rt, PreBarrierVerifier); |
| |
| if (zealMode == ZealGenerationalGCValue) { |
| evictNursery(JS::gcreason::DEBUG_GC); |
| nursery.leaveZealMode(); |
| } |
| |
| if (zeal == ZealGenerationalGCValue) |
| nursery.enterZealMode(); |
| |
| bool schedule = zeal >= js::gc::ZealAllocValue; |
| zealMode = zeal; |
| zealFrequency = frequency; |
| nextScheduled = schedule ? frequency : 0; |
| } |
| |
| void |
| GCRuntime::setNextScheduled(uint32_t count) |
| { |
| nextScheduled = count; |
| } |
| |
| bool |
| GCRuntime::parseAndSetZeal(const char* str) |
| { |
| int zeal = -1; |
| int frequency = -1; |
| |
| if (isdigit(str[0])) { |
| zeal = atoi(str); |
| |
| const char* p = strchr(str, ','); |
| if (!p) |
| frequency = JS_DEFAULT_ZEAL_FREQ; |
| else |
| frequency = atoi(p + 1); |
| } |
| |
| if (zeal < 0 || zeal > ZealLimit || frequency <= 0) { |
| fprintf(stderr, "Format: JS_GC_ZEAL=level[,N]\n"); |
| fputs(ZealModeHelpText, stderr); |
| return false; |
| } |
| |
| setZeal(zeal, frequency); |
| return true; |
| } |
| |
| #endif |
| |
| /* |
| * Lifetime in number of major GCs for type sets attached to scripts containing |
| * observed types. |
| */ |
| static const uint64_t JIT_SCRIPT_RELEASE_TYPES_PERIOD = 20; |
| |
| bool |
| GCRuntime::init(uint32_t maxbytes, uint32_t maxNurseryBytes) |
| { |
| InitMemorySubsystem(); |
| |
| lock = PR_NewLock(); |
| if (!lock) |
| return false; |
| |
| if (!rootsHash.init(256)) |
| return false; |
| |
| if (!helperState.init()) |
| return false; |
| |
| /* |
| * Separate gcMaxMallocBytes from gcMaxBytes but initialize to maxbytes |
| * for default backward API compatibility. |
| */ |
| AutoLockGC lock(rt); |
| tunables.setParameter(JSGC_MAX_BYTES, maxbytes, lock); |
| setMaxMallocBytes(maxbytes); |
| |
| const char* size = js_sb_getenv("JSGC_MARK_STACK_LIMIT"); |
| if (size) |
| setMarkStackLimit(atoi(size), lock); |
| |
| jitReleaseNumber = majorGCNumber + JIT_SCRIPT_RELEASE_TYPES_PERIOD; |
| |
| if (!nursery.init(maxNurseryBytes)) |
| return false; |
| |
| if (!nursery.isEnabled()) { |
| MOZ_ASSERT(nursery.nurserySize() == 0); |
| ++rt->gc.generationalDisabled; |
| } else { |
| MOZ_ASSERT(nursery.nurserySize() > 0); |
| if (!storeBuffer.enable()) |
| return false; |
| } |
| |
| #ifdef JS_GC_ZEAL |
| const char* zealSpec = js_sb_getenv("JS_GC_ZEAL"); |
| if (zealSpec && zealSpec[0] && !parseAndSetZeal(zealSpec)) |
| return false; |
| #endif |
| |
| if (!InitTrace(*this)) |
| return false; |
| |
| if (!marker.init(mode)) |
| return false; |
| |
| return true; |
| } |
| |
| void |
| GCRuntime::finish() |
| { |
| /* Wait for the nursery sweeping to end. */ |
| if (rt->gc.nursery.isEnabled()) |
| rt->gc.nursery.waitBackgroundFreeEnd(); |
| |
| /* |
| * Wait until the background finalization and allocation stops and the |
| * helper thread shuts down before we forcefully release any remaining GC |
| * memory. |
| */ |
| helperState.finish(); |
| allocTask.cancel(GCParallelTask::CancelAndWait); |
| |
| #ifdef JS_GC_ZEAL |
| /* Free memory associated with GC verification. */ |
| finishVerifier(); |
| #endif |
| |
| /* Delete all remaining zones. */ |
| if (rt->gcInitialized) { |
| AutoSetThreadIsSweeping threadIsSweeping; |
| for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
| for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) |
| js_delete(comp.get()); |
| js_delete(zone.get()); |
| } |
| } |
| |
| zones.clear(); |
| |
| FreeChunkPool(rt, fullChunks_); |
| FreeChunkPool(rt, availableChunks_); |
| FreeChunkPool(rt, emptyChunks_); |
| |
| if (lock) { |
| PR_DestroyLock(lock); |
| lock = nullptr; |
| } |
| |
| FinishTrace(); |
| } |
| |
| template <typename T> |
| static void |
| FinishPersistentRootedChain(mozilla::LinkedList<PersistentRooted<T>>& list) |
| { |
| while (!list.isEmpty()) |
| list.getFirst()->reset(); |
| } |
| |
| void |
| js::gc::FinishPersistentRootedChains(RootLists& roots) |
| { |
| FinishPersistentRootedChain(roots.getPersistentRootedList<JSObject*>()); |
| FinishPersistentRootedChain(roots.getPersistentRootedList<JSScript*>()); |
| FinishPersistentRootedChain(roots.getPersistentRootedList<JSString*>()); |
| FinishPersistentRootedChain(roots.getPersistentRootedList<jsid>()); |
| FinishPersistentRootedChain(roots.getPersistentRootedList<Value>()); |
| FinishPersistentRootedChain(roots.heapRoots_[THING_ROOT_TRACEABLE]); |
| } |
| |
| void |
| GCRuntime::finishRoots() |
| { |
| if (rootsHash.initialized()) |
| rootsHash.clear(); |
| |
| FinishPersistentRootedChains(rt->mainThread.roots); |
| } |
| |
| void |
| GCRuntime::setParameter(JSGCParamKey key, uint32_t value, AutoLockGC& lock) |
| { |
| switch (key) { |
| case JSGC_MAX_MALLOC_BYTES: |
| setMaxMallocBytes(value); |
| for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) |
| zone->setGCMaxMallocBytes(maxMallocBytesAllocated() * 0.9); |
| break; |
| case JSGC_SLICE_TIME_BUDGET: |
| defaultTimeBudget_ = value ? value : SliceBudget::UnlimitedTimeBudget; |
| break; |
| case JSGC_MARK_STACK_LIMIT: |
| setMarkStackLimit(value, lock); |
| break; |
| case JSGC_DECOMMIT_THRESHOLD: |
| decommitThreshold = value * 1024 * 1024; |
| break; |
| case JSGC_MODE: |
| mode = JSGCMode(value); |
| MOZ_ASSERT(mode == JSGC_MODE_GLOBAL || |
| mode == JSGC_MODE_COMPARTMENT || |
| mode == JSGC_MODE_INCREMENTAL); |
| break; |
| case JSGC_COMPACTING_ENABLED: |
| compactingEnabled = value != 0; |
| break; |
| default: |
| tunables.setParameter(key, value, lock); |
| for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) { |
| zone->threshold.updateAfterGC(zone->usage.gcBytes(), GC_NORMAL, tunables, |
| schedulingState, lock); |
| } |
| } |
| } |
| |
| void |
| GCSchedulingTunables::setParameter(JSGCParamKey key, uint32_t value, const AutoLockGC& lock) |
| { |
| switch(key) { |
| case JSGC_MAX_BYTES: |
| gcMaxBytes_ = value; |
| break; |
| case JSGC_HIGH_FREQUENCY_TIME_LIMIT: |
| highFrequencyThresholdUsec_ = value * PRMJ_USEC_PER_MSEC; |
| break; |
| case JSGC_HIGH_FREQUENCY_LOW_LIMIT: |
| highFrequencyLowLimitBytes_ = value * 1024 * 1024; |
| if (highFrequencyLowLimitBytes_ >= highFrequencyHighLimitBytes_) |
| highFrequencyHighLimitBytes_ = highFrequencyLowLimitBytes_ + 1; |
| MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_); |
| break; |
| case JSGC_HIGH_FREQUENCY_HIGH_LIMIT: |
| MOZ_ASSERT(value > 0); |
| highFrequencyHighLimitBytes_ = value * 1024 * 1024; |
| if (highFrequencyHighLimitBytes_ <= highFrequencyLowLimitBytes_) |
| highFrequencyLowLimitBytes_ = highFrequencyHighLimitBytes_ - 1; |
| MOZ_ASSERT(highFrequencyHighLimitBytes_ > highFrequencyLowLimitBytes_); |
| break; |
| case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX: |
| highFrequencyHeapGrowthMax_ = value / 100.0; |
| MOZ_ASSERT(highFrequencyHeapGrowthMax_ / 0.85 > 1.0); |
| break; |
| case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN: |
| highFrequencyHeapGrowthMin_ = value / 100.0; |
| MOZ_ASSERT(highFrequencyHeapGrowthMin_ / 0.85 > 1.0); |
| break; |
| case JSGC_LOW_FREQUENCY_HEAP_GROWTH: |
| lowFrequencyHeapGrowth_ = value / 100.0; |
| MOZ_ASSERT(lowFrequencyHeapGrowth_ / 0.9 > 1.0); |
| break; |
| case JSGC_DYNAMIC_HEAP_GROWTH: |
| dynamicHeapGrowthEnabled_ = value != 0; |
| break; |
| case JSGC_DYNAMIC_MARK_SLICE: |
| dynamicMarkSliceEnabled_ = value != 0; |
| break; |
| case JSGC_ALLOCATION_THRESHOLD: |
| gcZoneAllocThresholdBase_ = value * 1024 * 1024; |
| break; |
| case JSGC_MIN_EMPTY_CHUNK_COUNT: |
| minEmptyChunkCount_ = value; |
| if (minEmptyChunkCount_ > maxEmptyChunkCount_) |
| maxEmptyChunkCount_ = minEmptyChunkCount_; |
| MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_); |
| break; |
| case JSGC_MAX_EMPTY_CHUNK_COUNT: |
| maxEmptyChunkCount_ = value; |
| if (minEmptyChunkCount_ > maxEmptyChunkCount_) |
| minEmptyChunkCount_ = maxEmptyChunkCount_; |
| MOZ_ASSERT(maxEmptyChunkCount_ >= minEmptyChunkCount_); |
| break; |
| default: |
| MOZ_CRASH("Unknown GC parameter."); |
| } |
| } |
| |
| uint32_t |
| GCRuntime::getParameter(JSGCParamKey key, const AutoLockGC& lock) |
| { |
| switch (key) { |
| case JSGC_MAX_BYTES: |
| return uint32_t(tunables.gcMaxBytes()); |
| case JSGC_MAX_MALLOC_BYTES: |
| return maxMallocBytes; |
| case JSGC_BYTES: |
| return uint32_t(usage.gcBytes()); |
| case JSGC_MODE: |
| return uint32_t(mode); |
| case JSGC_UNUSED_CHUNKS: |
| return uint32_t(emptyChunks(lock).count()); |
| case JSGC_TOTAL_CHUNKS: |
| return uint32_t(fullChunks(lock).count() + |
| availableChunks(lock).count() + |
| emptyChunks(lock).count()); |
| case JSGC_SLICE_TIME_BUDGET: |
| if (defaultTimeBudget_ == SliceBudget::UnlimitedTimeBudget) { |
| return 0; |
| } else { |
| MOZ_RELEASE_ASSERT(defaultTimeBudget_ >= 0); |
| MOZ_RELEASE_ASSERT(defaultTimeBudget_ < UINT32_MAX); |
| return uint32_t(defaultTimeBudget_); |
| } |
| case JSGC_MARK_STACK_LIMIT: |
| return marker.maxCapacity(); |
| case JSGC_HIGH_FREQUENCY_TIME_LIMIT: |
| return tunables.highFrequencyThresholdUsec(); |
| case JSGC_HIGH_FREQUENCY_LOW_LIMIT: |
| return tunables.highFrequencyLowLimitBytes() / 1024 / 1024; |
| case JSGC_HIGH_FREQUENCY_HIGH_LIMIT: |
| return tunables.highFrequencyHighLimitBytes() / 1024 / 1024; |
| case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MAX: |
| return uint32_t(tunables.highFrequencyHeapGrowthMax() * 100); |
| case JSGC_HIGH_FREQUENCY_HEAP_GROWTH_MIN: |
| return uint32_t(tunables.highFrequencyHeapGrowthMin() * 100); |
| case JSGC_LOW_FREQUENCY_HEAP_GROWTH: |
| return uint32_t(tunables.lowFrequencyHeapGrowth() * 100); |
| case JSGC_DYNAMIC_HEAP_GROWTH: |
| return tunables.isDynamicHeapGrowthEnabled(); |
| case JSGC_DYNAMIC_MARK_SLICE: |
| return tunables.isDynamicMarkSliceEnabled(); |
| case JSGC_ALLOCATION_THRESHOLD: |
| return tunables.gcZoneAllocThresholdBase() / 1024 / 1024; |
| case JSGC_MIN_EMPTY_CHUNK_COUNT: |
| return tunables.minEmptyChunkCount(lock); |
| case JSGC_MAX_EMPTY_CHUNK_COUNT: |
| return tunables.maxEmptyChunkCount(); |
| case JSGC_COMPACTING_ENABLED: |
| return compactingEnabled; |
| default: |
| MOZ_ASSERT(key == JSGC_NUMBER); |
| return uint32_t(number); |
| } |
| } |
| |
| void |
| GCRuntime::setMarkStackLimit(size_t limit, AutoLockGC& lock) |
| { |
| MOZ_ASSERT(!rt->isHeapBusy()); |
| AutoUnlockGC unlock(lock); |
| AutoStopVerifyingBarriers pauseVerification(rt, false); |
| marker.setMaxCapacity(limit); |
| } |
| |
| bool |
| GCRuntime::addBlackRootsTracer(JSTraceDataOp traceOp, void* data) |
| { |
| AssertHeapIsIdle(rt); |
| return !!blackRootTracers.append(Callback<JSTraceDataOp>(traceOp, data)); |
| } |
| |
| void |
| GCRuntime::removeBlackRootsTracer(JSTraceDataOp traceOp, void* data) |
| { |
| // Can be called from finalizers |
| for (size_t i = 0; i < blackRootTracers.length(); i++) { |
| Callback<JSTraceDataOp>* e = &blackRootTracers[i]; |
| if (e->op == traceOp && e->data == data) { |
| blackRootTracers.erase(e); |
| } |
| } |
| } |
| |
| void |
| GCRuntime::setGrayRootsTracer(JSTraceDataOp traceOp, void* data) |
| { |
| AssertHeapIsIdle(rt); |
| grayRootTracer.op = traceOp; |
| grayRootTracer.data = data; |
| } |
| |
| void |
| GCRuntime::setGCCallback(JSGCCallback callback, void* data) |
| { |
| gcCallback.op = callback; |
| gcCallback.data = data; |
| } |
| |
| void |
| GCRuntime::callGCCallback(JSGCStatus status) const |
| { |
| if (gcCallback.op) |
| gcCallback.op(rt, status, gcCallback.data); |
| } |
| |
| namespace { |
| |
| class AutoNotifyGCActivity { |
| public: |
| explicit AutoNotifyGCActivity(GCRuntime& gc) : gc_(gc) { |
| if (!gc_.isIncrementalGCInProgress()) { |
| gcstats::AutoPhase ap(gc_.stats, gcstats::PHASE_GC_BEGIN); |
| gc_.callGCCallback(JSGC_BEGIN); |
| } |
| } |
| ~AutoNotifyGCActivity() { |
| if (!gc_.isIncrementalGCInProgress()) { |
| gcstats::AutoPhase ap(gc_.stats, gcstats::PHASE_GC_END); |
| gc_.callGCCallback(JSGC_END); |
| } |
| } |
| |
| private: |
| GCRuntime& gc_; |
| }; |
| |
| } // (anon) |
| |
| bool |
| GCRuntime::addFinalizeCallback(JSFinalizeCallback callback, void* data) |
| { |
| return finalizeCallbacks.append(Callback<JSFinalizeCallback>(callback, data)); |
| } |
| |
| void |
| GCRuntime::removeFinalizeCallback(JSFinalizeCallback callback) |
| { |
| for (Callback<JSFinalizeCallback>* p = finalizeCallbacks.begin(); |
| p < finalizeCallbacks.end(); p++) |
| { |
| if (p->op == callback) { |
| finalizeCallbacks.erase(p); |
| break; |
| } |
| } |
| } |
| |
| void |
| GCRuntime::callFinalizeCallbacks(FreeOp* fop, JSFinalizeStatus status) const |
| { |
| for (auto& p : finalizeCallbacks) { |
| p.op(fop, status, !isFull, p.data); |
| } |
| } |
| |
| bool |
| GCRuntime::addWeakPointerZoneGroupCallback(JSWeakPointerZoneGroupCallback callback, void* data) |
| { |
| return updateWeakPointerZoneGroupCallbacks.append( |
| Callback<JSWeakPointerZoneGroupCallback>(callback, data)); |
| } |
| |
| void |
| GCRuntime::removeWeakPointerZoneGroupCallback(JSWeakPointerZoneGroupCallback callback) |
| { |
| for (auto& p : updateWeakPointerZoneGroupCallbacks) { |
| if (p.op == callback) { |
| updateWeakPointerZoneGroupCallbacks.erase(&p); |
| break; |
| } |
| } |
| } |
| |
| void |
| GCRuntime::callWeakPointerZoneGroupCallbacks() const |
| { |
| for (auto const& p : updateWeakPointerZoneGroupCallbacks) { |
| p.op(rt, p.data); |
| } |
| } |
| |
| bool |
| GCRuntime::addWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback, void* data) |
| { |
| return updateWeakPointerCompartmentCallbacks.append( |
| Callback<JSWeakPointerCompartmentCallback>(callback, data)); |
| } |
| |
| void |
| GCRuntime::removeWeakPointerCompartmentCallback(JSWeakPointerCompartmentCallback callback) |
| { |
| for (auto& p : updateWeakPointerCompartmentCallbacks) { |
| if (p.op == callback) { |
| updateWeakPointerCompartmentCallbacks.erase(&p); |
| break; |
| } |
| } |
| } |
| |
| void |
| GCRuntime::callWeakPointerCompartmentCallbacks(JSCompartment* comp) const |
| { |
| for (auto const& p : updateWeakPointerCompartmentCallbacks) { |
| p.op(rt, comp, p.data); |
| } |
| } |
| |
| JS::GCSliceCallback |
| GCRuntime::setSliceCallback(JS::GCSliceCallback callback) { |
| return stats.setSliceCallback(callback); |
| } |
| |
| bool |
| GCRuntime::addRoot(Value* vp, const char* name) |
| { |
| /* |
| * Sometimes Firefox will hold weak references to objects and then convert |
| * them to strong references by calling AddRoot (e.g., via PreserveWrapper, |
| * or ModifyBusyCount in workers). We need a read barrier to cover these |
| * cases. |
| */ |
| if (isIncrementalGCInProgress()) |
| HeapValue::writeBarrierPre(*vp); |
| |
| return rootsHash.put(vp, name); |
| } |
| |
| void |
| GCRuntime::removeRoot(Value* vp) |
| { |
| rootsHash.remove(vp); |
| poke(); |
| } |
| |
| extern JS_FRIEND_API(bool) |
| js::AddRawValueRoot(JSContext* cx, Value* vp, const char* name) |
| { |
| MOZ_ASSERT(vp); |
| MOZ_ASSERT(name); |
| bool ok = cx->runtime()->gc.addRoot(vp, name); |
| if (!ok) |
| JS_ReportOutOfMemory(cx); |
| return ok; |
| } |
| |
| extern JS_FRIEND_API(void) |
| js::RemoveRawValueRoot(JSContext* cx, Value* vp) |
| { |
| cx->runtime()->gc.removeRoot(vp); |
| } |
| |
| void |
| GCRuntime::setMaxMallocBytes(size_t value) |
| { |
| /* |
| * For compatibility, clamp any value that exceeds PTRDIFF_MAX down to |
| * PTRDIFF_MAX, the largest value the malloc counter can represent. |
| */ |
| maxMallocBytes = (ptrdiff_t(value) >= 0) ? value : size_t(-1) >> 1; |
| resetMallocBytes(); |
| for (ZonesIter zone(rt, WithAtoms); !zone.done(); zone.next()) |
| zone->setGCMaxMallocBytes(value); |
| } |
| |
| void |
| GCRuntime::resetMallocBytes() |
| { |
| mallocBytesUntilGC = ptrdiff_t(maxMallocBytes); |
| mallocGCTriggered = false; |
| } |
| |
| void |
| GCRuntime::updateMallocCounter(JS::Zone* zone, size_t nbytes) |
| { |
| mallocBytesUntilGC -= ptrdiff_t(nbytes); |
| if (MOZ_UNLIKELY(isTooMuchMalloc())) |
| onTooMuchMalloc(); |
| else if (zone) |
| zone->updateMallocCounter(nbytes); |
| } |
| |
| void |
| GCRuntime::onTooMuchMalloc() |
| { |
| if (!mallocGCTriggered) |
| mallocGCTriggered = triggerGC(JS::gcreason::TOO_MUCH_MALLOC); |
| } |
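| |
| // Worked example with illustrative numbers: after setMaxMallocBytes(128 MB), |
| // mallocBytesUntilGC starts at 128 MB. Each updateMallocCounter(zone, nbytes) |
| // call subtracts nbytes; once the counter reaches zero or below, |
| // isTooMuchMalloc() becomes true and onTooMuchMalloc() requests a |
| // TOO_MUCH_MALLOC GC. mallocGCTriggered latches so that at most one GC is |
| // triggered until resetMallocBytes() runs again. |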
| |
| double |
| ZoneHeapThreshold::allocTrigger(bool highFrequencyGC) const |
| { |
| return (highFrequencyGC ? 0.85 : 0.9) * gcTriggerBytes(); |
| } |
| |
| /* static */ double |
| ZoneHeapThreshold::computeZoneHeapGrowthFactorForHeapSize(size_t lastBytes, |
| const GCSchedulingTunables& tunables, |
| const GCSchedulingState& state) |
| { |
| if (!tunables.isDynamicHeapGrowthEnabled()) |
| return 3.0; |
| |
| // For small zones, our collection heuristics do not matter much: favor |
| // something simple in this case. |
| if (lastBytes < 1 * 1024 * 1024) |
| return tunables.lowFrequencyHeapGrowth(); |
| |
|     // If GCs are not triggering in rapid succession, use a lower threshold so |
| // that we will collect garbage sooner. |
| if (!state.inHighFrequencyGCMode()) |
| return tunables.lowFrequencyHeapGrowth(); |
| |
| // The heap growth factor depends on the heap size after a GC and the GC |
| // frequency. For low frequency GCs (more than 1sec between GCs) we let |
| // the heap grow to 150%. For high frequency GCs we let the heap grow |
| // depending on the heap size: |
| // lastBytes < highFrequencyLowLimit: 300% |
| // lastBytes > highFrequencyHighLimit: 150% |
| // otherwise: linear interpolation between 300% and 150% based on lastBytes |
| |
| // Use shorter names to make the operation comprehensible. |
| double minRatio = tunables.highFrequencyHeapGrowthMin(); |
| double maxRatio = tunables.highFrequencyHeapGrowthMax(); |
| double lowLimit = tunables.highFrequencyLowLimitBytes(); |
| double highLimit = tunables.highFrequencyHighLimitBytes(); |
| |
| if (lastBytes <= lowLimit) |
| return maxRatio; |
| |
| if (lastBytes >= highLimit) |
| return minRatio; |
| |
| double factor = maxRatio - ((maxRatio - minRatio) * ((lastBytes - lowLimit) / |
| (highLimit - lowLimit))); |
| MOZ_ASSERT(factor >= minRatio); |
| MOZ_ASSERT(factor <= maxRatio); |
| return factor; |
| } |
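| |
| // Worked example with illustrative tunable values: with |
| // highFrequencyHeapGrowthMin = 1.5, highFrequencyHeapGrowthMax = 3.0, |
| // highFrequencyLowLimitBytes = 100 MB and highFrequencyHighLimitBytes = 500 MB, |
| // a zone with lastBytes = 300 MB in high frequency mode gets |
| //   factor = 3.0 - (3.0 - 1.5) * (200 MB / 400 MB) = 2.25 |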
| |
| /* static */ size_t |
| ZoneHeapThreshold::computeZoneTriggerBytes(double growthFactor, size_t lastBytes, |
| JSGCInvocationKind gckind, |
| const GCSchedulingTunables& tunables, |
| const AutoLockGC& lock) |
| { |
| size_t base = gckind == GC_SHRINK |
| ? Max(lastBytes, tunables.minEmptyChunkCount(lock) * ChunkSize) |
| : Max(lastBytes, tunables.gcZoneAllocThresholdBase()); |
| double trigger = double(base) * growthFactor; |
| return size_t(Min(double(tunables.gcMaxBytes()), trigger)); |
| } |
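| |
| // Worked example with illustrative values: for a non-shrinking GC where |
| // lastBytes = 40 MB exceeds gcZoneAllocThresholdBase() and the growth factor |
| // is 2.25, the trigger is 40 MB * 2.25 = 90 MB, clamped to gcMaxBytes() if |
| // that is smaller. |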
| |
| void |
| ZoneHeapThreshold::updateAfterGC(size_t lastBytes, JSGCInvocationKind gckind, |
| const GCSchedulingTunables& tunables, |
| const GCSchedulingState& state, const AutoLockGC& lock) |
| { |
| gcHeapGrowthFactor_ = computeZoneHeapGrowthFactorForHeapSize(lastBytes, tunables, state); |
| gcTriggerBytes_ = computeZoneTriggerBytes(gcHeapGrowthFactor_, lastBytes, gckind, tunables, |
| lock); |
| } |
| |
| void |
| ZoneHeapThreshold::updateForRemovedArena(const GCSchedulingTunables& tunables) |
| { |
| size_t amount = ArenaSize * gcHeapGrowthFactor_; |
| |
| MOZ_ASSERT(amount > 0); |
| MOZ_ASSERT(gcTriggerBytes_ >= amount); |
| |
| if (gcTriggerBytes_ - amount < tunables.gcZoneAllocThresholdBase() * gcHeapGrowthFactor_) |
| return; |
| |
| gcTriggerBytes_ -= amount; |
| } |
| |
| void |
| GCMarker::delayMarkingArena(ArenaHeader* aheader) |
| { |
| if (aheader->hasDelayedMarking) { |
| /* Arena already scheduled to be marked later */ |
| return; |
| } |
| aheader->setNextDelayedMarking(unmarkedArenaStackTop); |
| unmarkedArenaStackTop = aheader; |
| markLaterArenas++; |
| } |
| |
| void |
| GCMarker::delayMarkingChildren(const void* thing) |
| { |
| const TenuredCell* cell = TenuredCell::fromPointer(thing); |
| cell->arenaHeader()->markOverflow = 1; |
| delayMarkingArena(cell->arenaHeader()); |
| } |
| |
| inline void |
| ArenaLists::prepareForIncrementalGC(JSRuntime* rt) |
| { |
| for (auto i : AllAllocKinds()) { |
| FreeList* freeList = &freeLists[i]; |
| if (!freeList->isEmpty()) { |
| ArenaHeader* aheader = freeList->arenaHeader(); |
| aheader->allocatedDuringIncremental = true; |
| rt->gc.marker.delayMarkingArena(aheader); |
| } |
| } |
| } |
| |
| /* Compacting GC */ |
| |
| bool |
| GCRuntime::shouldCompact() |
| { |
|     // Compact on shrinking GC if enabled, but skip compacting in incremental |
|     // GCs if we have animated within the last second. |
| return invocationKind == GC_SHRINK && isCompactingGCEnabled() && |
| (!isIncremental || rt->lastAnimationTime + PRMJ_USEC_PER_SEC < PRMJ_Now()); |
| } |
| |
| void |
| GCRuntime::disableCompactingGC() |
| { |
| MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt)); |
| ++compactingDisabledCount; |
| } |
| |
| void |
| GCRuntime::enableCompactingGC() |
| { |
| MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt)); |
| MOZ_ASSERT(compactingDisabledCount > 0); |
| --compactingDisabledCount; |
| } |
| |
| bool |
| GCRuntime::isCompactingGCEnabled() const |
| { |
| MOZ_ASSERT(CurrentThreadCanAccessRuntime(rt)); |
| return compactingEnabled && compactingDisabledCount == 0; |
| } |
| |
| AutoDisableCompactingGC::AutoDisableCompactingGC(JSRuntime* rt) |
| : gc(rt->gc) |
| { |
| gc.disableCompactingGC(); |
| } |
| |
| AutoDisableCompactingGC::~AutoDisableCompactingGC() |
| { |
| gc.enableCompactingGC(); |
| } |
| |
| static bool |
| CanRelocateZone(Zone* zone) |
| { |
| return !zone->isAtomsZone() && !zone->isSelfHostingZone(); |
| } |
| |
| static bool |
| CanRelocateAllocKind(AllocKind kind) |
| { |
| return IsObjectAllocKind(kind); |
| } |
| |
| size_t ArenaHeader::countFreeCells() |
| { |
| size_t count = 0; |
| size_t thingSize = getThingSize(); |
| FreeSpan firstSpan(getFirstFreeSpan()); |
| for (const FreeSpan* span = &firstSpan; !span->isEmpty(); span = span->nextSpan()) |
| count += span->length(thingSize); |
| return count; |
| } |
| |
| size_t ArenaHeader::countUsedCells() |
| { |
| return Arena::thingsPerArena(getThingSize()) - countFreeCells(); |
| } |
| |
| ArenaHeader* |
| ArenaList::removeRemainingArenas(ArenaHeader** arenap) |
| { |
| // This is only ever called to remove arenas that are after the cursor, so |
| // we don't need to update it. |
| #ifdef DEBUG |
| for (ArenaHeader* arena = *arenap; arena; arena = arena->next) |
| MOZ_ASSERT(cursorp_ != &arena->next); |
| #endif |
| ArenaHeader* remainingArenas = *arenap; |
| *arenap = nullptr; |
| check(); |
| return remainingArenas; |
| } |
| |
| static bool |
| ShouldRelocateAllArenas(JS::gcreason::Reason reason) |
| { |
| return reason == JS::gcreason::DEBUG_GC; |
| } |
| |
| /* |
| * Choose which arenas to relocate all cells from. Return an arena cursor that |
| * can be passed to removeRemainingArenas(). |
| */ |
| ArenaHeader** |
| ArenaList::pickArenasToRelocate(size_t& arenaTotalOut, size_t& relocTotalOut) |
| { |
| // Relocate the greatest number of arenas such that the number of used cells |
| // in relocated arenas is less than or equal to the number of free cells in |
| // unrelocated arenas. In other words we only relocate cells we can move |
|     // into existing arenas, and we choose the least full arenas to relocate. |
| // |
| // This is made easier by the fact that the arena list has been sorted in |
| // descending order of number of used cells, so we will always relocate a |
| // tail of the arena list. All we need to do is find the point at which to |
| // start relocating. |
| |
| check(); |
| |
| if (isCursorAtEnd()) |
| return nullptr; |
| |
| ArenaHeader** arenap = cursorp_; // Next arena to consider for relocation. |
| size_t previousFreeCells = 0; // Count of free cells before arenap. |
| size_t followingUsedCells = 0; // Count of used cells after arenap. |
| size_t fullArenaCount = 0; // Number of full arenas (not relocated). |
| size_t nonFullArenaCount = 0; // Number of non-full arenas (considered for relocation). |
| size_t arenaIndex = 0; // Index of the next arena to consider. |
| |
| for (ArenaHeader* arena = head_; arena != *cursorp_; arena = arena->next) |
| fullArenaCount++; |
| |
| for (ArenaHeader* arena = *cursorp_; arena; arena = arena->next) { |
| followingUsedCells += arena->countUsedCells(); |
| nonFullArenaCount++; |
| } |
| |
| mozilla::DebugOnly<size_t> lastFreeCells(0); |
| size_t cellsPerArena = Arena::thingsPerArena((*arenap)->getThingSize()); |
| |
| while (*arenap) { |
| ArenaHeader* arena = *arenap; |
| if (followingUsedCells <= previousFreeCells) |
| break; |
| |
| size_t freeCells = arena->countFreeCells(); |
| size_t usedCells = cellsPerArena - freeCells; |
| followingUsedCells -= usedCells; |
| #ifdef DEBUG |
| MOZ_ASSERT(freeCells >= lastFreeCells); |
| lastFreeCells = freeCells; |
| #endif |
| previousFreeCells += freeCells; |
| arenap = &arena->next; |
| arenaIndex++; |
| } |
| |
| size_t relocCount = nonFullArenaCount - arenaIndex; |
| MOZ_ASSERT(relocCount < nonFullArenaCount); |
| MOZ_ASSERT((relocCount == 0) == (!*arenap)); |
| arenaTotalOut += fullArenaCount + nonFullArenaCount; |
| relocTotalOut += relocCount; |
| |
| return arenap; |
| } |
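| |
| // Worked example with illustrative numbers: suppose the arenas after the |
| // cursor hold 40, 30, 10 and 5 used cells out of 50 cells per arena (10, 20, |
| // 40 and 45 free). After keeping the first two we have previousFreeCells = 30 |
| // and followingUsedCells = 15, so the loop stops there and the last two |
| // arenas (15 used cells) are relocated into the 30 free cells of the arenas |
| // we keep. |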
| |
| #ifdef DEBUG |
| inline bool |
| PtrIsInRange(const void* ptr, const void* start, size_t length) |
| { |
| return uintptr_t(ptr) - uintptr_t(start) < length; |
| } |
| #endif |
| |
| static TenuredCell* |
| AllocRelocatedCell(Zone* zone, AllocKind thingKind, size_t thingSize) |
| { |
| AutoEnterOOMUnsafeRegion oomUnsafe; |
| void* dstAlloc = zone->arenas.allocateFromFreeList(thingKind, thingSize); |
| if (!dstAlloc) |
| dstAlloc = GCRuntime::refillFreeListInGC(zone, thingKind); |
| if (!dstAlloc) { |
| // This can only happen in zeal mode or debug builds as we don't |
| // otherwise relocate more cells than we have existing free space |
| // for. |
| oomUnsafe.crash("Could not allocate new arena while compacting"); |
| } |
| return TenuredCell::fromPointer(dstAlloc); |
| } |
| |
| static void |
| RelocateCell(Zone* zone, TenuredCell* src, AllocKind thingKind, size_t thingSize) |
| { |
| JS::AutoSuppressGCAnalysis nogc(zone->runtimeFromMainThread()); |
| |
| // Allocate a new cell. |
| MOZ_ASSERT(zone == src->zone()); |
| TenuredCell* dst = AllocRelocatedCell(zone, thingKind, thingSize); |
| |
| // Copy source cell contents to destination. |
| memcpy(dst, src, thingSize); |
| |
| // Move any uid attached to the object. |
| src->zone()->transferUniqueId(dst, src); |
| |
| if (IsObjectAllocKind(thingKind)) { |
| JSObject* srcObj = static_cast<JSObject*>(static_cast<Cell*>(src)); |
| JSObject* dstObj = static_cast<JSObject*>(static_cast<Cell*>(dst)); |
| |
| if (srcObj->isNative()) { |
| NativeObject* srcNative = &srcObj->as<NativeObject>(); |
| NativeObject* dstNative = &dstObj->as<NativeObject>(); |
| |
| // Fixup the pointer to inline object elements if necessary. |
| if (srcNative->hasFixedElements()) |
| dstNative->setFixedElements(); |
| |
| // For copy-on-write objects that own their elements, fix up the |
| // owner pointer to point to the relocated object. |
| if (srcNative->denseElementsAreCopyOnWrite()) { |
| HeapPtrNativeObject& owner = dstNative->getElementsHeader()->ownerObject(); |
| if (owner == srcNative) |
| owner = dstNative; |
| } |
| } |
| |
| // Call object moved hook if present. |
| if (JSObjectMovedOp op = srcObj->getClass()->ext.objectMovedOp) |
| op(dstObj, srcObj); |
| |
| MOZ_ASSERT_IF(dstObj->isNative(), |
| !PtrIsInRange((const Value*)dstObj->as<NativeObject>().getDenseElements(), |
| src, thingSize)); |
| } |
| |
| // Copy the mark bits. |
| dst->copyMarkBitsFrom(src); |
| |
| // Mark source cell as forwarded and leave a pointer to the destination. |
| RelocationOverlay* overlay = RelocationOverlay::fromCell(src); |
| overlay->forwardTo(dst); |
| } |
| |
| static void |
| RelocateArena(ArenaHeader* aheader, SliceBudget& sliceBudget) |
| { |
| MOZ_ASSERT(aheader->allocated()); |
| MOZ_ASSERT(!aheader->hasDelayedMarking); |
| MOZ_ASSERT(!aheader->markOverflow); |
| MOZ_ASSERT(!aheader->allocatedDuringIncremental); |
| |
| Zone* zone = aheader->zone; |
| |
| AllocKind thingKind = aheader->getAllocKind(); |
| size_t thingSize = aheader->getThingSize(); |
| |
| for (ArenaCellIterUnderFinalize i(aheader); !i.done(); i.next()) { |
| RelocateCell(zone, i.getCell(), thingKind, thingSize); |
| sliceBudget.step(); |
| } |
| |
| #ifdef DEBUG |
| for (ArenaCellIterUnderFinalize i(aheader); !i.done(); i.next()) { |
| TenuredCell* src = i.getCell(); |
| MOZ_ASSERT(RelocationOverlay::isCellForwarded(src)); |
| TenuredCell* dest = Forwarded(src); |
| MOZ_ASSERT(src->isMarked(BLACK) == dest->isMarked(BLACK)); |
| MOZ_ASSERT(src->isMarked(GRAY) == dest->isMarked(GRAY)); |
| } |
| #endif |
| } |
| |
| static inline bool |
| ShouldProtectRelocatedArenas(JS::gcreason::Reason reason) |
| { |
| // For zeal mode collections we don't release the relocated arenas |
| // immediately. Instead we protect them and keep them around until the next |
| // collection so we can catch any stray accesses to them. |
| #ifdef DEBUG |
| return reason == JS::gcreason::DEBUG_GC; |
| #else |
| return false; |
| #endif |
| } |
| |
| /* |
| * Relocate all arenas identified by pickArenasToRelocate: for each arena, |
| * relocate each cell within it, then add it to a list of relocated arenas. |
| */ |
| ArenaHeader* |
| ArenaList::relocateArenas(ArenaHeader* toRelocate, ArenaHeader* relocated, SliceBudget& sliceBudget, |
| gcstats::Statistics& stats) |
| { |
| check(); |
| |
| while (ArenaHeader* arena = toRelocate) { |
| toRelocate = arena->next; |
| RelocateArena(arena, sliceBudget); |
| // Prepend to list of relocated arenas |
| arena->next = relocated; |
| relocated = arena; |
| stats.count(gcstats::STAT_ARENA_RELOCATED); |
| } |
| |
| check(); |
| |
| return relocated; |
| } |
| |
| // Skip compacting zones unless we can free a certain proportion of their GC |
| // heap memory. |
| static const double MIN_ZONE_RECLAIM_PERCENT = 2.0; |
| |
| static bool |
| IsOOMReason(JS::gcreason::Reason reason) |
| { |
| return reason == JS::gcreason::LAST_DITCH || |
| reason == JS::gcreason::MEM_PRESSURE; |
| } |
| |
| static bool |
| ShouldRelocateZone(size_t arenaCount, size_t relocCount, JS::gcreason::Reason reason) |
| { |
| if (relocCount == 0) |
| return false; |
| |
| if (IsOOMReason(reason)) |
| return true; |
| |
| return (relocCount * 100.0) / arenaCount >= MIN_ZONE_RECLAIM_PERCENT; |
| } |
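| |
| // Worked example: with arenaCount = 1000 and relocCount = 25 we would reclaim |
| // 2.5% of the zone's arenas, which meets MIN_ZONE_RECLAIM_PERCENT, so the |
| // zone is compacted; with relocCount = 15 (1.5%) it is skipped unless the GC |
| // was triggered by an OOM reason. |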
| |
| bool |
| ArenaLists::relocateArenas(Zone* zone, ArenaHeader*& relocatedListOut, JS::gcreason::Reason reason, |
| SliceBudget& sliceBudget, gcstats::Statistics& stats) |
| { |
| // This is only called from the main thread while we are doing a GC, so |
| // there is no need to lock. |
| MOZ_ASSERT(CurrentThreadCanAccessRuntime(runtime_)); |
| MOZ_ASSERT(runtime_->gc.isHeapCompacting()); |
| MOZ_ASSERT(!runtime_->gc.isBackgroundSweeping()); |
| |
| // Flush all the freeLists back into the arena headers |
| purge(); |
| checkEmptyFreeLists(); |
| |
| if (ShouldRelocateAllArenas(reason)) { |
| zone->prepareForCompacting(); |
| for (auto i : AllAllocKinds()) { |
| if (CanRelocateAllocKind(i)) { |
| ArenaList& al = arenaLists[i]; |
| ArenaHeader* allArenas = al.head(); |
| al.clear(); |
| relocatedListOut = al.relocateArenas(allArenas, relocatedListOut, sliceBudget, stats); |
| } |
| } |
| } else { |
| size_t arenaCount = 0; |
| size_t relocCount = 0; |
| AllAllocKindArray<ArenaHeader**> toRelocate; |
| |
| for (auto i : AllAllocKinds()) { |
| toRelocate[i] = nullptr; |
| if (CanRelocateAllocKind(i)) |
| toRelocate[i] = arenaLists[i].pickArenasToRelocate(arenaCount, relocCount); |
| } |
| |
| if (!ShouldRelocateZone(arenaCount, relocCount, reason)) |
| return false; |
| |
| zone->prepareForCompacting(); |
| for (auto i : AllAllocKinds()) { |
| if (toRelocate[i]) { |
| ArenaList& al = arenaLists[i]; |
| ArenaHeader* arenas = al.removeRemainingArenas(toRelocate[i]); |
| relocatedListOut = al.relocateArenas(arenas, relocatedListOut, sliceBudget, stats); |
| } |
| } |
| } |
| |
| // When we allocate new locations for cells, we use |
| // allocateFromFreeList(). Reset the free list again so that |
| // AutoCopyFreeListToArenasForGC doesn't complain that the free lists are |
| // different now. |
| purge(); |
| checkEmptyFreeLists(); |
| |
| return true; |
| } |
| |
| bool |
| GCRuntime::relocateArenas(Zone* zone, JS::gcreason::Reason reason, ArenaHeader*& relocatedListOut, |
| SliceBudget& sliceBudget) |
| { |
| gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT_MOVE); |
| |
| MOZ_ASSERT(!zone->isPreservingCode()); |
| MOZ_ASSERT(CanRelocateZone(zone)); |
| |
| jit::StopAllOffThreadCompilations(zone); |
| |
| if (!zone->arenas.relocateArenas(zone, relocatedListOut, reason, sliceBudget, stats)) |
| return false; |
| |
| #ifdef DEBUG |
| // Check that we did as much compaction as we should have. There |
| // should always be less than one arena's worth of free cells. |
| for (auto i : AllAllocKinds()) { |
| size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(i)); |
| if (CanRelocateAllocKind(i)) { |
| ArenaList& al = zone->arenas.arenaLists[i]; |
| size_t freeCells = 0; |
| for (ArenaHeader* arena = al.arenaAfterCursor(); arena; arena = arena->next) |
| freeCells += arena->countFreeCells(); |
| MOZ_ASSERT(freeCells < thingsPerArena); |
| } |
| } |
| #endif |
| |
| return true; |
| } |
| |
| void |
| MovingTracer::onObjectEdge(JSObject** objp) |
| { |
| if (IsForwarded(*objp)) |
| *objp = Forwarded(*objp); |
| } |
| |
| void |
| Zone::prepareForCompacting() |
| { |
| FreeOp* fop = runtimeFromMainThread()->defaultFreeOp(); |
| discardJitCode(fop); |
| } |
| |
| void |
| GCRuntime::sweepTypesAfterCompacting(Zone* zone) |
| { |
| FreeOp* fop = rt->defaultFreeOp(); |
| zone->beginSweepTypes(fop, rt->gc.releaseObservedTypes && !zone->isPreservingCode()); |
| |
| AutoClearTypeInferenceStateOnOOM oom(zone); |
| |
| for (ZoneCellIterUnderGC i(zone, AllocKind::SCRIPT); !i.done(); i.next()) { |
| JSScript* script = i.get<JSScript>(); |
| script->maybeSweepTypes(&oom); |
| } |
| |
| for (ZoneCellIterUnderGC i(zone, AllocKind::OBJECT_GROUP); !i.done(); i.next()) { |
| ObjectGroup* group = i.get<ObjectGroup>(); |
| group->maybeSweep(&oom); |
| } |
| |
| zone->types.endSweep(rt); |
| } |
| |
| void |
| GCRuntime::sweepZoneAfterCompacting(Zone* zone) |
| { |
| MOZ_ASSERT(zone->isCollecting()); |
| FreeOp* fop = rt->defaultFreeOp(); |
| sweepTypesAfterCompacting(zone); |
| zone->sweepBreakpoints(fop); |
| zone->sweepWeakMaps(); |
| |
| for (CompartmentsInZoneIter c(zone); !c.done(); c.next()) { |
| c->sweepInnerViews(); |
| c->sweepBaseShapeTable(); |
| c->sweepInitialShapeTable(); |
| c->objectGroups.sweep(fop); |
| c->sweepRegExps(); |
| c->sweepSavedStacks(); |
| c->sweepGlobalObject(fop); |
| c->sweepObjectPendingMetadata(); |
| c->sweepSelfHostingScriptSource(); |
| c->sweepDebugScopes(); |
| c->sweepJitCompartment(fop); |
| c->sweepNativeIterators(); |
| c->sweepTemplateObjects(); |
| } |
| } |
| |
| template <typename T> |
| static void |
| UpdateCellPointersTyped(MovingTracer* trc, ArenaHeader* arena, JS::TraceKind traceKind) |
| { |
| for (ArenaCellIterUnderGC i(arena); !i.done(); i.next()) { |
| T* cell = reinterpret_cast<T*>(i.getCell()); |
| cell->fixupAfterMovingGC(); |
| TraceChildren(trc, cell, traceKind); |
| } |
| } |
| |
| /* |
|  * Update the internal pointers for all cells in an arena. |
| */ |
| static void |
| UpdateCellPointers(MovingTracer* trc, ArenaHeader* arena) |
| { |
| AllocKind kind = arena->getAllocKind(); |
| JS::TraceKind traceKind = MapAllocToTraceKind(kind); |
| |
| switch (kind) { |
| case AllocKind::FUNCTION: |
| case AllocKind::FUNCTION_EXTENDED: |
| case AllocKind::OBJECT0: |
| case AllocKind::OBJECT0_BACKGROUND: |
| case AllocKind::OBJECT2: |
| case AllocKind::OBJECT2_BACKGROUND: |
| case AllocKind::OBJECT4: |
| case AllocKind::OBJECT4_BACKGROUND: |
| case AllocKind::OBJECT8: |
| case AllocKind::OBJECT8_BACKGROUND: |
| case AllocKind::OBJECT12: |
| case AllocKind::OBJECT12_BACKGROUND: |
| case AllocKind::OBJECT16: |
| case AllocKind::OBJECT16_BACKGROUND: |
| UpdateCellPointersTyped<JSObject>(trc, arena, traceKind); |
| return; |
| case AllocKind::SCRIPT: |
| UpdateCellPointersTyped<JSScript>(trc, arena, traceKind); |
| return; |
| case AllocKind::LAZY_SCRIPT: |
| UpdateCellPointersTyped<LazyScript>(trc, arena, traceKind); |
| return; |
| case AllocKind::SHAPE: |
| UpdateCellPointersTyped<Shape>(trc, arena, traceKind); |
| return; |
| case AllocKind::ACCESSOR_SHAPE: |
| UpdateCellPointersTyped<AccessorShape>(trc, arena, traceKind); |
| return; |
| case AllocKind::BASE_SHAPE: |
| UpdateCellPointersTyped<BaseShape>(trc, arena, traceKind); |
| return; |
| case AllocKind::OBJECT_GROUP: |
| UpdateCellPointersTyped<ObjectGroup>(trc, arena, traceKind); |
| return; |
| case AllocKind::JITCODE: |
| UpdateCellPointersTyped<jit::JitCode>(trc, arena, traceKind); |
| return; |
| default: |
| MOZ_CRASH("Invalid alloc kind for UpdateCellPointers"); |
| } |
| } |
| |
| namespace js { |
| namespace gc { |
| |
| struct ArenasToUpdate |
| { |
| enum KindsToUpdate { |
| FOREGROUND = 1, |
| BACKGROUND = 2, |
| ALL = FOREGROUND | BACKGROUND |
| }; |
| ArenasToUpdate(Zone* zone, KindsToUpdate kinds); |
| bool done() { return kind == AllocKind::LIMIT; } |
| ArenaHeader* getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned max); |
| |
| private: |
| KindsToUpdate kinds; // Selects which thing kinds to iterate |
| Zone* zone; // Zone to process |
| AllocKind kind; // Current alloc kind to process |
| ArenaHeader* arena; // Next arena to process |
| |
| AllocKind nextAllocKind(AllocKind i) { return AllocKind(uint8_t(i) + 1); } |
| bool shouldProcessKind(AllocKind kind); |
| ArenaHeader* next(AutoLockHelperThreadState& lock); |
| }; |
| |
| bool ArenasToUpdate::shouldProcessKind(AllocKind kind) |
| { |
| MOZ_ASSERT(IsValidAllocKind(kind)); |
| |
| // GC things that do not contain JSObject pointers don't need updating. |
| if (kind == AllocKind::FAT_INLINE_STRING || |
| kind == AllocKind::STRING || |
| kind == AllocKind::EXTERNAL_STRING || |
| kind == AllocKind::SYMBOL) |
| { |
| return false; |
| } |
| |
| // We try to update as many GC things in parallel as we can, but there are |
| // kinds for which this might not be safe: |
| // - we assume JSObjects that are foreground finalized are not safe to |
| // update in parallel |
| // - updating a shape touches child shapes in fixupShapeTreeAfterMovingGC() |
| if (js::gc::IsBackgroundFinalized(kind) && |
| kind != AllocKind::SHAPE && |
| kind != AllocKind::ACCESSOR_SHAPE) |
| { |
| return (kinds & BACKGROUND) != 0; |
| } else { |
| return (kinds & FOREGROUND) != 0; |
| } |
| } |
| |
| ArenasToUpdate::ArenasToUpdate(Zone* zone, KindsToUpdate kinds) |
| : kinds(kinds), zone(zone), kind(AllocKind::FIRST), arena(nullptr) |
| { |
| MOZ_ASSERT(zone->isGCCompacting()); |
| MOZ_ASSERT(kinds && !(kinds & ~ALL)); |
| } |
| |
| ArenaHeader* |
| ArenasToUpdate::next(AutoLockHelperThreadState& lock) |
| { |
| // Find the next arena to update. |
| // |
| // This iterates through the GC thing kinds filtered by shouldProcessKind(), |
|     // and then through the arenas of that kind. All state is held in the |
| // object and we just return when we find an arena. |
| |
| for (; kind < AllocKind::LIMIT; kind = nextAllocKind(kind)) { |
| if (shouldProcessKind(kind)) { |
| if (!arena) |
| arena = zone->arenas.getFirstArena(kind); |
| else |
| arena = arena->next; |
| if (arena) |
| return arena; |
| } |
| } |
| |
| MOZ_ASSERT(!arena); |
| MOZ_ASSERT(done()); |
| return nullptr; |
| } |
| |
| ArenaHeader* |
| ArenasToUpdate::getArenasToUpdate(AutoLockHelperThreadState& lock, unsigned count) |
| { |
| if (done()) |
| return nullptr; |
| |
| ArenaHeader* head = nullptr; |
| ArenaHeader* tail = nullptr; |
| |
| for (unsigned i = 0; i < count; ++i) { |
| ArenaHeader* arena = next(lock); |
| if (!arena) |
| break; |
| |
| if (tail) |
| tail->setNextArenaToUpdate(arena); |
| else |
| head = arena; |
| tail = arena; |
| } |
| |
| return head; |
| } |
| |
| struct UpdateCellPointersTask : public GCParallelTask |
| { |
| // Number of arenas to update in one block. |
| #ifdef DEBUG |
| static const unsigned ArenasToProcess = 16; |
| #else |
| static const unsigned ArenasToProcess = 256; |
| #endif |
| |
| UpdateCellPointersTask() : rt_(nullptr), source_(nullptr), arenaList_(nullptr) {} |
| void init(JSRuntime* rt, ArenasToUpdate* source, AutoLockHelperThreadState& lock); |
| ~UpdateCellPointersTask() override { join(); } |
| |
| private: |
| JSRuntime* rt_; |
| ArenasToUpdate* source_; |
| ArenaHeader* arenaList_; |
| |
| virtual void run() override; |
| void getArenasToUpdate(AutoLockHelperThreadState& lock); |
| void updateArenas(); |
| }; |
| |
| void |
| UpdateCellPointersTask::init(JSRuntime* rt, ArenasToUpdate* source, AutoLockHelperThreadState& lock) |
| { |
| rt_ = rt; |
| source_ = source; |
| getArenasToUpdate(lock); |
| } |
| |
| void |
| UpdateCellPointersTask::getArenasToUpdate(AutoLockHelperThreadState& lock) |
| { |
| arenaList_ = source_->getArenasToUpdate(lock, ArenasToProcess); |
| } |
| |
| void |
| UpdateCellPointersTask::updateArenas() |
| { |
| MovingTracer trc(rt_); |
| for (ArenaHeader* arena = arenaList_; |
| arena; |
| arena = arena->getNextArenaToUpdateAndUnlink()) |
| { |
| UpdateCellPointers(&trc, arena); |
| } |
| arenaList_ = nullptr; |
| } |
| |
| /* virtual */ void |
| UpdateCellPointersTask::run() |
| { |
| MOZ_ASSERT(!HelperThreadState().isLocked()); |
| while (arenaList_) { |
| updateArenas(); |
| { |
| AutoLockHelperThreadState lock; |
| getArenasToUpdate(lock); |
| } |
| } |
| } |
| |
| } // namespace gc |
| } // namespace js |
| |
| void |
| GCRuntime::updateAllCellPointersParallel(MovingTracer* trc, Zone* zone) |
| { |
| AutoDisableProxyCheck noProxyCheck(rt); // These checks assert when run in parallel. |
| |
| const size_t minTasks = 2; |
| const size_t maxTasks = 8; |
| size_t targetTaskCount = HelperThreadState().cpuCount / 2; |
| size_t taskCount = Min(Max(targetTaskCount, minTasks), maxTasks); |
| UpdateCellPointersTask bgTasks[maxTasks]; |
| UpdateCellPointersTask fgTask; |
| |
| ArenasToUpdate fgArenas(zone, ArenasToUpdate::FOREGROUND); |
| ArenasToUpdate bgArenas(zone, ArenasToUpdate::BACKGROUND); |
| |
| unsigned tasksStarted = 0; |
| { |
| AutoLockHelperThreadState lock; |
| unsigned i; |
| for (i = 0; i < taskCount && !bgArenas.done(); ++i) { |
| bgTasks[i].init(rt, &bgArenas, lock); |
| startTask(bgTasks[i], gcstats::PHASE_COMPACT_UPDATE_CELLS); |
| } |
| tasksStarted = i; |
| |
| fgTask.init(rt, &fgArenas, lock); |
| } |
| |
| fgTask.runFromMainThread(rt); |
| |
| { |
| AutoLockHelperThreadState lock; |
| for (unsigned i = 0; i < tasksStarted; ++i) |
| joinTask(bgTasks[i], gcstats::PHASE_COMPACT_UPDATE_CELLS); |
| } |
| } |
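| |
| // Worked example: on a machine where HelperThreadState().cpuCount is 8, |
| // targetTaskCount is 4, which already lies between minTasks (2) and maxTasks |
| // (8), so four background tasks process BACKGROUND arenas while the main |
| // thread runs fgTask over the FOREGROUND arenas. |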
| |
| void |
| GCRuntime::updateAllCellPointersSerial(MovingTracer* trc, Zone* zone) |
| { |
| UpdateCellPointersTask task; |
| { |
| AutoLockHelperThreadState lock; |
| ArenasToUpdate allArenas(zone, ArenasToUpdate::ALL); |
| task.init(rt, &allArenas, lock); |
| } |
| task.runFromMainThread(rt); |
| } |
| |
| /* |
| * Update pointers to relocated cells by doing a full heap traversal and sweep. |
| * |
| * The latter is necessary to update weak references which are not marked as |
| * part of the traversal. |
| */ |
| void |
| GCRuntime::updatePointersToRelocatedCells(Zone* zone) |
| { |
| MOZ_ASSERT(zone->isGCCompacting()); |
| MOZ_ASSERT(rt->currentThreadHasExclusiveAccess()); |
| |
| gcstats::AutoPhase ap(stats, gcstats::PHASE_COMPACT_UPDATE); |
| MovingTracer trc(rt); |
| |
| // Fixup compartment global pointers as these get accessed during marking. |
| for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) |
| comp->fixupAfterMovingGC(); |
| JSCompartment::fixupCrossCompartmentWrappersAfterMovingGC(&trc); |
| |
| // Mark roots to update them. |
| { |
| markRuntime(&trc, MarkRuntime); |
| |
| gcstats::AutoPhase ap(stats, gcstats::PHASE_MARK_ROOTS); |
| Debugger::markAll(&trc); |
| Debugger::markIncomingCrossCompartmentEdges(&trc); |
| |
| WeakMapBase::markAll(zone, &trc); |
| for (CompartmentsInZoneIter c(zone); !c.done(); c.next()) { |
| c->trace(&trc); |
| if (c->watchpointMap) |
| c->watchpointMap->markAll(&trc); |
| } |
| |
| // Mark all gray roots, making sure we call the trace callback to get the |
| // current set. |
| if (JSTraceDataOp op = grayRootTracer.op) |
| (*op)(&trc, grayRootTracer.data); |
| } |
| |
| // Sweep everything to fix up weak pointers |
| WatchpointMap::sweepAll(rt); |
| Debugger::sweepAll(rt->defaultFreeOp()); |
| jit::JitRuntime::SweepJitcodeGlobalTable(rt); |
| rt->gc.sweepZoneAfterCompacting(zone); |
| |
| // Type inference may put more blocks here to free. |
| freeLifoAlloc.freeAll(); |
| |
| // Clear runtime caches that can contain cell pointers. |
| // TODO: Should possibly just call purgeRuntime() here. |
| rt->newObjectCache.purge(); |
| rt->nativeIterCache.purge(); |
| |
| // Call callbacks to get the rest of the system to fixup other untraced pointers. |
| callWeakPointerZoneGroupCallbacks(); |
| for (CompartmentsInZoneIter comp(zone); !comp.done(); comp.next()) |
| callWeakPointerCompartmentCallbacks(comp); |
| |
| // Finally, iterate through all cells that can contain JSObject pointers to |
| // update them. Since updating each cell is independent we try to |
| // parallelize this as much as possible. |
| if (CanUseExtraThreads()) |
| updateAllCellPointersParallel(&trc, zone); |
| else |
| updateAllCellPointersSerial(&trc, zone); |
| } |
| |
| void |
| GCRuntime::protectAndHoldArenas(ArenaHeader* arenaList) |
| { |
| for (ArenaHeader* arena = arenaList; arena; ) { |
| MOZ_ASSERT(arena->allocated()); |
| ArenaHeader* next = arena->next; |
| if (!next) { |
| // Prepend to hold list before we protect the memory. |
| arena->next = relocatedArenasToRelease; |
| relocatedArenasToRelease = arenaList; |
| } |
| ProtectPages(arena, ArenaSize); |
| arena = next; |
| } |
| } |
| |
| void |
| GCRuntime::unprotectHeldRelocatedArenas() |
| { |
| for (ArenaHeader* arena = relocatedArenasToRelease; arena; arena = arena->next) { |
| UnprotectPages(arena, ArenaSize); |
| MOZ_ASSERT(arena->allocated()); |
| } |
| } |
| |
| void |
| GCRuntime::releaseRelocatedArenas(ArenaHeader* arenaList) |
| { |
| AutoLockGC lock(rt); |
| releaseRelocatedArenasWithoutUnlocking(arenaList, lock); |
| expireChunksAndArenas(true, lock); |
| } |
| |
| void |
| GCRuntime::releaseRelocatedArenasWithoutUnlocking(ArenaHeader* arenaList, const AutoLockGC& lock) |
| { |
| // Release the relocated arenas, now containing only forwarding pointers |
| unsigned count = 0; |
| while (arenaList) { |
| ArenaHeader* aheader = arenaList; |
| arenaList = arenaList->next; |
| |
| // Clear the mark bits |
| aheader->unmarkAll(); |
| |
| // Mark arena as empty |
| AllocKind thingKind = aheader->getAllocKind(); |
| size_t thingSize = aheader->getThingSize(); |
| Arena* arena = aheader->getArena(); |
| FreeSpan fullSpan; |
| fullSpan.initFinal(arena->thingsStart(thingKind), arena->thingsEnd() - thingSize, thingSize); |
| aheader->setFirstFreeSpan(&fullSpan); |
| |
| #if defined(JS_CRASH_DIAGNOSTICS) || defined(JS_GC_ZEAL) |
| JS_POISON(reinterpret_cast<void*>(arena->thingsStart(thingKind)), |
| JS_MOVED_TENURED_PATTERN, Arena::thingsSpan(thingSize)); |
| #endif |
| |
| releaseArena(aheader, lock); |
| ++count; |
| } |
| } |
| |
| // In debug mode we don't always release relocated arenas straight away. |
| // Sometimes we protect them instead and hold onto them until the next GC sweep |
| // phase to catch any pointers to them that didn't get forwarded. |
| |
| void |
| GCRuntime::releaseHeldRelocatedArenas() |
| { |
| #ifdef DEBUG |
| unprotectHeldRelocatedArenas(); |
| releaseRelocatedArenas(relocatedArenasToRelease); |
| relocatedArenasToRelease = nullptr; |
| #endif |
| } |
| |
| void |
| GCRuntime::releaseHeldRelocatedArenasWithoutUnlocking(const AutoLockGC& lock) |
| { |
| #ifdef DEBUG |
| unprotectHeldRelocatedArenas(); |
| releaseRelocatedArenasWithoutUnlocking(relocatedArenasToRelease, lock); |
| relocatedArenasToRelease = nullptr; |
| #endif |
| } |
| |
| void |
| ReleaseArenaList(JSRuntime* rt, ArenaHeader* aheader, const AutoLockGC& lock) |
| { |
| ArenaHeader* next; |
| for (; aheader; aheader = next) { |
| next = aheader->next; |
| rt->gc.releaseArena(aheader, lock); |
| } |
| } |
| |
| ArenaLists::~ArenaLists() |
| { |
| AutoLockGC lock(runtime_); |
| |
| for (auto i : AllAllocKinds()) { |
| /* |
| * We can only call this during the shutdown after the last GC when |
| * the background finalization is disabled. |
| */ |
| MOZ_ASSERT(backgroundFinalizeState[i] == BFS_DONE); |
| ReleaseArenaList(runtime_, arenaLists[i].head(), lock); |
| } |
| ReleaseArenaList(runtime_, incrementalSweptArenas.head(), lock); |
| |
| for (auto i : ObjectAllocKinds()) |
| ReleaseArenaList(runtime_, savedObjectArenas[i].head(), lock); |
| ReleaseArenaList(runtime_, savedEmptyObjectArenas, lock); |
| } |
| |
| void |
| ArenaLists::finalizeNow(FreeOp* fop, const FinalizePhase& phase) |
| { |
| gcstats::AutoPhase ap(fop->runtime()->gc.stats, phase.statsPhase); |
| for (unsigned i = 0; i < phase.length; ++i) |
| finalizeNow(fop, phase.kinds[i], RELEASE_ARENAS, nullptr); |
| } |
| |
| void |
| ArenaLists::finalizeNow(FreeOp* fop, AllocKind thingKind, KeepArenasEnum keepArenas, ArenaHeader** empty) |
| { |
| MOZ_ASSERT(!IsBackgroundFinalized(thingKind)); |
| forceFinalizeNow(fop, thingKind, keepArenas, empty); |
| } |
| |
| void |
| ArenaLists::forceFinalizeNow(FreeOp* fop, AllocKind thingKind, KeepArenasEnum keepArenas, ArenaHeader** empty) |
| { |
| MOZ_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
| |
| ArenaHeader* arenas = arenaLists[thingKind].head(); |
| if (!arenas) |
| return; |
| arenaLists[thingKind].clear(); |
| |
| size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(thingKind)); |
| SortedArenaList finalizedSorted(thingsPerArena); |
| |
| auto unlimited = SliceBudget::unlimited(); |
| FinalizeArenas(fop, &arenas, finalizedSorted, thingKind, unlimited, keepArenas); |
| MOZ_ASSERT(!arenas); |
| |
| if (empty) { |
| MOZ_ASSERT(keepArenas == KEEP_ARENAS); |
| finalizedSorted.extractEmpty(empty); |
| } |
| |
| arenaLists[thingKind] = finalizedSorted.toArenaList(); |
| } |
| |
| void |
| ArenaLists::queueForForegroundSweep(FreeOp* fop, const FinalizePhase& phase) |
| { |
| gcstats::AutoPhase ap(fop->runtime()->gc.stats, phase.statsPhase); |
| for (unsigned i = 0; i < phase.length; ++i) |
| queueForForegroundSweep(fop, phase.kinds[i]); |
| } |
| |
| void |
| ArenaLists::queueForForegroundSweep(FreeOp* fop, AllocKind thingKind) |
| { |
| MOZ_ASSERT(!IsBackgroundFinalized(thingKind)); |
| MOZ_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
| MOZ_ASSERT(!arenaListsToSweep[thingKind]); |
| |
| arenaListsToSweep[thingKind] = arenaLists[thingKind].head(); |
| arenaLists[thingKind].clear(); |
| } |
| |
| void |
| ArenaLists::queueForBackgroundSweep(FreeOp* fop, const FinalizePhase& phase) |
| { |
| gcstats::AutoPhase ap(fop->runtime()->gc.stats, phase.statsPhase); |
| for (unsigned i = 0; i < phase.length; ++i) |
| queueForBackgroundSweep(fop, phase.kinds[i]); |
| } |
| |
| inline void |
| ArenaLists::queueForBackgroundSweep(FreeOp* fop, AllocKind thingKind) |
| { |
| MOZ_ASSERT(IsBackgroundFinalized(thingKind)); |
| |
| ArenaList* al = &arenaLists[thingKind]; |
| if (al->isEmpty()) { |
| MOZ_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
| return; |
| } |
| |
| MOZ_ASSERT(backgroundFinalizeState[thingKind] == BFS_DONE); |
| |
| arenaListsToSweep[thingKind] = al->head(); |
| al->clear(); |
| backgroundFinalizeState[thingKind] = BFS_RUN; |
| } |
| |
| /*static*/ void |
| ArenaLists::backgroundFinalize(FreeOp* fop, ArenaHeader* listHead, ArenaHeader** empty) |
| { |
| MOZ_ASSERT(listHead); |
| MOZ_ASSERT(empty); |
| |
| AllocKind thingKind = listHead->getAllocKind(); |
| Zone* zone = listHead->zone; |
| |
| size_t thingsPerArena = Arena::thingsPerArena(Arena::thingSize(thingKind)); |
| SortedArenaList finalizedSorted(thingsPerArena); |
| |
| auto unlimited = SliceBudget::unlimited(); |
| FinalizeArenas(fop, &listHead, finalizedSorted, thingKind, unlimited, KEEP_ARENAS); |
| MOZ_ASSERT(!listHead); |
| |
| finalizedSorted.extractEmpty(empty); |
| |
| // When arenas are queued for background finalization, all arenas are moved |
| // to arenaListsToSweep[], leaving the arenaLists[] empty. However, new |
| // arenas may be allocated before background finalization finishes; now that |
| // finalization is complete, we want to merge these lists back together. |
| ArenaLists* lists = &zone->arenas; |
| ArenaList* al = &lists->arenaLists[thingKind]; |
| |
| // Flatten |finalizedSorted| into a regular ArenaList. |
| ArenaList finalized = finalizedSorted.toArenaList(); |
| |
| // We must take the GC lock to be able to safely modify the ArenaList; |
| // however, this does not by itself make the changes visible to all threads, |
| // as not all threads take the GC lock to read the ArenaLists. |
| // That safety is provided by the ReleaseAcquire memory ordering of the |
| // background finalize state, which we explicitly set as the final step. |
| { |
| AutoLockGC lock(fop->runtime()); |
| MOZ_ASSERT(lists->backgroundFinalizeState[thingKind] == BFS_RUN); |
| |
| // Join |al| and |finalized| into a single list. |
| *al = finalized.insertListWithCursorAtEnd(*al); |
| |
| lists->arenaListsToSweep[thingKind] = nullptr; |
| } |
| |
| lists->backgroundFinalizeState[thingKind] = BFS_DONE; |
| } |
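| |
| // Illustrative sketch (not engine code) of the pairing described above: the |
| // release store of BFS_DONE publishes the merged list, and an allocating |
| // thread performs the matching acquire load before touching it, assuming |
| // backgroundFinalizeState uses ReleaseAcquire ordering as the comment states: |
| // |
| //   if (lists->backgroundFinalizeState[thingKind] == BFS_DONE) { |
| //       // Everything written to arenaLists[thingKind] before the BFS_DONE |
| //       // store above is now visible, so allocation can proceed. |
| //   } |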
| |
| void |
| ArenaLists::queueForegroundObjectsForSweep(FreeOp* fop) |
| { |
| gcstats::AutoPhase ap(fop->runtime()->gc.stats, gcstats::PHASE_SWEEP_OBJECT); |
| |
| #ifdef DEBUG |
| for (auto i : ObjectAllocKinds()) { // Braces needed to appease MSVC 2013. |
| MOZ_ASSERT(savedObjectArenas[i].isEmpty()); |
| } |
| MOZ_ASSERT(savedEmptyObjectArenas == nullptr); |
| #endif |
| |
| // Foreground finalized objects must be finalized at the beginning of the |
| // sweep phase, before control can return to the mutator. Otherwise, |
| // mutator behavior can resurrect certain objects whose references would |
| // otherwise have been erased by the finalizer. |
| finalizeNow(fop, AllocKind::OBJECT0, KEEP_ARENAS, &savedEmptyObjectArenas); |
| finalizeNow(fop, AllocKind::OBJECT2, KEEP_ARENAS, &savedEmptyObjectArenas); |
| finalizeNow(fop, AllocKind::OBJECT4, KEEP_ARENAS, &savedEmptyObjectArenas); |
| finalizeNow(fop, AllocKind::OBJECT8, KEEP_ARENAS, &savedEmptyObjectArenas); |
| finalizeNow(fop, AllocKind::OBJECT12, KEEP_ARENAS, &savedEmptyObjectArenas); |
| finalizeNow(fop, AllocKind::OBJECT16, KEEP_ARENAS, &savedEmptyObjectArenas); |
| |
| // Prevent the arenas from having new objects allocated into them. We need |
| // to know which objects are marked while we incrementally sweep dead |
| // references from type information. |
| savedObjectArenas[AllocKind::OBJECT0] = arenaLists[AllocKind::OBJECT0].copyAndClear(); |
| savedObjectArenas[AllocKind::OBJECT2] = arenaLists[AllocKind::OBJECT2].copyAndClear(); |
| savedObjectArenas[AllocKind::OBJECT4] = arenaLists[AllocKind::OBJECT4].copyAndClear(); |
| savedObjectArenas[AllocKind::OBJECT8] = arenaLists[AllocKind::OBJECT8].copyAndClear(); |
| savedObjectArenas[AllocKind::OBJECT12] = arenaLists[AllocKind::OBJECT12].copyAndClear(); |
| savedObjectArenas[AllocKind::OBJECT16] = arenaLists[AllocKind::OBJECT16].copyAndClear(); |
| } |
| |
| void |
| ArenaLists::mergeForegroundSweptObjectArenas() |
| { |
| AutoLockGC lock(runtime_); |
| ReleaseArenaList(runtime_, savedEmptyObjectArenas, lock); |
| savedEmptyObjectArenas = nullptr; |
| |
| mergeSweptArenas(AllocKind::OBJECT0); |
| mergeSweptArenas(AllocKind::OBJECT2); |
| mergeSweptArenas(AllocKind::OBJECT4); |
| mergeSweptArenas(AllocKind::OBJECT8); |
| mergeSweptArenas(AllocKind::OBJECT12); |
| mergeSweptArenas(AllocKind::OBJECT16); |
| } |
| |
| inline void |
| ArenaLists::mergeSweptArenas(AllocKind thingKind) |
| { |
| ArenaList* al = &arenaLists[thingKind]; |
| ArenaList* saved = &savedObjectArenas[thingKind]; |
| |
| *al = saved->insertListWithCursorAtEnd(*al); |
| saved->clear(); |
| } |
| |
| void |
| ArenaLists::queueForegroundThingsForSweep(FreeOp* fop) |
| { |
| gcShapeArenasToUpdate = arenaListsToSweep[AllocKind::SHAPE]; |
| gcAccessorShapeArenasToUpdate = arenaListsToSweep[AllocKind::ACCESSOR_SHAPE]; |
| gcObjectGroupArenasToUpdate = arenaListsToSweep[AllocKind::OBJECT_GROUP]; |
| gcScriptArenasToUpdate = arenaListsToSweep[AllocKind::SCRIPT]; |
| } |
| |
| /* static */ void* |
| GCRuntime::refillFreeListInGC(Zone* zone, AllocKind thingKind) |
| { |
| /* |
| * Called by compacting GC to refill a free list while we are in a GC. |
| */ |
| |
| MOZ_ASSERT(zone->arenas.freeLists[thingKind].isEmpty()); |
| mozilla::DebugOnly<JSRuntime*> rt = zone->runtimeFromMainThread(); |
| MOZ_ASSERT(rt->isHeapMajorCollecting()); |
| MOZ_ASSERT(!rt->gc.isBackgroundSweeping()); |
| |
| AutoMaybeStartBackgroundAllocation maybeStartBackgroundAllocation; |
| return zone->arenas.allocateFromArena(zone, thingKind, maybeStartBackgroundAllocation); |
| } |
| |
| SliceBudget::SliceBudget() |
| : timeBudget(UnlimitedTimeBudget), workBudget(UnlimitedWorkBudget) |
| { |
| makeUnlimited(); |
| |