Update v8 to 11.6.189.22

James Chen 2023-10-08 13:47:06 +08:00
parent b46f367b1d
commit d5d175b31e
1174 changed files with 214317 additions and 150081 deletions


@ -79,11 +79,6 @@ if(USE_SE_V8)
)
endif()
add_library(v8_inspector STATIC IMPORTED GLOBAL)
set_target_properties(v8_inspector PROPERTIES
IMPORTED_LOCATION ${platform_spec_path}/v8/libinspector.a
INTERFACE_INCLUDE_DIRECTORIES ${platform_spec_path}/include/v8
)
set(se_libs_name v8_monolith)
set(se_libs_include ${platform_spec_path}/include/v8)
endif()
@ -96,12 +91,6 @@ if(USE_WEBSOCKET_SERVER)
)
endif()
if(USE_SE_V8 AND USE_V8_DEBUGGER )
list(APPEND CC_EXTERNAL_LIBS
v8_inspector
)
endif()
############################# glslang #############################
set(glslang_libs_name glslang glslang-default-resource-limits MachineIndependent OGLCompiler OSDependent SPIRV SPIRV-Tools-opt SPIRV-Tools GenericCodeGen)
foreach(lib IN LISTS glslang_libs_name)


@ -2,15 +2,20 @@ adamk@chromium.org
cbruni@chromium.org
leszeks@chromium.org
mlippautz@chromium.org
ulan@chromium.org
verwaest@chromium.org
yangguo@chromium.org
per-file *DEPS=file:../COMMON_OWNERS
per-file v8-internal.h=file:../COMMON_OWNERS
per-file v8-inspector.h=file:../src/inspector/OWNERS
per-file v8-inspector-protocol.h=file:../src/inspector/OWNERS
per-file v8-debug.h=file:../src/debug/OWNERS
per-file js_protocol.pdl=file:../src/inspector/OWNERS
per-file v8-inspector*=file:../src/inspector/OWNERS
per-file v8-inspector*=file:../src/inspector/OWNERS
# Needed by the auto_tag builder
per-file v8-version.h=v8-ci-autoroll-builder@chops-service-accounts.iam.gserviceaccount.com
# For branch updates:
per-file v8-version.h=file:../INFRA_OWNERS


@ -2,6 +2,7 @@ include_rules = [
"-include",
"+v8config.h",
"+v8-platform.h",
"+v8-source-location.h",
"+cppgc",
"-src",
"+libplatform/libplatform.h",


@ -1,5 +1,135 @@
# C++ Garbage Collection
# Oilpan: C++ Garbage Collection
This directory provides an open-source garbage collection library for C++.
Oilpan is an open-source garbage collection library for C++ that can be used stand-alone or in collaboration with V8's JavaScript garbage collector.
Oilpan implements mark-and-sweep garbage collection (GC) with limited compaction (for a subset of objects).
The library is under construction, meaning that *all APIs in this directory are incomplete and considered unstable and should not be used*.
**Key properties**
- Trace-based garbage collection;
- Incremental and concurrent marking;
- Incremental and concurrent sweeping;
- Precise on-heap memory layout;
- Conservative on-stack memory layout;
- Allows for collection with and without considering the stack;
- Non-incremental and non-concurrent compaction for selected spaces;
See the [Hello World](https://chromium.googlesource.com/v8/v8/+/main/samples/cppgc/hello-world.cc) example on how to get started using Oilpan to manage C++ code.
Oilpan follows V8's project organization, see e.g. on how we accept [contributions](https://v8.dev/docs/contribute) and [provide a stable API](https://v8.dev/docs/api).
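The linked Hello World sample boils down to a pattern like the following sketch (not part of this commit; the type names `Child`/`Parent` are illustrative, the cppgc API calls are real):

```cpp
#include <cppgc/allocation.h>
#include <cppgc/garbage-collected.h>
#include <cppgc/member.h>
#include <cppgc/visitor.h>

class Child final : public cppgc::GarbageCollected<Child> {
 public:
  void Trace(cppgc::Visitor*) const {}  // No outgoing on-heap edges.
};

class Parent final : public cppgc::GarbageCollected<Parent> {
 public:
  explicit Parent(Child* child) : child_(child) {}
  // Report all on-heap edges to the marker.
  void Trace(cppgc::Visitor* visitor) const { visitor->Trace(child_); }

 private:
  cppgc::Member<Child> child_;
};

void Allocate(cppgc::AllocationHandle& handle) {
  Parent* parent = cppgc::MakeGarbageCollected<Parent>(
      handle, cppgc::MakeGarbageCollected<Child>(handle));
  // `parent` stays alive while reachable; there is no manual delete.
  (void)parent;
}
```

Objects are only ever created through `MakeGarbageCollected()`; the `AllocationHandle` comes from the embedder's `cppgc::Heap`.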
## Threading model
Oilpan features thread-local garbage collection and assumes heaps are not shared among threads.
In other words, objects are accessed and ultimately reclaimed by the garbage collector on the same thread that allocates them.
This allows Oilpan to run garbage collection in parallel with mutators running in other threads.
References to objects belonging to another thread's heap are modeled using cross-thread roots.
This is even true for on-heap to on-heap references.
Oilpan heaps may generally not be accessed from different threads unless otherwise noted.
## Heap partitioning
Oilpan's heaps are partitioned into spaces.
The space for an object is chosen depending on a number of criteria, e.g.:
- Objects over 64KiB are allocated in a large object space
- Objects can be assigned to a dedicated custom space.
Custom spaces can also be marked as compactable.
- Other objects are allocated in one of the normal page spaces bucketed depending on their size.
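The custom-space assignment above works via a `SpaceTrait` specialization, roughly as in this sketch (assumes the public `cppgc/custom-space.h` API; `NodeData` is a hypothetical `GarbageCollected` type):

```cpp
#include <cppgc/custom-space.h>

class CompactableSpace final : public cppgc::CustomSpace<CompactableSpace> {
 public:
  static constexpr cppgc::CustomSpaceIndex kSpaceIndex = 0;
  // Opt this space into Oilpan's compaction pass.
  static constexpr bool kSupportsCompaction = true;
};

class NodeData;  // Some GarbageCollected type (hypothetical).

namespace cppgc {
// Route all NodeData allocations into CompactableSpace.
template <>
struct SpaceTrait<NodeData> {
  using Space = CompactableSpace;
};
}  // namespace cppgc
```

Types without a `SpaceTrait` specialization fall back to the size-bucketed normal spaces or, above the large-object threshold, the large object space.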
## Precise and conservative garbage collection
Oilpan supports two kinds of GCs:
1. **Conservative GC.**
A GC is called conservative when it is executed while the regular native stack is not empty.
In this case, the native stack might contain references to objects in Oilpan's heap, which should be kept alive.
The GC scans the native stack and treats the pointers discovered via the native stack as part of the root set.
This kind of GC is considered imprecise because values on the stack that are not references may accidentally appear as references to on-heap objects, which means these objects will be kept alive despite being, in practice, unreachable from the application.
2. **Precise GC.**
A precise GC is triggered at the end of an event loop, which is controlled by an embedder via a platform.
At this point, it is guaranteed that there are no on-stack references pointing to Oilpan's heap.
This means there is no risk of confusing other value types with references.
Oilpan has precise knowledge of on-heap object layouts, and so it knows exactly where pointers lie in memory.
Oilpan can just start marking from the regular root set and collect all garbage precisely.
## Atomic, incremental and concurrent garbage collection
Oilpan has three modes of operation:
1. **Atomic GC.**
The entire GC cycle, including all its phases (e.g. see [Marking](#Marking-phase) and [Sweeping](#Sweeping-phase)), is executed back to back in a single pause.
This mode of operation is also known as Stop-The-World (STW) garbage collection.
It results in the most jank (due to a single long pause), but is overall the most efficient (e.g. no need for write barriers).
2. **Incremental GC.**
Garbage collection work is split up into multiple steps which are interleaved with the mutator, i.e. user code chunked into tasks.
Each step is a small chunk of work that is executed either as dedicated tasks between mutator tasks or, as needed, during mutator tasks.
Using incremental GC introduces the need for write barriers that record changes to the object graph so that a consistent state is observed and no objects are accidentally considered dead and reclaimed.
The incremental steps are followed by a smaller atomic pause to finalize garbage collection.
The smaller pause times, due to smaller chunks of work, help reduce jank.
3. **Concurrent GC.**
This is the most common type of GC.
It builds on top of incremental GC and offloads much of the garbage collection work away from the mutator thread and on to background threads.
Using concurrent GC allows the mutator thread to spend less time on GC and more on the actual mutator.
## Marking phase
The marking phase consists of the following steps:
1. Mark all objects in the root set.
2. Mark all objects transitively reachable from the root set by calling `Trace()` methods defined on each object.
3. Clear out all weak handles to unreachable objects and run weak callbacks.
The marking phase can be executed atomically in a stop-the-world manner, in which all 3 steps are executed one after the other.
Alternatively, it can also be executed incrementally/concurrently.
With incremental/concurrent marking, step 1 is executed in a short pause after which the mutator regains control.
Step 2 is repeatedly executed in an interleaved manner with the mutator.
When the GC is ready to finalize, i.e. step 2 is (almost) finished, another short pause is triggered in which step 2 is finished and step 3 is performed.
To prevent use-after-free (UAF) issues, Oilpan must know about all edges in the object graph.
This means that all pointers, except on-stack pointers, must be wrapped with Oilpan's handles (i.e., `Persistent<>`, `Member<>`, `WeakMember<>`).
Raw pointers to on-heap objects create edges that Oilpan cannot observe and therefore cause UAF issues.
Thus, raw pointers shall not be used to reference on-heap objects (except for raw pointers on native stacks).
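The handle kinds mentioned above can be sketched as follows (illustrative type; the traced-edge semantics are as described in this section):

```cpp
#include <cppgc/garbage-collected.h>
#include <cppgc/member.h>
#include <cppgc/persistent.h>
#include <cppgc/visitor.h>

class Node : public cppgc::GarbageCollected<Node> {
 public:
  void Trace(cppgc::Visitor* visitor) const {
    visitor->Trace(next_);   // Strong on-heap edge: keeps the target alive.
    visitor->Trace(cache_);  // Weak edge: cleared if the target is otherwise dead.
  }

 private:
  cppgc::Member<Node> next_;        // On-heap to on-heap reference.
  cppgc::WeakMember<Node> cache_;   // Does not keep the target alive.
};

// Off-heap owners (outside the managed heap) use Persistent<> roots instead:
//   cppgc::Persistent<Node> root = cppgc::MakeGarbageCollected<Node>(handle);
```

A raw `Node*` field in place of `Member<Node>` would be an edge invisible to the marker, which is exactly the UAF hazard described above.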
## Sweeping phase
The sweeping phase consists of the following steps:
1. Invoke pre-finalizers.
At this point, no destructors have been invoked and no memory has been reclaimed.
Pre-finalizers are allowed to access any other on-heap objects, even those that may get destructed.
2. Sweeping invokes destructors of the dead (unreachable) objects and reclaims memory to be reused by future allocations.
No assumptions should be made about the order or timing in which destructors are invoked.
That's why destructors must not access any other on-heap objects (which might have already been destructed).
If some destructor unavoidably needs to access other on-heap objects, it will have to be converted to a pre-finalizer.
The pre-finalizer is allowed to access other on-heap objects.
The mutator is resumed before all destructors have run.
For example, imagine a case where X is a client of Y, and Y holds a list of clients.
If the code relies on X's destructor removing X from the list, there is a risk that Y iterates the list and calls some method of X which may touch other on-heap objects.
This causes a use-after-free.
Care must be taken to make sure that X is explicitly removed from the list before the mutator resumes its execution in a way that doesn't rely on X's destructor (e.g. a pre-finalizer).
Similar to marking, sweeping can be executed in either an atomic stop-the-world manner or incrementally/concurrently.
With incremental/concurrent sweeping, step 2 is interleaved with the mutator.
Incremental/concurrent sweeping can be atomically finalized in case it is needed to trigger another GC cycle.
Even with concurrent sweeping, destructors are guaranteed to run on the thread the object has been allocated on to preserve C++ semantics.
Notes:
* Weak processing runs only when the holder object of the WeakMember outlives the pointed-to object.
If the holder object and the pointed-to object die at the same time, weak processing doesn't run.
It is wrong to write code assuming that the weak processing always runs.
* Pre-finalizers are heavy because the thread needs to scan all pre-finalizers at each sweeping phase to determine which of them should be invoked (those belonging to dead objects).
Adding pre-finalizers to frequently created objects should be avoided.
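The client/registry scenario above maps onto the pre-finalizer macro like this sketch (types are hypothetical; `CPPGC_USING_PRE_FINALIZER` is the real cppgc mechanism):

```cpp
#include <cppgc/garbage-collected.h>
#include <cppgc/member.h>
#include <cppgc/prefinalizer.h>
#include <cppgc/visitor.h>

class Registry;  // Hypothetical GarbageCollected type holding a client list.

class Client final : public cppgc::GarbageCollected<Client> {
  CPPGC_USING_PRE_FINALIZER(Client, Dispose);

 public:
  // Runs at the start of sweeping, before any destructor: all on-heap
  // objects, including `registry_`, are still intact and safe to touch.
  void Dispose() { /* registry_->Remove(this); */ }

  void Trace(cppgc::Visitor* visitor) const { visitor->Trace(registry_); }

 private:
  cppgc::Member<Registry> registry_;
};
```

Doing the removal in `~Client()` instead would risk the use-after-free described above, since destructors run in no guaranteed order.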


@ -5,24 +5,38 @@
#ifndef INCLUDE_CPPGC_ALLOCATION_H_
#define INCLUDE_CPPGC_ALLOCATION_H_
#include <stdint.h>
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <new>
#include <type_traits>
#include <utility>
#include "cppgc/custom-space.h"
#include "cppgc/garbage-collected.h"
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/gc-info.h"
#include "cppgc/type-traits.h"
#include "v8config.h" // NOLINT(build/include_directory)
#if defined(__has_attribute)
#if __has_attribute(assume_aligned)
#define CPPGC_DEFAULT_ALIGNED \
__attribute__((assume_aligned(api_constants::kDefaultAlignment)))
#define CPPGC_DOUBLE_WORD_ALIGNED \
__attribute__((assume_aligned(2 * api_constants::kDefaultAlignment)))
#endif // __has_attribute(assume_aligned)
#endif // defined(__has_attribute)
#if !defined(CPPGC_DEFAULT_ALIGNED)
#define CPPGC_DEFAULT_ALIGNED
#endif
#if !defined(CPPGC_DOUBLE_WORD_ALIGNED)
#define CPPGC_DOUBLE_WORD_ALIGNED
#endif
namespace cppgc {
template <typename T>
class MakeGarbageCollectedTraitBase;
namespace internal {
class ObjectAllocator;
} // namespace internal
/**
* AllocationHandle is used to allocate garbage-collected objects.
*/
@ -30,6 +44,9 @@ class AllocationHandle;
namespace internal {
// Similar to C++17 std::align_val_t;
enum class AlignVal : size_t {};
class V8_EXPORT MakeGarbageCollectedTraitInternal {
protected:
static inline void MarkObjectAsFullyConstructed(const void* payload) {
@ -39,36 +56,81 @@ class V8_EXPORT MakeGarbageCollectedTraitInternal {
const_cast<uint16_t*>(reinterpret_cast<const uint16_t*>(
reinterpret_cast<const uint8_t*>(payload) -
api_constants::kFullyConstructedBitFieldOffsetFromPayload)));
atomic_mutable_bitfield->fetch_or(api_constants::kFullyConstructedBitMask,
std::memory_order_release);
// It's safe to split use load+store here (instead of a read-modify-write
// operation), since it's guaranteed that this 16-bit bitfield is only
// modified by a single thread. This is cheaper in terms of code bloat (on
// ARM) and performance.
uint16_t value = atomic_mutable_bitfield->load(std::memory_order_relaxed);
value |= api_constants::kFullyConstructedBitMask;
atomic_mutable_bitfield->store(value, std::memory_order_release);
}
template <typename U, typename CustomSpace>
struct SpacePolicy {
static void* Allocate(AllocationHandle& handle, size_t size) {
// Custom space.
// Dispatch based on compile-time information.
//
// Default implementation is for a custom space with >`kDefaultAlignment` byte
// alignment.
template <typename GCInfoType, typename CustomSpace, size_t alignment>
struct AllocationDispatcher final {
static void* Invoke(AllocationHandle& handle, size_t size) {
static_assert(std::is_base_of<CustomSpaceBase, CustomSpace>::value,
"Custom space must inherit from CustomSpaceBase.");
static_assert(
!CustomSpace::kSupportsCompaction,
"Custom spaces that support compaction do not support allocating "
"objects with non-default (i.e. word-sized) alignment.");
return MakeGarbageCollectedTraitInternal::Allocate(
handle, size, static_cast<AlignVal>(alignment),
internal::GCInfoTrait<GCInfoType>::Index(), CustomSpace::kSpaceIndex);
}
};
// Fast path for regular allocations for the default space with
// `kDefaultAlignment` byte alignment.
template <typename GCInfoType>
struct AllocationDispatcher<GCInfoType, void,
api_constants::kDefaultAlignment>
final {
static void* Invoke(AllocationHandle& handle, size_t size) {
return MakeGarbageCollectedTraitInternal::Allocate(
handle, size, internal::GCInfoTrait<GCInfoType>::Index());
}
};
// Default space with >`kDefaultAlignment` byte alignment.
template <typename GCInfoType, size_t alignment>
struct AllocationDispatcher<GCInfoType, void, alignment> final {
static void* Invoke(AllocationHandle& handle, size_t size) {
return MakeGarbageCollectedTraitInternal::Allocate(
handle, size, static_cast<AlignVal>(alignment),
internal::GCInfoTrait<GCInfoType>::Index());
}
};
// Custom space with `kDefaultAlignment` byte alignment.
template <typename GCInfoType, typename CustomSpace>
struct AllocationDispatcher<GCInfoType, CustomSpace,
api_constants::kDefaultAlignment>
final {
static void* Invoke(AllocationHandle& handle, size_t size) {
static_assert(std::is_base_of<CustomSpaceBase, CustomSpace>::value,
"Custom space must inherit from CustomSpaceBase.");
return MakeGarbageCollectedTraitInternal::Allocate(
handle, size, internal::GCInfoTrait<U>::Index(),
handle, size, internal::GCInfoTrait<GCInfoType>::Index(),
CustomSpace::kSpaceIndex);
}
};
template <typename U>
struct SpacePolicy<U, void> {
static void* Allocate(AllocationHandle& handle, size_t size) {
// Default space.
return MakeGarbageCollectedTraitInternal::Allocate(
handle, size, internal::GCInfoTrait<U>::Index());
}
};
private:
static void* Allocate(cppgc::AllocationHandle& handle, size_t size,
GCInfoIndex index);
static void* Allocate(cppgc::AllocationHandle& handle, size_t size,
GCInfoIndex index, CustomSpaceIndex space_index);
static void* CPPGC_DEFAULT_ALIGNED Allocate(cppgc::AllocationHandle&, size_t,
GCInfoIndex);
static void* CPPGC_DOUBLE_WORD_ALIGNED Allocate(cppgc::AllocationHandle&,
size_t, AlignVal,
GCInfoIndex);
static void* CPPGC_DEFAULT_ALIGNED Allocate(cppgc::AllocationHandle&, size_t,
GCInfoIndex, CustomSpaceIndex);
static void* CPPGC_DOUBLE_WORD_ALIGNED Allocate(cppgc::AllocationHandle&,
size_t, AlignVal, GCInfoIndex,
CustomSpaceIndex);
friend class HeapObjectHeader;
};
@ -103,10 +165,22 @@ class MakeGarbageCollectedTraitBase
* \returns the memory to construct an object of type T on.
*/
V8_INLINE static void* Allocate(AllocationHandle& handle, size_t size) {
return SpacePolicy<
static_assert(
std::is_base_of<typename T::ParentMostGarbageCollectedType, T>::value,
"U of GarbageCollected<U> must be a base of T. Check "
"GarbageCollected<T> base class inheritance.");
static constexpr size_t kWantedAlignment =
alignof(T) < internal::api_constants::kDefaultAlignment
? internal::api_constants::kDefaultAlignment
: alignof(T);
static_assert(
kWantedAlignment <= internal::api_constants::kMaxSupportedAlignment,
"Requested alignment larger than alignof(std::max_align_t) bytes. "
"Please file a bug to possibly get this restriction lifted.");
return AllocationDispatcher<
typename internal::GCInfoFolding<
T, typename T::ParentMostGarbageCollectedType>::ResultType,
typename SpaceTrait<T>::Space>::Allocate(handle, size);
typename SpaceTrait<T>::Space, kWantedAlignment>::Invoke(handle, size);
}
/**
@ -201,7 +275,7 @@ struct PostConstructionCallbackTrait {
* \returns an instance of type T.
*/
template <typename T, typename... Args>
T* MakeGarbageCollected(AllocationHandle& handle, Args&&... args) {
V8_INLINE T* MakeGarbageCollected(AllocationHandle& handle, Args&&... args) {
T* object =
MakeGarbageCollectedTrait<T>::Call(handle, std::forward<Args>(args)...);
PostConstructionCallbackTrait<T>::Call(object);
@ -219,8 +293,9 @@ T* MakeGarbageCollected(AllocationHandle& handle, Args&&... args) {
* \returns an instance of type T.
*/
template <typename T, typename... Args>
T* MakeGarbageCollected(AllocationHandle& handle,
AdditionalBytes additional_bytes, Args&&... args) {
V8_INLINE T* MakeGarbageCollected(AllocationHandle& handle,
AdditionalBytes additional_bytes,
Args&&... args) {
T* object = MakeGarbageCollectedTrait<T>::Call(handle, additional_bytes,
std::forward<Args>(args)...);
PostConstructionCallbackTrait<T>::Call(object);
@ -229,4 +304,7 @@ T* MakeGarbageCollected(AllocationHandle& handle,
} // namespace cppgc
#undef CPPGC_DEFAULT_ALIGNED
#undef CPPGC_DOUBLE_WORD_ALIGNED
#endif // INCLUDE_CPPGC_ALLOCATION_H_


@ -5,7 +5,6 @@
#ifndef INCLUDE_CPPGC_COMMON_H_
#define INCLUDE_CPPGC_COMMON_H_
// TODO(chromium:1056170): Remove dependency on v8.
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {


@ -13,12 +13,62 @@
#include "cppgc/visitor.h"
namespace cppgc {
namespace internal {
// Wrapper around PersistentBase that allows accessing poisoned memory when
// using ASAN. This is needed as the GC of the heap that owns the value
// of a CTP, may clear it (heap termination, weakness) while the object
// holding the CTP may be poisoned as itself may be deemed dead.
class CrossThreadPersistentBase : public PersistentBase {
public:
CrossThreadPersistentBase() = default;
explicit CrossThreadPersistentBase(const void* raw) : PersistentBase(raw) {}
V8_CLANG_NO_SANITIZE("address") const void* GetValueFromGC() const {
return raw_;
}
V8_CLANG_NO_SANITIZE("address")
PersistentNode* GetNodeFromGC() const { return node_; }
V8_CLANG_NO_SANITIZE("address")
void ClearFromGC() const {
raw_ = nullptr;
SetNodeSafe(nullptr);
}
// GetNodeSafe() can be used for a thread-safe IsValid() check in a
// double-checked locking pattern. See ~BasicCrossThreadPersistent.
PersistentNode* GetNodeSafe() const {
return reinterpret_cast<std::atomic<PersistentNode*>*>(&node_)->load(
std::memory_order_acquire);
}
// The GC writes using SetNodeSafe() while holding the lock.
V8_CLANG_NO_SANITIZE("address")
void SetNodeSafe(PersistentNode* value) const {
#if defined(__has_feature)
#if __has_feature(address_sanitizer)
#define V8_IS_ASAN 1
#endif
#endif
#ifdef V8_IS_ASAN
__atomic_store(&node_, &value, __ATOMIC_RELEASE);
#else // !V8_IS_ASAN
// Non-ASAN builds can use atomics. This also covers MSVC which does not
// have the __atomic_store intrinsic.
reinterpret_cast<std::atomic<PersistentNode*>*>(&node_)->store(
value, std::memory_order_release);
#endif // !V8_IS_ASAN
#undef V8_IS_ASAN
}
};
template <typename T, typename WeaknessPolicy, typename LocationPolicy,
typename CheckingPolicy>
class BasicCrossThreadPersistent final : public PersistentBase,
class BasicCrossThreadPersistent final : public CrossThreadPersistentBase,
public LocationPolicy,
private WeaknessPolicy,
private CheckingPolicy {
@ -26,27 +76,51 @@ class BasicCrossThreadPersistent final : public PersistentBase,
using typename WeaknessPolicy::IsStrongPersistent;
using PointeeType = T;
~BasicCrossThreadPersistent() { Clear(); }
~BasicCrossThreadPersistent() {
// This implements fast path for destroying empty/sentinel.
//
// Simplified version of `AssignUnsafe()` to allow calling without a
// complete type `T`. Uses double-checked locking with a simple thread-safe
// check for a valid handle based on a node.
if (GetNodeSafe()) {
PersistentRegionLock guard;
const void* old_value = GetValue();
// The fast path check (GetNodeSafe()) does not acquire the lock. Recheck
// validity while holding the lock to ensure the reference has not been
// cleared.
if (IsValid(old_value)) {
CrossThreadPersistentRegion& region =
this->GetPersistentRegion(old_value);
region.FreeNode(GetNode());
SetNode(nullptr);
} else {
CPPGC_DCHECK(!GetNode());
}
}
// No need to call SetValue() as the handle is not used anymore. This can
// leave behind stale sentinel values but will always destroy the underlying
// node.
}
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
const SourceLocation& loc = SourceLocation::Current())
: LocationPolicy(loc) {}
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
std::nullptr_t, const SourceLocation& loc = SourceLocation::Current())
: LocationPolicy(loc) {}
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
SentinelPointer s, const SourceLocation& loc = SourceLocation::Current())
: PersistentBase(s), LocationPolicy(loc) {}
: CrossThreadPersistentBase(s), LocationPolicy(loc) {}
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
T* raw, const SourceLocation& loc = SourceLocation::Current())
: PersistentBase(raw), LocationPolicy(loc) {
: CrossThreadPersistentBase(raw), LocationPolicy(loc) {
if (!IsValid(raw)) return;
PersistentRegionLock guard;
CrossThreadPersistentRegion& region = this->GetPersistentRegion(raw);
SetNode(region.AllocateNode(this, &Trace));
SetNode(region.AllocateNode(this, &TraceAsRoot));
this->CheckPointer(raw);
}
@ -58,26 +132,27 @@ class BasicCrossThreadPersistent final : public PersistentBase,
friend class BasicCrossThreadPersistent;
};
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
UnsafeCtorTag, T* raw,
const SourceLocation& loc = SourceLocation::Current())
: PersistentBase(raw), LocationPolicy(loc) {
: CrossThreadPersistentBase(raw), LocationPolicy(loc) {
if (!IsValid(raw)) return;
CrossThreadPersistentRegion& region = this->GetPersistentRegion(raw);
SetNode(region.AllocateNode(this, &Trace));
SetNode(region.AllocateNode(this, &TraceAsRoot));
this->CheckPointer(raw);
}
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
T& raw, const SourceLocation& loc = SourceLocation::Current())
: BasicCrossThreadPersistent(&raw, loc) {}
template <typename U, typename MemberBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename MemberStorageType,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
internal::BasicMember<U, MemberBarrierPolicy, MemberWeaknessTag,
MemberCheckingPolicy>
MemberCheckingPolicy, MemberStorageType>
member,
const SourceLocation& loc = SourceLocation::Current())
: BasicCrossThreadPersistent(member.Get(), loc) {}
@ -94,7 +169,7 @@ class BasicCrossThreadPersistent final : public PersistentBase,
template <typename U, typename OtherWeaknessPolicy,
typename OtherLocationPolicy, typename OtherCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicCrossThreadPersistent( // NOLINT
BasicCrossThreadPersistent(
const BasicCrossThreadPersistent<U, OtherWeaknessPolicy,
OtherLocationPolicy,
OtherCheckingPolicy>& other,
@ -113,7 +188,7 @@ class BasicCrossThreadPersistent final : public PersistentBase,
BasicCrossThreadPersistent& operator=(
const BasicCrossThreadPersistent& other) {
PersistentRegionLock guard;
AssignUnsafe(other.Get());
AssignSafe(guard, other.Get());
return *this;
}
@ -125,7 +200,7 @@ class BasicCrossThreadPersistent final : public PersistentBase,
OtherLocationPolicy,
OtherCheckingPolicy>& other) {
PersistentRegionLock guard;
AssignUnsafe(other.Get());
AssignSafe(guard, other.Get());
return *this;
}
@ -139,33 +214,50 @@ class BasicCrossThreadPersistent final : public PersistentBase,
GetNode()->UpdateOwner(this);
other.SetValue(nullptr);
other.SetNode(nullptr);
this->CheckPointer(GetValue());
this->CheckPointer(Get());
return *this;
}
/**
* Assigns a raw pointer.
*
* Note: **Not thread-safe.**
*/
BasicCrossThreadPersistent& operator=(T* other) {
Assign(other);
AssignUnsafe(other);
return *this;
}
// Assignment from member.
template <typename U, typename MemberBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename MemberStorageType,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicCrossThreadPersistent& operator=(
internal::BasicMember<U, MemberBarrierPolicy, MemberWeaknessTag,
MemberCheckingPolicy>
MemberCheckingPolicy, MemberStorageType>
member) {
return operator=(member.Get());
}
/**
* Assigns a nullptr.
*
* \returns the handle.
*/
BasicCrossThreadPersistent& operator=(std::nullptr_t) {
Clear();
return *this;
}
/**
* Assigns the sentinel pointer.
*
* \returns the handle.
*/
BasicCrossThreadPersistent& operator=(SentinelPointer s) {
Assign(s);
PersistentRegionLock guard;
AssignSafe(guard, s);
return *this;
}
@ -187,24 +279,8 @@ class BasicCrossThreadPersistent final : public PersistentBase,
* Clears the stored object.
*/
void Clear() {
// Simplified version of `Assign()` to allow calling without a complete type
// `T`.
const void* old_value = GetValue();
if (IsValid(old_value)) {
PersistentRegionLock guard;
old_value = GetValue();
// The fast path check (IsValid()) does not acquire the lock. Reload
// the value to ensure the reference has not been cleared.
if (IsValid(old_value)) {
CrossThreadPersistentRegion& region =
this->GetPersistentRegion(old_value);
region.FreeNode(GetNode());
SetNode(nullptr);
} else {
CPPGC_DCHECK(!GetNode());
}
}
SetValue(nullptr);
PersistentRegionLock guard;
AssignSafe(guard, nullptr);
}
/**
@ -236,7 +312,7 @@ class BasicCrossThreadPersistent final : public PersistentBase,
*
* \returns the object.
*/
operator T*() const { return Get(); } // NOLINT
operator T*() const { return Get(); }
/**
* Dereferences the stored object.
@ -275,12 +351,11 @@ class BasicCrossThreadPersistent final : public PersistentBase,
return ptr && ptr != kSentinelPointer;
}
static void Trace(Visitor* v, const void* ptr) {
const auto* handle = static_cast<const BasicCrossThreadPersistent*>(ptr);
v->TraceRoot(*handle, handle->Location());
static void TraceAsRoot(RootVisitor& root_visitor, const void* ptr) {
root_visitor.Trace(*static_cast<const BasicCrossThreadPersistent*>(ptr));
}
void Assign(T* ptr) {
void AssignUnsafe(T* ptr) {
const void* old_value = GetValue();
if (IsValid(old_value)) {
PersistentRegionLock guard;
@ -304,11 +379,11 @@ class BasicCrossThreadPersistent final : public PersistentBase,
SetValue(ptr);
if (!IsValid(ptr)) return;
PersistentRegionLock guard;
SetNode(this->GetPersistentRegion(ptr).AllocateNode(this, &Trace));
SetNode(this->GetPersistentRegion(ptr).AllocateNode(this, &TraceAsRoot));
this->CheckPointer(ptr);
}
void AssignUnsafe(T* ptr) {
void AssignSafe(PersistentRegionLock&, T* ptr) {
PersistentRegionLock::AssertLocked();
const void* old_value = GetValue();
if (IsValid(old_value)) {
@ -324,18 +399,25 @@ class BasicCrossThreadPersistent final : public PersistentBase,
}
SetValue(ptr);
if (!IsValid(ptr)) return;
SetNode(this->GetPersistentRegion(ptr).AllocateNode(this, &Trace));
SetNode(this->GetPersistentRegion(ptr).AllocateNode(this, &TraceAsRoot));
this->CheckPointer(ptr);
}
void ClearFromGC() const {
if (IsValid(GetValue())) {
WeaknessPolicy::GetPersistentRegion(GetValue()).FreeNode(GetNode());
PersistentBase::ClearFromGC();
if (IsValid(GetValueFromGC())) {
WeaknessPolicy::GetPersistentRegion(GetValueFromGC())
.FreeNode(GetNodeFromGC());
CrossThreadPersistentBase::ClearFromGC();
}
}
friend class cppgc::Visitor;
// See Get() for details.
V8_CLANG_NO_SANITIZE("cfi-unrelated-cast")
T* GetFromGC() const {
return static_cast<T*>(const_cast<void*>(GetValueFromGC()));
}
friend class internal::RootVisitor;
};
template <typename T, typename LocationPolicy, typename CheckingPolicy>


@ -6,7 +6,6 @@
#define INCLUDE_CPPGC_DEFAULT_PLATFORM_H_
#include <memory>
#include <vector>
#include "cppgc/platform.h"
#include "libplatform/libplatform.h"
@ -20,15 +19,6 @@ namespace cppgc {
*/
class V8_EXPORT DefaultPlatform : public Platform {
public:
/**
* Use this method instead of 'cppgc::InitializeProcess' when using
* 'cppgc::DefaultPlatform'. 'cppgc::DefaultPlatform::InitializeProcess'
* will initialize cppgc and v8 if needed (for non-standalone builds).
*
* \param platform DefaultPlatform instance used to initialize cppgc/v8.
*/
static void InitializeProcess(DefaultPlatform* platform);
using IdleTaskSupport = v8::platform::IdleTaskSupport;
explicit DefaultPlatform(
int thread_pool_size = 0,
@ -64,6 +54,8 @@ class V8_EXPORT DefaultPlatform : public Platform {
return v8_platform_->GetTracingController();
}
v8::Platform* GetV8Platform() const { return v8_platform_.get(); }
protected:
static constexpr v8::Isolate* kNoIsolate = nullptr;


@ -12,11 +12,30 @@
#include "cppgc/type-traits.h"
namespace cppgc {
class HeapHandle;
namespace subtle {
template <typename T>
void FreeUnreferencedObject(HeapHandle& heap_handle, T& object);
template <typename T>
bool Resize(T& object, AdditionalBytes additional_bytes);
} // namespace subtle
namespace internal {
V8_EXPORT void FreeUnreferencedObject(void*);
V8_EXPORT bool Resize(void*, size_t);
class ExplicitManagementImpl final {
private:
V8_EXPORT static void FreeUnreferencedObject(HeapHandle&, void*);
V8_EXPORT static bool Resize(void*, size_t);
template <typename T>
friend void subtle::FreeUnreferencedObject(HeapHandle&, T&);
template <typename T>
friend bool subtle::Resize(T&, AdditionalBytes);
};
} // namespace internal
namespace subtle {
@ -30,15 +49,20 @@ namespace subtle {
* to `object` after calling `FreeUnreferencedObject()`. In case such a
* reference exists, it's use results in a use-after-free.
*
* To aid in using the API, `FreeUnreferencedObject()` may be called from
* destructors on objects that would be reclaimed in the same garbage collection
* cycle.
*
* \param heap_handle The corresponding heap.
* \param object Reference to an object that is of type `GarbageCollected` and
* should be immediately reclaimed.
*/
template <typename T>
void FreeUnreferencedObject(T* object) {
void FreeUnreferencedObject(HeapHandle& heap_handle, T& object) {
static_assert(IsGarbageCollectedTypeV<T>,
"Object must be of type GarbageCollected.");
if (!object) return;
internal::FreeUnreferencedObject(object);
internal::ExplicitManagementImpl::FreeUnreferencedObject(heap_handle,
&object);
}
/**
@ -53,6 +77,8 @@ void FreeUnreferencedObject(T* object) {
* object down, the reclaimed area is not used anymore. Any subsequent use
* results in a use-after-free.
*
* The `object` must be live when calling `Resize()`.
*
* \param object Reference to an object that is of type `GarbageCollected` and
* should be resized.
* \param additional_bytes Bytes in addition to sizeof(T) that the object should
@ -64,7 +90,8 @@ template <typename T>
bool Resize(T& object, AdditionalBytes additional_bytes) {
static_assert(IsGarbageCollectedTypeV<T>,
"Object must be of type GarbageCollected.");
return internal::Resize(&object, sizeof(T) + additional_bytes.value);
return internal::ExplicitManagementImpl::Resize(
&object, sizeof(T) + additional_bytes.value);
}
} // namespace subtle
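After this change, callers use the `cppgc::subtle` API roughly as in this sketch (`Buffer` is a hypothetical type with trailing inline storage; note `FreeUnreferencedObject()` now takes the `HeapHandle` explicitly):

```cpp
#include <cppgc/allocation.h>
#include <cppgc/explicit-management.h>
#include <cppgc/garbage-collected.h>
#include <cppgc/visitor.h>

// Hypothetical GarbageCollected type allocated with AdditionalBytes.
class Buffer final : public cppgc::GarbageCollected<Buffer> {
 public:
  void Trace(cppgc::Visitor*) const {}
};

void ReleaseEagerly(cppgc::HeapHandle& heap, Buffer& buffer) {
  // Caller guarantees no other references to `buffer` exist.
  cppgc::subtle::FreeUnreferencedObject(heap, buffer);
}

bool Shrink(Buffer& buffer) {
  // Attempt to give back trailing inline storage; `buffer` must be live.
  return cppgc::subtle::Resize(buffer, cppgc::AdditionalBytes(0));
}
```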


@ -5,8 +5,6 @@
#ifndef INCLUDE_CPPGC_GARBAGE_COLLECTED_H_
#define INCLUDE_CPPGC_GARBAGE_COLLECTED_H_
#include <type_traits>
#include "cppgc/internal/api-constants.h"
#include "cppgc/platform.h"
#include "cppgc/trace-trait.h"
@ -16,28 +14,6 @@ namespace cppgc {
class Visitor;
namespace internal {
class GarbageCollectedBase {
public:
// Must use MakeGarbageCollected.
void* operator new(size_t) = delete;
void* operator new[](size_t) = delete;
// The garbage collector is taking care of reclaiming the object. Also,
// virtual destructor requires an unambiguous, accessible 'operator delete'.
void operator delete(void*) {
#ifdef V8_ENABLE_CHECKS
internal::Abort();
#endif // V8_ENABLE_CHECKS
}
void operator delete[](void*) = delete;
protected:
GarbageCollectedBase() = default;
};
} // namespace internal
/**
* Base class for managed objects. Only descendent types of `GarbageCollected`
* can be constructed using `MakeGarbageCollected()`. Must be inherited from as
@ -74,11 +50,24 @@ class GarbageCollectedBase {
* \endcode
*/
template <typename T>
class GarbageCollected : public internal::GarbageCollectedBase {
class GarbageCollected {
public:
using IsGarbageCollectedTypeMarker = void;
using ParentMostGarbageCollectedType = T;
// Must use MakeGarbageCollected.
void* operator new(size_t) = delete;
void* operator new[](size_t) = delete;
// The garbage collector is taking care of reclaiming the object. Also,
// virtual destructor requires an unambiguous, accessible 'operator delete'.
void operator delete(void*) {
#ifdef V8_ENABLE_CHECKS
internal::Fatal(
"Manually deleting a garbage collected object is not allowed");
#endif // V8_ENABLE_CHECKS
}
void operator delete[](void*) = delete;
protected:
GarbageCollected() = default;
};
@ -101,7 +90,7 @@ class GarbageCollected : public internal::GarbageCollectedBase {
* };
* \endcode
*/
class GarbageCollectedMixin : public internal::GarbageCollectedBase {
class GarbageCollectedMixin {
public:
using IsGarbageCollectedMixinTypeMarker = void;

View File

@ -9,6 +9,7 @@
#include "cppgc/internal/write-barrier.h"
#include "cppgc/macros.h"
#include "cppgc/member.h"
#include "cppgc/trace-trait.h"
#include "v8config.h" // NOLINT(build/include_directory)
@ -47,6 +48,29 @@ class HeapConsistency final {
return internal::WriteBarrier::GetWriteBarrierType(slot, value, params);
}
/**
* Gets the required write barrier type for a specific write. This overload is
* used for all `BasicMember` types.
*
* \param slot Slot containing the pointer to the object. The slot itself
* must reside in an object that has been allocated using
* `MakeGarbageCollected()`.
* \param value The pointer to the object held via `BasicMember`.
* \param params Parameters that may be used for actual write barrier calls.
* Only filled if return value indicates that a write barrier is needed. The
* contents of the `params` are an implementation detail.
* \returns whether a write barrier is needed and which barrier to invoke.
*/
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
static V8_INLINE WriteBarrierType GetWriteBarrierType(
const internal::BasicMember<T, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& value,
WriteBarrierParams& params) {
return internal::WriteBarrier::GetWriteBarrierType(
value.GetRawSlot(), value.GetRawStorage(), params);
}
/**
* Gets the required write barrier type for a specific write.
*
@ -68,6 +92,23 @@ class HeapConsistency final {
return internal::WriteBarrier::GetWriteBarrierType(slot, params, callback);
}
/**
* Gets the required write barrier type for a specific write.
* This version is meant to be used in conjunction with a marking write
* barrier that doesn't consider the slot.
*
* \param value The pointer to the object. May be an interior pointer to an
* interface of the actual object.
* \param params Parameters that may be used for actual write barrier calls.
* Only filled if return value indicates that a write barrier is needed. The
* contents of the `params` are an implementation detail.
* \returns whether a write barrier is needed and which barrier to invoke.
*/
static V8_INLINE WriteBarrierType
GetWriteBarrierType(const void* value, WriteBarrierParams& params) {
return internal::WriteBarrier::GetWriteBarrierType(value, params);
}
/**
* Conservative Dijkstra-style write barrier that processes an object if it
* has not yet been processed.
@ -129,7 +170,39 @@ class HeapConsistency final {
*/
static V8_INLINE void GenerationalBarrier(const WriteBarrierParams& params,
const void* slot) {
internal::WriteBarrier::GenerationalBarrier(params, slot);
internal::WriteBarrier::GenerationalBarrier<
internal::WriteBarrier::GenerationalBarrierType::kPreciseSlot>(params,
slot);
}
/**
* Generational barrier for maintaining consistency when running with multiple
* generations. This version is used when the slot contains an uncompressed
* pointer.
*
* \param params The parameters retrieved from `GetWriteBarrierType()`.
* \param slot Uncompressed slot containing the direct pointer to the object.
* The slot itself must reside in an object that has been allocated using
* `MakeGarbageCollected()`.
*/
static V8_INLINE void GenerationalBarrierForUncompressedSlot(
const WriteBarrierParams& params, const void* uncompressed_slot) {
internal::WriteBarrier::GenerationalBarrier<
internal::WriteBarrier::GenerationalBarrierType::
kPreciseUncompressedSlot>(params, uncompressed_slot);
}
/**
* Generational barrier for a source object that may contain outgoing pointers
* to objects in the young generation.
*
* \param params The parameters retrieved from `GetWriteBarrierType()`.
* \param inner_pointer Pointer to the source object.
*/
static V8_INLINE void GenerationalBarrierForSourceObject(
const WriteBarrierParams& params, const void* inner_pointer) {
internal::WriteBarrier::GenerationalBarrier<
internal::WriteBarrier::GenerationalBarrierType::kImpreciseSlot>(
params, inner_pointer);
}
private:

View File

@ -0,0 +1,48 @@
// Copyright 2022 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_CPPGC_HEAP_HANDLE_H_
#define INCLUDE_CPPGC_HEAP_HANDLE_H_
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {
namespace internal {
class HeapBase;
class WriteBarrierTypeForCagedHeapPolicy;
class WriteBarrierTypeForNonCagedHeapPolicy;
} // namespace internal
/**
* Opaque handle used for additional heap APIs.
*/
class HeapHandle {
public:
// Deleted copy ctor to avoid passing the handle around by value.
HeapHandle(const HeapHandle&) = delete;
HeapHandle& operator=(const HeapHandle&) = delete;
private:
HeapHandle() = default;
V8_INLINE bool is_incremental_marking_in_progress() const {
return is_incremental_marking_in_progress_;
}
V8_INLINE bool is_young_generation_enabled() const {
return is_young_generation_enabled_;
}
bool is_incremental_marking_in_progress_ = false;
bool is_young_generation_enabled_ = false;
friend class internal::HeapBase;
friend class internal::WriteBarrierTypeForCagedHeapPolicy;
friend class internal::WriteBarrierTypeForNonCagedHeapPolicy;
};
} // namespace cppgc
#endif // INCLUDE_CPPGC_HEAP_HANDLE_H_

View File

@ -38,6 +38,18 @@ class V8_EXPORT HeapState final {
*/
static bool IsSweeping(const HeapHandle& heap_handle);
/**
* Returns whether the garbage collector is currently sweeping on the thread
* owning this heap. This API allows the caller to determine whether it has
* been called from a destructor of a managed object. This API is experimental
* and may be removed in the future.
*
* \param heap_handle The corresponding heap.
* \returns true if the garbage collector is currently sweeping on this
* thread, and false otherwise.
*/
static bool IsSweepingOnOwningThread(const HeapHandle& heap_handle);
/**
* Returns whether the garbage collector is in the atomic pause, i.e., the
* mutator is stopped from running. This API is experimental and is expected

View File

@ -5,7 +5,8 @@
#ifndef INCLUDE_CPPGC_HEAP_STATISTICS_H_
#define INCLUDE_CPPGC_HEAP_STATISTICS_H_
#include <memory>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>
@ -30,19 +31,17 @@ struct HeapStatistics final {
};
/**
* Statistics of object types. For each type the statistics record its name,
* how many objects of that type were allocated, and the overall size used by
* these objects.
* Object statistics for a single type.
*/
struct ObjectStatistics {
/** Number of distinct types in the heap. */
size_t num_types = 0;
/** Name of each type in the heap. */
std::vector<std::string> type_name;
/** Number of allocated objects per each type. */
std::vector<size_t> type_count;
/** Overall size of allocated objects per each type. */
std::vector<size_t> type_bytes;
struct ObjectStatsEntry {
/**
* Number of allocated bytes.
*/
size_t allocated_bytes;
/**
* Number of allocated objects.
*/
size_t object_count;
};
/**
@ -50,14 +49,19 @@ struct HeapStatistics final {
* allocated memory size and overall used memory size for the page.
*/
struct PageStatistics {
/** Overall amount of memory allocated for the page. */
size_t physical_size_bytes = 0;
/** Overall committed amount of memory for the page. */
size_t committed_size_bytes = 0;
/** Resident amount of memory held by the page. */
size_t resident_size_bytes = 0;
/** Amount of memory actually used on the page. */
size_t used_size_bytes = 0;
/** Statistics for objects allocated on the page. Filled only when
* NameProvider::SupportsCppClassNamesAsObjectNames() is true. */
std::vector<ObjectStatsEntry> object_statistics;
};
/**
* Stastistics of the freelist (used only in non-large object spaces). For
* Statistics of the freelist (used only in non-large object spaces). For
* each bucket in the freelist the statistics record the bucket size, the
* number of freelist entries in the bucket, and the overall allocated memory
* consumed by these freelist entries.
@ -67,7 +71,7 @@ struct HeapStatistics final {
std::vector<size_t> bucket_size;
/** number of freelist entries per bucket. */
std::vector<size_t> free_count;
/** memory size concumed by freelist entries per size. */
/** memory size consumed by freelist entries per size. */
std::vector<size_t> free_size;
};
@ -80,29 +84,35 @@ struct HeapStatistics final {
struct SpaceStatistics {
/** The space name */
std::string name;
/** Overall amount of memory allocated for the space. */
size_t physical_size_bytes = 0;
/** Overall committed amount of memory for the space. */
size_t committed_size_bytes = 0;
/** Resident amount of memory held by the space. */
size_t resident_size_bytes = 0;
/** Amount of memory actually used on the space. */
size_t used_size_bytes = 0;
/** Statistics for each of the pages in the space. */
std::vector<PageStatistics> page_stats;
/** Statistics for the freelist of the space. */
FreeListStatistics free_list_stats;
/** Statistics for object allocated on the space. Filled only when
* NameProvider::HideInternalNames() is false. */
ObjectStatistics object_stats;
};
/** Overall amount of memory allocated for the heap. */
size_t physical_size_bytes = 0;
/** Overall committed amount of memory for the heap. */
size_t committed_size_bytes = 0;
/** Resident amount of memory held by the heap. */
size_t resident_size_bytes = 0;
/** Amount of memory actually used on the heap. */
size_t used_size_bytes = 0;
/** Detail level of this HeapStatistics. */
DetailLevel detail_level;
/** Statistics for each of the spaces in the heap. Filled only when
* detail_level is kDetailed. */
* `detail_level` is `DetailLevel::kDetailed`. */
std::vector<SpaceStatistics> space_stats;
/**
* Vector of `cppgc::GarbageCollected` type names.
*/
std::vector<std::string> type_names;
};
} // namespace cppgc

View File

@ -5,6 +5,8 @@
#ifndef INCLUDE_CPPGC_HEAP_H_
#define INCLUDE_CPPGC_HEAP_H_
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>
@ -19,6 +21,7 @@
namespace cppgc {
class AllocationHandle;
class HeapHandle;
/**
* Implementation details of cppgc. Those details are considered internal and
@ -29,11 +32,6 @@ namespace internal {
class Heap;
} // namespace internal
/**
* Used for additional heap APIs.
*/
class HeapHandle;
class V8_EXPORT Heap {
public:
/**
@ -57,7 +55,7 @@ class V8_EXPORT Heap {
};
/**
* Specifies supported marking types
* Specifies supported marking types.
*/
enum class MarkingType : uint8_t {
/**
@ -66,8 +64,8 @@ class V8_EXPORT Heap {
*/
kAtomic,
/**
* Incremental marking, i.e. interleave marking is the rest of the
* application on the same thread.
* Incremental marking interleaves marking with the rest of the application
* workload on the same thread.
*/
kIncremental,
/**
@ -77,13 +75,18 @@ class V8_EXPORT Heap {
};
/**
* Specifies supported sweeping types
* Specifies supported sweeping types.
*/
enum class SweepingType : uint8_t {
/**
* Atomic stop-the-world sweeping. All of sweeping is performed at once.
*/
kAtomic,
/**
* Incremental sweeping interleaves sweeping with the rest of the
* application workload on the same thread.
*/
kIncremental,
/**
* Incremental and concurrent sweeping. Sweeping is split and interleaved
* with the rest of the application.

View File

@ -5,8 +5,8 @@
#ifndef INCLUDE_CPPGC_INTERNAL_API_CONSTANTS_H_
#define INCLUDE_CPPGC_INTERNAL_API_CONSTANTS_H_
#include <stddef.h>
#include <stdint.h>
#include <cstddef>
#include <cstdint>
#include "v8config.h" // NOLINT(build/include_directory)
@ -32,12 +32,52 @@ static constexpr uint16_t kFullyConstructedBitMask = uint16_t{1};
static constexpr size_t kPageSize = size_t{1} << 17;
#if defined(V8_TARGET_ARCH_ARM64) && defined(V8_OS_DARWIN)
constexpr size_t kGuardPageSize = 0;
#else
constexpr size_t kGuardPageSize = 4096;
#endif
static constexpr size_t kLargeObjectSizeThreshold = kPageSize / 2;
#if defined(CPPGC_POINTER_COMPRESSION)
#if defined(CPPGC_ENABLE_LARGER_CAGE)
constexpr unsigned kPointerCompressionShift = 3;
#else // !defined(CPPGC_ENABLE_LARGER_CAGE)
constexpr unsigned kPointerCompressionShift = 1;
#endif // !defined(CPPGC_ENABLE_LARGER_CAGE)
#endif // defined(CPPGC_POINTER_COMPRESSION)
#if defined(CPPGC_CAGED_HEAP)
constexpr size_t kCagedHeapReservationSize = static_cast<size_t>(4) * kGB;
constexpr size_t kCagedHeapReservationAlignment = kCagedHeapReservationSize;
#endif
#if defined(CPPGC_2GB_CAGE)
constexpr size_t kCagedHeapDefaultReservationSize =
static_cast<size_t>(2) * kGB;
constexpr size_t kCagedHeapMaxReservationSize =
kCagedHeapDefaultReservationSize;
#else // !defined(CPPGC_2GB_CAGE)
constexpr size_t kCagedHeapDefaultReservationSize =
static_cast<size_t>(4) * kGB;
#if defined(CPPGC_POINTER_COMPRESSION)
constexpr size_t kCagedHeapMaxReservationSize =
size_t{1} << (31 + kPointerCompressionShift);
#else // !defined(CPPGC_POINTER_COMPRESSION)
constexpr size_t kCagedHeapMaxReservationSize =
kCagedHeapDefaultReservationSize;
#endif // !defined(CPPGC_POINTER_COMPRESSION)
#endif // !defined(CPPGC_2GB_CAGE)
constexpr size_t kCagedHeapReservationAlignment = kCagedHeapMaxReservationSize;
#endif // defined(CPPGC_CAGED_HEAP)
static constexpr size_t kDefaultAlignment = sizeof(void*);
// Maximum supported alignment for a type as in `alignof(T)`.
static constexpr size_t kMaxSupportedAlignment = 2 * kDefaultAlignment;
// Granularity of heap allocations.
constexpr size_t kAllocationGranularity = sizeof(void*);
// Default cacheline size.
constexpr size_t kCachelineSize = 64;
} // namespace api_constants

View File

@ -0,0 +1,45 @@
// Copyright 2022 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_CPPGC_INTERNAL_BASE_PAGE_HANDLE_H_
#define INCLUDE_CPPGC_INTERNAL_BASE_PAGE_HANDLE_H_
#include "cppgc/heap-handle.h"
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/logging.h"
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {
namespace internal {
// The class is needed in the header to allow for fast access to HeapHandle in
// the write barrier.
class BasePageHandle {
public:
static V8_INLINE BasePageHandle* FromPayload(void* payload) {
return reinterpret_cast<BasePageHandle*>(
(reinterpret_cast<uintptr_t>(payload) &
~(api_constants::kPageSize - 1)) +
api_constants::kGuardPageSize);
}
static V8_INLINE const BasePageHandle* FromPayload(const void* payload) {
return FromPayload(const_cast<void*>(payload));
}
HeapHandle& heap_handle() { return heap_handle_; }
const HeapHandle& heap_handle() const { return heap_handle_; }
protected:
explicit BasePageHandle(HeapHandle& heap_handle) : heap_handle_(heap_handle) {
CPPGC_DCHECK(reinterpret_cast<uintptr_t>(this) % api_constants::kPageSize ==
api_constants::kGuardPageSize);
}
HeapHandle& heap_handle_;
};
} // namespace internal
} // namespace cppgc
#endif // INCLUDE_CPPGC_INTERNAL_BASE_PAGE_HANDLE_H_

View File

@ -6,57 +6,108 @@
#define INCLUDE_CPPGC_INTERNAL_CAGED_HEAP_LOCAL_DATA_H_
#include <array>
#include <cstddef>
#include <cstdint>
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/caged-heap.h"
#include "cppgc/internal/logging.h"
#include "cppgc/platform.h"
#include "v8config.h" // NOLINT(build/include_directory)
#if __cpp_lib_bitopts
#include <bit>
#endif // __cpp_lib_bitopts
#if defined(CPPGC_CAGED_HEAP)
namespace cppgc {
namespace internal {
class HeapBase;
class HeapBaseHandle;
#if defined(CPPGC_YOUNG_GENERATION)
// AgeTable contains entries that correspond to 4KB memory regions. Each entry
// can be in one of three states: kOld, kYoung or kUnknown.
class AgeTable final {
static constexpr size_t kGranularityBits = 12; // 4KiB per byte.
// AgeTable is the bytemap needed for the fast generation check in the write
// barrier. AgeTable contains entries that correspond to 4096-byte memory
// regions (cards). Each entry in the table represents the generation of the
// objects that reside on the corresponding card (young, old or mixed).
class V8_EXPORT AgeTable final {
static constexpr size_t kRequiredSize = 1 * api_constants::kMB;
static constexpr size_t kAllocationGranularity =
api_constants::kAllocationGranularity;
public:
enum class Age : uint8_t { kOld, kYoung, kUnknown };
// Represents age of the objects living on a single card.
enum class Age : uint8_t { kOld, kYoung, kMixed };
// When setting age for a range, consider or ignore ages of the adjacent
// cards.
enum class AdjacentCardsPolicy : uint8_t { kConsider, kIgnore };
static constexpr size_t kEntrySizeInBytes = 1 << kGranularityBits;
static constexpr size_t kCardSizeInBytes =
api_constants::kCagedHeapDefaultReservationSize / kRequiredSize;
Age& operator[](uintptr_t offset) { return table_[entry(offset)]; }
Age operator[](uintptr_t offset) const { return table_[entry(offset)]; }
static constexpr size_t CalculateAgeTableSizeForHeapSize(size_t heap_size) {
return heap_size / kCardSizeInBytes;
}
void Reset(PageAllocator* allocator);
void SetAge(uintptr_t cage_offset, Age age) {
table_[card(cage_offset)] = age;
}
V8_INLINE Age GetAge(uintptr_t cage_offset) const {
return table_[card(cage_offset)];
}
void SetAgeForRange(uintptr_t cage_offset_begin, uintptr_t cage_offset_end,
Age age, AdjacentCardsPolicy adjacent_cards_policy);
Age GetAgeForRange(uintptr_t cage_offset_begin,
uintptr_t cage_offset_end) const;
void ResetForTesting();
private:
static constexpr size_t kAgeTableSize =
api_constants::kCagedHeapReservationSize >> kGranularityBits;
size_t entry(uintptr_t offset) const {
V8_INLINE size_t card(uintptr_t offset) const {
constexpr size_t kGranularityBits =
#if __cpp_lib_bitopts
std::countr_zero(static_cast<uint32_t>(kCardSizeInBytes));
#elif V8_HAS_BUILTIN_CTZ
__builtin_ctz(static_cast<uint32_t>(kCardSizeInBytes));
#else //! V8_HAS_BUILTIN_CTZ
// Hardcode and check with assert.
#if defined(CPPGC_2GB_CAGE)
11;
#else // !defined(CPPGC_2GB_CAGE)
12;
#endif // !defined(CPPGC_2GB_CAGE)
#endif // !V8_HAS_BUILTIN_CTZ
static_assert((1 << kGranularityBits) == kCardSizeInBytes);
const size_t entry = offset >> kGranularityBits;
CPPGC_DCHECK(table_.size() > entry);
CPPGC_DCHECK(CagedHeapBase::GetAgeTableSize() > entry);
return entry;
}
std::array<Age, kAgeTableSize> table_;
#if defined(V8_CC_GNU)
// gcc disallows flexible arrays in otherwise empty classes.
Age table_[0];
#else // !defined(V8_CC_GNU)
Age table_[];
#endif // !defined(V8_CC_GNU)
};
static_assert(sizeof(AgeTable) == 1 * api_constants::kMB,
"Size of AgeTable is 1MB");
#endif // CPPGC_YOUNG_GENERATION
struct CagedHeapLocalData final {
explicit CagedHeapLocalData(HeapBase* heap_base) : heap_base(heap_base) {}
V8_INLINE static CagedHeapLocalData& Get() {
return *reinterpret_cast<CagedHeapLocalData*>(CagedHeapBase::GetBase());
}
static constexpr size_t CalculateLocalDataSizeForHeapSize(size_t heap_size) {
return AgeTable::CalculateAgeTableSizeForHeapSize(heap_size);
}
bool is_incremental_marking_in_progress = false;
HeapBase* heap_base = nullptr;
#if defined(CPPGC_YOUNG_GENERATION)
AgeTable age_table;
#endif
@ -65,4 +116,6 @@ struct CagedHeapLocalData final {
} // namespace internal
} // namespace cppgc
#endif // defined(CPPGC_CAGED_HEAP)
#endif // INCLUDE_CPPGC_INTERNAL_CAGED_HEAP_LOCAL_DATA_H_

View File

@ -0,0 +1,68 @@
// Copyright 2022 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_CPPGC_INTERNAL_CAGED_HEAP_H_
#define INCLUDE_CPPGC_INTERNAL_CAGED_HEAP_H_
#include <climits>
#include <cstddef>
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/base-page-handle.h"
#include "v8config.h" // NOLINT(build/include_directory)
#if defined(CPPGC_CAGED_HEAP)
namespace cppgc {
namespace internal {
class V8_EXPORT CagedHeapBase {
public:
V8_INLINE static uintptr_t OffsetFromAddress(const void* address) {
return reinterpret_cast<uintptr_t>(address) &
(api_constants::kCagedHeapReservationAlignment - 1);
}
V8_INLINE static bool IsWithinCage(const void* address) {
CPPGC_DCHECK(g_heap_base_);
return (reinterpret_cast<uintptr_t>(address) &
~(api_constants::kCagedHeapReservationAlignment - 1)) ==
g_heap_base_;
}
V8_INLINE static bool AreWithinCage(const void* addr1, const void* addr2) {
#if defined(CPPGC_2GB_CAGE)
static constexpr size_t kHeapBaseShift = sizeof(uint32_t) * CHAR_BIT - 1;
#else //! defined(CPPGC_2GB_CAGE)
#if defined(CPPGC_POINTER_COMPRESSION)
static constexpr size_t kHeapBaseShift =
31 + api_constants::kPointerCompressionShift;
#else // !defined(CPPGC_POINTER_COMPRESSION)
static constexpr size_t kHeapBaseShift = sizeof(uint32_t) * CHAR_BIT;
#endif // !defined(CPPGC_POINTER_COMPRESSION)
#endif //! defined(CPPGC_2GB_CAGE)
static_assert((static_cast<size_t>(1) << kHeapBaseShift) ==
api_constants::kCagedHeapMaxReservationSize);
CPPGC_DCHECK(g_heap_base_);
return !(((reinterpret_cast<uintptr_t>(addr1) ^ g_heap_base_) |
(reinterpret_cast<uintptr_t>(addr2) ^ g_heap_base_)) >>
kHeapBaseShift);
}
V8_INLINE static uintptr_t GetBase() { return g_heap_base_; }
V8_INLINE static size_t GetAgeTableSize() { return g_age_table_size_; }
private:
friend class CagedHeap;
static uintptr_t g_heap_base_;
static size_t g_age_table_size_;
};
} // namespace internal
} // namespace cppgc
#endif // defined(CPPGC_CAGED_HEAP)
#endif // INCLUDE_CPPGC_INTERNAL_CAGED_HEAP_H_

View File

@ -21,13 +21,13 @@ namespace cppgc {
// [[no_unique_address]] comes in C++20 but supported in clang with -std >=
// c++11.
#if CPPGC_HAS_CPP_ATTRIBUTE(no_unique_address) // NOLINTNEXTLINE
#if CPPGC_HAS_CPP_ATTRIBUTE(no_unique_address)
#define CPPGC_NO_UNIQUE_ADDRESS [[no_unique_address]]
#else
#define CPPGC_NO_UNIQUE_ADDRESS
#endif
#if CPPGC_HAS_ATTRIBUTE(unused) // NOLINTNEXTLINE
#if CPPGC_HAS_ATTRIBUTE(unused)
#define CPPGC_UNUSED __attribute__((unused))
#else
#define CPPGC_UNUSED

View File

@ -19,7 +19,8 @@ struct HasFinalizeGarbageCollectedObject : std::false_type {};
template <typename T>
struct HasFinalizeGarbageCollectedObject<
T, void_t<decltype(std::declval<T>().FinalizeGarbageCollectedObject())>>
T,
std::void_t<decltype(std::declval<T>().FinalizeGarbageCollectedObject())>>
: std::true_type {};
// The FinalizerTraitImpl specifies how to finalize objects.
@ -76,6 +77,8 @@ struct FinalizerTrait {
}
public:
static constexpr bool HasFinalizer() { return kNonTrivialFinalizer; }
// The callback used to finalize an object of type T.
static constexpr FinalizationCallback kCallback =
kNonTrivialFinalizer ? Finalize : nullptr;

View File

@ -7,8 +7,10 @@
#include <atomic>
#include <cstdint>
#include <type_traits>
#include "cppgc/internal/finalizer-trait.h"
#include "cppgc/internal/logging.h"
#include "cppgc/internal/name-trait.h"
#include "cppgc/trace-trait.h"
#include "v8config.h" // NOLINT(build/include_directory)
@ -18,17 +20,94 @@ namespace internal {
using GCInfoIndex = uint16_t;
// Acquires a new GC info object and returns the index. In addition, also
// updates `registered_index` atomically.
V8_EXPORT GCInfoIndex
EnsureGCInfoIndex(std::atomic<GCInfoIndex>& registered_index,
FinalizationCallback, TraceCallback, NameCallback, bool);
struct V8_EXPORT EnsureGCInfoIndexTrait final {
// Acquires a new GC info object and updates `registered_index` with the index
// that identifies that new info accordingly.
template <typename T>
V8_INLINE static GCInfoIndex EnsureIndex(
std::atomic<GCInfoIndex>& registered_index) {
return EnsureGCInfoIndexTraitDispatch<T>{}(registered_index);
}
private:
template <typename T, bool = FinalizerTrait<T>::HasFinalizer(),
bool = NameTrait<T>::HasNonHiddenName()>
struct EnsureGCInfoIndexTraitDispatch;
static GCInfoIndex V8_PRESERVE_MOST
EnsureGCInfoIndex(std::atomic<GCInfoIndex>&, TraceCallback,
FinalizationCallback, NameCallback);
static GCInfoIndex V8_PRESERVE_MOST EnsureGCInfoIndex(
std::atomic<GCInfoIndex>&, TraceCallback, FinalizationCallback);
static GCInfoIndex V8_PRESERVE_MOST
EnsureGCInfoIndex(std::atomic<GCInfoIndex>&, TraceCallback, NameCallback);
static GCInfoIndex V8_PRESERVE_MOST
EnsureGCInfoIndex(std::atomic<GCInfoIndex>&, TraceCallback);
};
#define DISPATCH(has_finalizer, has_non_hidden_name, function) \
template <typename T> \
struct EnsureGCInfoIndexTrait::EnsureGCInfoIndexTraitDispatch< \
T, has_finalizer, has_non_hidden_name> { \
V8_INLINE GCInfoIndex \
operator()(std::atomic<GCInfoIndex>& registered_index) { \
return function; \
} \
};
// ------------------------------------------------------- //
// DISPATCH(has_finalizer, has_non_hidden_name, function) //
// ------------------------------------------------------- //
DISPATCH(true, true, //
EnsureGCInfoIndex(registered_index, //
TraceTrait<T>::Trace, //
FinalizerTrait<T>::kCallback, //
NameTrait<T>::GetName)) //
DISPATCH(true, false, //
EnsureGCInfoIndex(registered_index, //
TraceTrait<T>::Trace, //
FinalizerTrait<T>::kCallback)) //
DISPATCH(false, true, //
EnsureGCInfoIndex(registered_index, //
TraceTrait<T>::Trace, //
NameTrait<T>::GetName)) //
DISPATCH(false, false, //
EnsureGCInfoIndex(registered_index, //
TraceTrait<T>::Trace)) //
#undef DISPATCH
// Trait determines how the garbage collector treats objects with respect to
// traversing, finalization, and naming.
template <typename T>
struct GCInfoTrait final {
V8_INLINE static GCInfoIndex Index() {
static_assert(sizeof(T), "T must be fully defined");
static std::atomic<GCInfoIndex>
registered_index; // Uses zero initialization.
GCInfoIndex index = registered_index.load(std::memory_order_acquire);
if (V8_UNLIKELY(!index)) {
index = EnsureGCInfoIndexTrait::EnsureIndex<T>(registered_index);
CPPGC_DCHECK(index != 0);
CPPGC_DCHECK(index == registered_index.load(std::memory_order_acquire));
}
return index;
}
static constexpr bool CheckCallbacksAreDefined() {
// No USE() macro available.
(void)static_cast<TraceCallback>(TraceTrait<T>::Trace);
(void)static_cast<FinalizationCallback>(FinalizerTrait<T>::kCallback);
(void)static_cast<NameCallback>(NameTrait<T>::GetName);
return true;
}
};
// Fold types based on finalizer behavior. Note that finalizer characteristics
// align with trace behavior, i.e., destructors are virtual when trace methods
// are and vice versa.
template <typename T, typename ParentMostGarbageCollectedType>
struct GCInfoFolding {
struct GCInfoFolding final {
static constexpr bool kHasVirtualDestructorAtBase =
std::has_virtual_destructor<ParentMostGarbageCollectedType>::value;
static constexpr bool kBothTypesAreTriviallyDestructible =
@ -43,30 +122,24 @@ struct GCInfoFolding {
static constexpr bool kWantsDetailedObjectNames = false;
#endif // !CPPGC_SUPPORTS_OBJECT_NAMES
// Folding would regresses name resolution when deriving names from C++
// class names as it would just folds a name to the base class name.
using ResultType = std::conditional_t<(kHasVirtualDestructorAtBase ||
kBothTypesAreTriviallyDestructible ||
kHasCustomFinalizerDispatchAtBase) &&
!kWantsDetailedObjectNames,
ParentMostGarbageCollectedType, T>;
};
// Always true. Forces the compiler to resolve callbacks which ensures that
// both modes don't break without requiring compiling a separate
// configuration. Only a single GCInfo (for `ResultType` below) will actually
// be instantiated but existence (and well-formedness) of all callbacks is
// checked.
static constexpr bool kCheckTypeGuardAlwaysTrue =
GCInfoTrait<T>::CheckCallbacksAreDefined() &&
GCInfoTrait<ParentMostGarbageCollectedType>::CheckCallbacksAreDefined();
// Trait determines how the garbage collector treats objects wrt. to traversing,
// finalization, and naming.
template <typename T>
struct GCInfoTrait final {
static GCInfoIndex Index() {
static_assert(sizeof(T), "T must be fully defined");
static std::atomic<GCInfoIndex>
registered_index; // Uses zero initialization.
const GCInfoIndex index = registered_index.load(std::memory_order_acquire);
return index ? index
: EnsureGCInfoIndex(
registered_index, FinalizerTrait<T>::kCallback,
TraceTrait<T>::Trace, NameTrait<T>::GetName,
std::is_polymorphic<T>::value);
}
// Folding would regress name resolution when deriving names from C++
// class names as it would just fold a name to the base class name.
using ResultType =
std::conditional_t<kCheckTypeGuardAlwaysTrue &&
(kHasVirtualDestructorAtBase ||
kBothTypesAreTriviallyDestructible ||
kHasCustomFinalizerDispatchAtBase) &&
!kWantsDetailedObjectNames,
ParentMostGarbageCollectedType, T>;
};
} // namespace internal

View File

@ -20,18 +20,18 @@ FatalImpl(const char*, const SourceLocation& = SourceLocation::Current());
template <typename>
struct EatParams {};
#if DEBUG
#if defined(DEBUG)
#define CPPGC_DCHECK_MSG(condition, message) \
do { \
if (V8_UNLIKELY(!(condition))) { \
::cppgc::internal::DCheckImpl(message); \
} \
} while (false)
#else
#else // !defined(DEBUG)
#define CPPGC_DCHECK_MSG(condition, message) \
(static_cast<void>(::cppgc::internal::EatParams<decltype( \
static_cast<void>(condition), message)>{}))
#endif
#endif // !defined(DEBUG)
#define CPPGC_DCHECK(condition) CPPGC_DCHECK_MSG(condition, #condition)

View File

@ -0,0 +1,256 @@
// Copyright 2022 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_CPPGC_INTERNAL_MEMBER_STORAGE_H_
#define INCLUDE_CPPGC_INTERNAL_MEMBER_STORAGE_H_
#include <atomic>
#include <cstddef>
#include <type_traits>
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/logging.h"
#include "cppgc/sentinel-pointer.h"
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {
namespace internal {
enum class WriteBarrierSlotType {
kCompressed,
kUncompressed,
};
#if defined(CPPGC_POINTER_COMPRESSION)
#if defined(__clang__)
// Attribute const allows the compiler to assume that CageBaseGlobal::g_base_
// doesn't change (e.g. across calls) and thereby avoid redundant loads.
#define CPPGC_CONST __attribute__((const))
#define CPPGC_REQUIRE_CONSTANT_INIT \
__attribute__((require_constant_initialization))
#else // defined(__clang__)
#define CPPGC_CONST
#define CPPGC_REQUIRE_CONSTANT_INIT
#endif // defined(__clang__)
class V8_EXPORT CageBaseGlobal final {
public:
V8_INLINE CPPGC_CONST static uintptr_t Get() {
CPPGC_DCHECK(IsBaseConsistent());
return g_base_.base;
}
V8_INLINE CPPGC_CONST static bool IsSet() {
CPPGC_DCHECK(IsBaseConsistent());
return (g_base_.base & ~kLowerHalfWordMask) != 0;
}
private:
// We keep the lower halfword as ones to speed up decompression.
static constexpr uintptr_t kLowerHalfWordMask =
(api_constants::kCagedHeapReservationAlignment - 1);
static union alignas(api_constants::kCachelineSize) Base {
uintptr_t base;
char cache_line[api_constants::kCachelineSize];
} g_base_ CPPGC_REQUIRE_CONSTANT_INIT;
CageBaseGlobal() = delete;
V8_INLINE static bool IsBaseConsistent() {
return kLowerHalfWordMask == (g_base_.base & kLowerHalfWordMask);
}
friend class CageBaseGlobalUpdater;
};
#undef CPPGC_REQUIRE_CONSTANT_INIT
#undef CPPGC_CONST
class V8_TRIVIAL_ABI CompressedPointer final {
public:
using IntegralType = uint32_t;
static constexpr auto kWriteBarrierSlotType =
WriteBarrierSlotType::kCompressed;
V8_INLINE CompressedPointer() : value_(0u) {}
V8_INLINE explicit CompressedPointer(const void* ptr)
: value_(Compress(ptr)) {}
V8_INLINE explicit CompressedPointer(std::nullptr_t) : value_(0u) {}
V8_INLINE explicit CompressedPointer(SentinelPointer)
: value_(kCompressedSentinel) {}
V8_INLINE const void* Load() const { return Decompress(value_); }
V8_INLINE const void* LoadAtomic() const {
return Decompress(
reinterpret_cast<const std::atomic<IntegralType>&>(value_).load(
std::memory_order_relaxed));
}
V8_INLINE void Store(const void* ptr) { value_ = Compress(ptr); }
V8_INLINE void StoreAtomic(const void* value) {
reinterpret_cast<std::atomic<IntegralType>&>(value_).store(
Compress(value), std::memory_order_relaxed);
}
V8_INLINE void Clear() { value_ = 0u; }
V8_INLINE bool IsCleared() const { return !value_; }
V8_INLINE bool IsSentinel() const { return value_ == kCompressedSentinel; }
V8_INLINE uint32_t GetAsInteger() const { return value_; }
V8_INLINE friend bool operator==(CompressedPointer a, CompressedPointer b) {
return a.value_ == b.value_;
}
V8_INLINE friend bool operator!=(CompressedPointer a, CompressedPointer b) {
return a.value_ != b.value_;
}
V8_INLINE friend bool operator<(CompressedPointer a, CompressedPointer b) {
return a.value_ < b.value_;
}
V8_INLINE friend bool operator<=(CompressedPointer a, CompressedPointer b) {
return a.value_ <= b.value_;
}
V8_INLINE friend bool operator>(CompressedPointer a, CompressedPointer b) {
return a.value_ > b.value_;
}
V8_INLINE friend bool operator>=(CompressedPointer a, CompressedPointer b) {
return a.value_ >= b.value_;
}
static V8_INLINE IntegralType Compress(const void* ptr) {
static_assert(SentinelPointer::kSentinelValue ==
1 << api_constants::kPointerCompressionShift,
"The compression scheme relies on the sentinel encoded as 1 "
"<< kPointerCompressionShift");
static constexpr size_t kGigaCageMask =
~(api_constants::kCagedHeapReservationAlignment - 1);
static constexpr size_t kPointerCompressionShiftMask =
(1 << api_constants::kPointerCompressionShift) - 1;
CPPGC_DCHECK(CageBaseGlobal::IsSet());
const uintptr_t base = CageBaseGlobal::Get();
CPPGC_DCHECK(!ptr || ptr == kSentinelPointer ||
(base & kGigaCageMask) ==
(reinterpret_cast<uintptr_t>(ptr) & kGigaCageMask));
CPPGC_DCHECK(
(reinterpret_cast<uintptr_t>(ptr) & kPointerCompressionShiftMask) == 0);
#if defined(CPPGC_2GB_CAGE)
// Truncate the pointer.
auto compressed =
static_cast<IntegralType>(reinterpret_cast<uintptr_t>(ptr));
#else // !defined(CPPGC_2GB_CAGE)
const auto uptr = reinterpret_cast<uintptr_t>(ptr);
// Shift the pointer and truncate.
auto compressed = static_cast<IntegralType>(
uptr >> api_constants::kPointerCompressionShift);
#endif // !defined(CPPGC_2GB_CAGE)
// Normal compressed pointers must have the MSB set.
CPPGC_DCHECK((!compressed || compressed == kCompressedSentinel) ||
(compressed & (1 << 31)));
return compressed;
}
static V8_INLINE void* Decompress(IntegralType ptr) {
CPPGC_DCHECK(CageBaseGlobal::IsSet());
const uintptr_t base = CageBaseGlobal::Get();
// Treat compressed pointer as signed and cast it to uint64_t, which will
// sign-extend it.
#if defined(CPPGC_2GB_CAGE)
const uint64_t mask = static_cast<uint64_t>(static_cast<int32_t>(ptr));
#else // !defined(CPPGC_2GB_CAGE)
// Then, shift the result. It's important to shift the unsigned
// value, as otherwise it would result in undefined behavior.
const uint64_t mask = static_cast<uint64_t>(static_cast<int32_t>(ptr))
<< api_constants::kPointerCompressionShift;
#endif // !defined(CPPGC_2GB_CAGE)
return reinterpret_cast<void*>(mask & base);
}
private:
#if defined(CPPGC_2GB_CAGE)
static constexpr IntegralType kCompressedSentinel =
SentinelPointer::kSentinelValue;
#else // !defined(CPPGC_2GB_CAGE)
static constexpr IntegralType kCompressedSentinel =
SentinelPointer::kSentinelValue >>
api_constants::kPointerCompressionShift;
#endif // !defined(CPPGC_2GB_CAGE)
// All constructors initialize `value_`. Do not add a default value here as it
// results in a non-atomic write on some builds, even when the atomic version
// of the constructor is used.
IntegralType value_;
};
#endif // defined(CPPGC_POINTER_COMPRESSION)
class V8_TRIVIAL_ABI RawPointer final {
public:
using IntegralType = uintptr_t;
static constexpr auto kWriteBarrierSlotType =
WriteBarrierSlotType::kUncompressed;
V8_INLINE RawPointer() : ptr_(nullptr) {}
V8_INLINE explicit RawPointer(const void* ptr) : ptr_(ptr) {}
V8_INLINE const void* Load() const { return ptr_; }
V8_INLINE const void* LoadAtomic() const {
return reinterpret_cast<const std::atomic<const void*>&>(ptr_).load(
std::memory_order_relaxed);
}
V8_INLINE void Store(const void* ptr) { ptr_ = ptr; }
V8_INLINE void StoreAtomic(const void* ptr) {
reinterpret_cast<std::atomic<const void*>&>(ptr_).store(
ptr, std::memory_order_relaxed);
}
V8_INLINE void Clear() { ptr_ = nullptr; }
V8_INLINE bool IsCleared() const { return !ptr_; }
V8_INLINE bool IsSentinel() const { return ptr_ == kSentinelPointer; }
V8_INLINE uintptr_t GetAsInteger() const {
return reinterpret_cast<uintptr_t>(ptr_);
}
V8_INLINE friend bool operator==(RawPointer a, RawPointer b) {
return a.ptr_ == b.ptr_;
}
V8_INLINE friend bool operator!=(RawPointer a, RawPointer b) {
return a.ptr_ != b.ptr_;
}
V8_INLINE friend bool operator<(RawPointer a, RawPointer b) {
return a.ptr_ < b.ptr_;
}
V8_INLINE friend bool operator<=(RawPointer a, RawPointer b) {
return a.ptr_ <= b.ptr_;
}
V8_INLINE friend bool operator>(RawPointer a, RawPointer b) {
return a.ptr_ > b.ptr_;
}
V8_INLINE friend bool operator>=(RawPointer a, RawPointer b) {
return a.ptr_ >= b.ptr_;
}
private:
// All constructors initialize `ptr_`. Do not add a default value here as it
// results in a non-atomic write on some builds, even when the atomic version
// of the constructor is used.
const void* ptr_;
};
#if defined(CPPGC_POINTER_COMPRESSION)
using DefaultMemberStorage = CompressedPointer;
#else // !defined(CPPGC_POINTER_COMPRESSION)
using DefaultMemberStorage = RawPointer;
#endif // !defined(CPPGC_POINTER_COMPRESSION)
} // namespace internal
} // namespace cppgc
#endif // INCLUDE_CPPGC_INTERNAL_MEMBER_STORAGE_H_

View File

@ -6,6 +6,8 @@
#define INCLUDE_CPPGC_INTERNAL_NAME_TRAIT_H_
#include <cstddef>
#include <cstdint>
#include <type_traits>
#include "cppgc/name-provider.h"
#include "v8config.h" // NOLINT(build/include_directory)
@ -57,6 +59,11 @@ struct HeapObjectName {
bool name_was_hidden;
};
enum class HeapObjectNameForUnnamedObject : uint8_t {
kUseClassNameIfSupported,
kUseHiddenName,
};
class V8_EXPORT NameTraitBase {
protected:
static HeapObjectName GetNameFromTypeSignature(const char*);
@ -67,16 +74,34 @@ class V8_EXPORT NameTraitBase {
template <typename T>
class NameTrait final : public NameTraitBase {
public:
static HeapObjectName GetName(const void* obj) {
return GetNameFor(static_cast<const T*>(obj));
static constexpr bool HasNonHiddenName() {
#if CPPGC_SUPPORTS_COMPILE_TIME_TYPENAME
return true;
#elif CPPGC_SUPPORTS_OBJECT_NAMES
return true;
#else // !CPPGC_SUPPORTS_OBJECT_NAMES
return std::is_base_of<NameProvider, T>::value;
#endif // !CPPGC_SUPPORTS_OBJECT_NAMES
}
static HeapObjectName GetName(
const void* obj, HeapObjectNameForUnnamedObject name_retrieval_mode) {
return GetNameFor(static_cast<const T*>(obj), name_retrieval_mode);
}
private:
static HeapObjectName GetNameFor(const NameProvider* name_provider) {
return {name_provider->GetName(), false};
static HeapObjectName GetNameFor(const NameProvider* name_provider,
HeapObjectNameForUnnamedObject) {
// Objects inheriting from `NameProvider` are not considered unnamed as
// users already provided a name for them.
return {name_provider->GetHumanReadableName(), false};
}
static HeapObjectName GetNameFor(...) {
static HeapObjectName GetNameFor(
const void*, HeapObjectNameForUnnamedObject name_retrieval_mode) {
if (name_retrieval_mode == HeapObjectNameForUnnamedObject::kUseHiddenName)
return {NameProvider::kHiddenName, true};
#if CPPGC_SUPPORTS_COMPILE_TIME_TYPENAME
return {GetTypename<T>(), false};
#elif CPPGC_SUPPORTS_OBJECT_NAMES
@ -101,7 +126,8 @@ class NameTrait final : public NameTraitBase {
}
};
using NameCallback = HeapObjectName (*)(const void*);
using NameCallback = HeapObjectName (*)(const void*,
HeapObjectNameForUnnamedObject);
} // namespace internal
} // namespace cppgc

View File

@ -14,12 +14,11 @@
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {
class Visitor;
namespace internal {
class CrossThreadPersistentRegion;
class FatalOutOfMemoryHandler;
class RootVisitor;
// PersistentNode represents a variant of two states:
// 1) traceable node with a back pointer to the Persistent object;
@ -31,7 +30,7 @@ class PersistentNode final {
PersistentNode(const PersistentNode&) = delete;
PersistentNode& operator=(const PersistentNode&) = delete;
void InitializeAsUsedNode(void* owner, TraceCallback trace) {
void InitializeAsUsedNode(void* owner, TraceRootCallback trace) {
CPPGC_DCHECK(trace);
owner_ = owner;
trace_ = trace;
@ -52,9 +51,9 @@ class PersistentNode final {
return next_;
}
void Trace(Visitor* visitor) const {
void Trace(RootVisitor& root_visitor) const {
CPPGC_DCHECK(IsUsed());
trace_(visitor, owner_);
trace_(root_visitor, owner_);
}
bool IsUsed() const { return trace_; }
@ -72,29 +71,38 @@ class PersistentNode final {
void* owner_ = nullptr;
PersistentNode* next_;
};
TraceCallback trace_ = nullptr;
TraceRootCallback trace_ = nullptr;
};
class V8_EXPORT PersistentRegion final {
class V8_EXPORT PersistentRegionBase {
using PersistentNodeSlots = std::array<PersistentNode, 256u>;
public:
PersistentRegion() = default;
// Clears Persistent fields to avoid stale pointers after heap teardown.
~PersistentRegion();
~PersistentRegionBase();
PersistentRegion(const PersistentRegion&) = delete;
PersistentRegion& operator=(const PersistentRegion&) = delete;
PersistentRegionBase(const PersistentRegionBase&) = delete;
PersistentRegionBase& operator=(const PersistentRegionBase&) = delete;
PersistentNode* AllocateNode(void* owner, TraceCallback trace) {
if (!free_list_head_) {
EnsureNodeSlots();
void Iterate(RootVisitor&);
size_t NodesInUse() const;
void ClearAllUsedNodes();
protected:
explicit PersistentRegionBase(const FatalOutOfMemoryHandler& oom_handler);
PersistentNode* TryAllocateNodeFromFreeList(void* owner,
TraceRootCallback trace) {
PersistentNode* node = nullptr;
if (V8_LIKELY(free_list_head_)) {
node = free_list_head_;
free_list_head_ = free_list_head_->FreeListNext();
CPPGC_DCHECK(!node->IsUsed());
node->InitializeAsUsedNode(owner, trace);
nodes_in_use_++;
}
PersistentNode* node = free_list_head_;
free_list_head_ = free_list_head_->FreeListNext();
CPPGC_DCHECK(!node->IsUsed());
node->InitializeAsUsedNode(owner, trace);
nodes_in_use_++;
return node;
}
@ -107,24 +115,57 @@ class V8_EXPORT PersistentRegion final {
nodes_in_use_--;
}
void Trace(Visitor*);
size_t NodesInUse() const;
void ClearAllUsedNodes();
PersistentNode* RefillFreeListAndAllocateNode(void* owner,
TraceRootCallback trace);
private:
void EnsureNodeSlots();
template <typename PersistentBaseClass>
void ClearAllUsedNodes();
void RefillFreeList();
std::vector<std::unique_ptr<PersistentNodeSlots>> nodes_;
PersistentNode* free_list_head_ = nullptr;
size_t nodes_in_use_ = 0;
const FatalOutOfMemoryHandler& oom_handler_;
friend class CrossThreadPersistentRegion;
};
// CrossThreadPersistent uses PersistentRegion but protects it using this lock
// when needed.
// Variant of PersistentRegionBase that checks whether the allocation and
// freeing happens only on the thread that created the region.
class V8_EXPORT PersistentRegion final : public PersistentRegionBase {
public:
explicit PersistentRegion(const FatalOutOfMemoryHandler&);
// Clears Persistent fields to avoid stale pointers after heap teardown.
~PersistentRegion() = default;
PersistentRegion(const PersistentRegion&) = delete;
PersistentRegion& operator=(const PersistentRegion&) = delete;
V8_INLINE PersistentNode* AllocateNode(void* owner, TraceRootCallback trace) {
CPPGC_DCHECK(IsCreationThread());
auto* node = TryAllocateNodeFromFreeList(owner, trace);
if (V8_LIKELY(node)) return node;
// Slow path allocation allows for checking thread correspondence.
CPPGC_CHECK(IsCreationThread());
return RefillFreeListAndAllocateNode(owner, trace);
}
V8_INLINE void FreeNode(PersistentNode* node) {
CPPGC_DCHECK(IsCreationThread());
PersistentRegionBase::FreeNode(node);
}
private:
bool IsCreationThread();
int creation_thread_id_;
};
// CrossThreadPersistent uses PersistentRegionBase but protects it using this
// lock when needed.
class V8_EXPORT PersistentRegionLock final {
public:
PersistentRegionLock();
@ -133,11 +174,12 @@ class V8_EXPORT PersistentRegionLock final {
static void AssertLocked();
};
// Variant of PersistentRegion that checks whether the PersistentRegionLock is
// locked.
class V8_EXPORT CrossThreadPersistentRegion final {
// Variant of PersistentRegionBase that checks whether the PersistentRegionLock
// is locked.
class V8_EXPORT CrossThreadPersistentRegion final
: protected PersistentRegionBase {
public:
CrossThreadPersistentRegion() = default;
explicit CrossThreadPersistentRegion(const FatalOutOfMemoryHandler&);
// Clears Persistent fields to avoid stale pointers after heap teardown.
~CrossThreadPersistentRegion();
@ -145,24 +187,24 @@ class V8_EXPORT CrossThreadPersistentRegion final {
CrossThreadPersistentRegion& operator=(const CrossThreadPersistentRegion&) =
delete;
V8_INLINE PersistentNode* AllocateNode(void* owner, TraceCallback trace) {
V8_INLINE PersistentNode* AllocateNode(void* owner, TraceRootCallback trace) {
PersistentRegionLock::AssertLocked();
return persistent_region_.AllocateNode(owner, trace);
auto* node = TryAllocateNodeFromFreeList(owner, trace);
if (V8_LIKELY(node)) return node;
return RefillFreeListAndAllocateNode(owner, trace);
}
V8_INLINE void FreeNode(PersistentNode* node) {
PersistentRegionLock::AssertLocked();
persistent_region_.FreeNode(node);
PersistentRegionBase::FreeNode(node);
}
void Trace(Visitor*);
void Iterate(RootVisitor&);
size_t NodesInUse() const;
void ClearAllUsedNodes();
private:
PersistentRegion persistent_region_;
};
} // namespace internal

View File

@ -8,13 +8,17 @@
#include <cstdint>
#include <type_traits>
#include "cppgc/internal/member-storage.h"
#include "cppgc/internal/write-barrier.h"
#include "cppgc/sentinel-pointer.h"
#include "cppgc/source-location.h"
#include "cppgc/type-traits.h"
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {
namespace internal {
class HeapBase;
class PersistentRegion;
class CrossThreadPersistentRegion;
@ -24,15 +28,67 @@ class WeakMemberTag;
class UntracedMemberTag;
struct DijkstraWriteBarrierPolicy {
static void InitializingBarrier(const void*, const void*) {
V8_INLINE static void InitializingBarrier(const void*, const void*) {
// Since in initializing writes the source object is always white, having no
// barrier doesn't break the tri-color invariant.
}
static void AssigningBarrier(const void* slot, const void* value) {
template <WriteBarrierSlotType SlotType>
V8_INLINE static void AssigningBarrier(const void* slot, const void* value) {
#ifdef CPPGC_SLIM_WRITE_BARRIER
if (V8_UNLIKELY(WriteBarrier::IsEnabled()))
WriteBarrier::CombinedWriteBarrierSlow<SlotType>(slot);
#else // !CPPGC_SLIM_WRITE_BARRIER
WriteBarrier::Params params;
switch (WriteBarrier::GetWriteBarrierType(slot, value, params)) {
const WriteBarrier::Type type =
WriteBarrier::GetWriteBarrierType(slot, value, params);
WriteBarrier(type, params, slot, value);
#endif // !CPPGC_SLIM_WRITE_BARRIER
}
template <WriteBarrierSlotType SlotType>
V8_INLINE static void AssigningBarrier(const void* slot, RawPointer storage) {
static_assert(
SlotType == WriteBarrierSlotType::kUncompressed,
"Assigning storages of Member and UncompressedMember is not supported");
#ifdef CPPGC_SLIM_WRITE_BARRIER
if (V8_UNLIKELY(WriteBarrier::IsEnabled()))
WriteBarrier::CombinedWriteBarrierSlow<SlotType>(slot);
#else // !CPPGC_SLIM_WRITE_BARRIER
WriteBarrier::Params params;
const WriteBarrier::Type type =
WriteBarrier::GetWriteBarrierType(slot, storage, params);
WriteBarrier(type, params, slot, storage.Load());
#endif // !CPPGC_SLIM_WRITE_BARRIER
}
#if defined(CPPGC_POINTER_COMPRESSION)
template <WriteBarrierSlotType SlotType>
V8_INLINE static void AssigningBarrier(const void* slot,
CompressedPointer storage) {
static_assert(
SlotType == WriteBarrierSlotType::kCompressed,
"Assigning storages of Member and UncompressedMember is not supported");
#ifdef CPPGC_SLIM_WRITE_BARRIER
if (V8_UNLIKELY(WriteBarrier::IsEnabled()))
WriteBarrier::CombinedWriteBarrierSlow<SlotType>(slot);
#else // !CPPGC_SLIM_WRITE_BARRIER
WriteBarrier::Params params;
const WriteBarrier::Type type =
WriteBarrier::GetWriteBarrierType(slot, storage, params);
WriteBarrier(type, params, slot, storage.Load());
#endif // !CPPGC_SLIM_WRITE_BARRIER
}
#endif // defined(CPPGC_POINTER_COMPRESSION)
private:
V8_INLINE static void WriteBarrier(WriteBarrier::Type type,
const WriteBarrier::Params& params,
const void* slot, const void* value) {
switch (type) {
case WriteBarrier::Type::kGenerational:
WriteBarrier::GenerationalBarrier(params, slot);
WriteBarrier::GenerationalBarrier<
WriteBarrier::GenerationalBarrierType::kPreciseSlot>(params, slot);
break;
case WriteBarrier::Type::kMarking:
WriteBarrier::DijkstraMarkingBarrier(params, value);
@ -44,29 +100,69 @@ struct DijkstraWriteBarrierPolicy {
};
struct NoWriteBarrierPolicy {
static void InitializingBarrier(const void*, const void*) {}
static void AssigningBarrier(const void*, const void*) {}
V8_INLINE static void InitializingBarrier(const void*, const void*) {}
template <WriteBarrierSlotType>
V8_INLINE static void AssigningBarrier(const void*, const void*) {}
template <WriteBarrierSlotType, typename MemberStorage>
V8_INLINE static void AssigningBarrier(const void*, MemberStorage) {}
};
class V8_EXPORT EnabledCheckingPolicy {
class V8_EXPORT SameThreadEnabledCheckingPolicyBase {
protected:
EnabledCheckingPolicy();
void CheckPointer(const void* ptr);
void CheckPointerImpl(const void* ptr, bool points_to_payload,
bool check_off_heap_assignments);
const HeapBase* heap_ = nullptr;
};
template <bool kCheckOffHeapAssignments>
class V8_EXPORT SameThreadEnabledCheckingPolicy
: private SameThreadEnabledCheckingPolicyBase {
protected:
template <typename T>
void CheckPointer(const T* ptr) {
if (!ptr || (kSentinelPointer == ptr)) return;
CheckPointersImplTrampoline<T>::Call(this, ptr);
}
private:
void* impl_;
template <typename T, bool = IsCompleteV<T>>
struct CheckPointersImplTrampoline {
static void Call(SameThreadEnabledCheckingPolicy* policy, const T* ptr) {
policy->CheckPointerImpl(ptr, false, kCheckOffHeapAssignments);
}
};
template <typename T>
struct CheckPointersImplTrampoline<T, true> {
static void Call(SameThreadEnabledCheckingPolicy* policy, const T* ptr) {
policy->CheckPointerImpl(ptr, IsGarbageCollectedTypeV<T>,
kCheckOffHeapAssignments);
}
};
};
class DisabledCheckingPolicy {
protected:
void CheckPointer(const void* raw) {}
V8_INLINE void CheckPointer(const void*) {}
};
#if V8_ENABLE_CHECKS
using DefaultCheckingPolicy = EnabledCheckingPolicy;
#else
using DefaultCheckingPolicy = DisabledCheckingPolicy;
#endif
#ifdef DEBUG
// Off-heap members are not connected to the object graph and thus cannot
// resurrect dead objects.
using DefaultMemberCheckingPolicy =
SameThreadEnabledCheckingPolicy<false /* kCheckOffHeapAssignments*/>;
using DefaultPersistentCheckingPolicy =
SameThreadEnabledCheckingPolicy<true /* kCheckOffHeapAssignments*/>;
#else // !DEBUG
using DefaultMemberCheckingPolicy = DisabledCheckingPolicy;
using DefaultPersistentCheckingPolicy = DisabledCheckingPolicy;
#endif // !DEBUG
// For CT(W)P neither marking information (for value) nor the object-start
// bitmap (for slot) is guaranteed to be present because there's no
// synchronization between heaps after marking.
using DefaultCrossThreadPersistentCheckingPolicy = DisabledCheckingPolicy;
class KeepLocationPolicy {
public:
@ -129,14 +225,15 @@ struct WeakCrossThreadPersistentPolicy {
// Forward declarations setting up the default policies.
template <typename T, typename WeaknessPolicy,
typename LocationPolicy = DefaultLocationPolicy,
typename CheckingPolicy = DisabledCheckingPolicy>
typename CheckingPolicy = DefaultCrossThreadPersistentCheckingPolicy>
class BasicCrossThreadPersistent;
template <typename T, typename WeaknessPolicy,
typename LocationPolicy = DefaultLocationPolicy,
typename CheckingPolicy = DefaultCheckingPolicy>
typename CheckingPolicy = DefaultPersistentCheckingPolicy>
class BasicPersistent;
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy = DefaultCheckingPolicy>
typename CheckingPolicy = DefaultMemberCheckingPolicy,
typename StorageType = DefaultMemberStorage>
class BasicMember;
} // namespace internal

View File

@ -1,30 +0,0 @@
// Copyright 2020 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_CPPGC_INTERNAL_PREFINALIZER_HANDLER_H_
#define INCLUDE_CPPGC_INTERNAL_PREFINALIZER_HANDLER_H_
#include "cppgc/heap.h"
#include "cppgc/liveness-broker.h"
namespace cppgc {
namespace internal {
class V8_EXPORT PreFinalizerRegistrationDispatcher final {
public:
using PreFinalizerCallback = bool (*)(const LivenessBroker&, void*);
struct PreFinalizer {
void* object;
PreFinalizerCallback callback;
bool operator==(const PreFinalizer& other) const;
};
static void RegisterPrefinalizer(PreFinalizer pre_finalizer);
};
} // namespace internal
} // namespace cppgc
#endif // INCLUDE_CPPGC_INTERNAL_PREFINALIZER_HANDLER_H_

View File

@ -5,15 +5,23 @@
#ifndef INCLUDE_CPPGC_INTERNAL_WRITE_BARRIER_H_
#define INCLUDE_CPPGC_INTERNAL_WRITE_BARRIER_H_
#include <cstddef>
#include <cstdint>
#include "cppgc/heap-handle.h"
#include "cppgc/heap-state.h"
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/atomic-entry-flag.h"
#include "cppgc/internal/base-page-handle.h"
#include "cppgc/internal/member-storage.h"
#include "cppgc/platform.h"
#include "cppgc/sentinel-pointer.h"
#include "cppgc/trace-trait.h"
#include "v8config.h" // NOLINT(build/include_directory)
#if defined(CPPGC_CAGED_HEAP)
#include "cppgc/internal/caged-heap-local-data.h"
#include "cppgc/internal/caged-heap.h"
#endif
namespace cppgc {
@ -22,8 +30,11 @@ class HeapHandle;
namespace internal {
#if defined(CPPGC_CAGED_HEAP)
class WriteBarrierTypeForCagedHeapPolicy;
#else // !CPPGC_CAGED_HEAP
class WriteBarrierTypeForNonCagedHeapPolicy;
#endif // !CPPGC_CAGED_HEAP
class V8_EXPORT WriteBarrier final {
public:
@ -33,16 +44,18 @@ class V8_EXPORT WriteBarrier final {
kGenerational,
};
enum class GenerationalBarrierType : uint8_t {
kPreciseSlot,
kPreciseUncompressedSlot,
kImpreciseSlot,
};
struct Params {
HeapHandle* heap = nullptr;
#if V8_ENABLE_CHECKS
Type type = Type::kNone;
#endif // !V8_ENABLE_CHECKS
#if defined(CPPGC_CAGED_HEAP)
uintptr_t start = 0;
CagedHeapLocalData& caged_heap() const {
return *reinterpret_cast<CagedHeapLocalData*>(start);
}
uintptr_t slot_offset = 0;
uintptr_t value_offset = 0;
#endif // CPPGC_CAGED_HEAP
@ -56,14 +69,25 @@ class V8_EXPORT WriteBarrier final {
// Returns the required write barrier for a given `slot` and `value`.
static V8_INLINE Type GetWriteBarrierType(const void* slot, const void* value,
Params& params);
// Returns the required write barrier for a given `slot` and `value`.
template <typename MemberStorage>
static V8_INLINE Type GetWriteBarrierType(const void* slot, MemberStorage,
Params& params);
// Returns the required write barrier for a given `slot`.
template <typename HeapHandleCallback>
static V8_INLINE Type GetWriteBarrierType(const void* slot, Params& params,
HeapHandleCallback callback);
// Returns the required write barrier for a given `value`.
static V8_INLINE Type GetWriteBarrierType(const void* value, Params& params);
template <typename HeapHandleCallback>
static V8_INLINE Type GetWriteBarrierTypeForExternallyReferencedObject(
const void* value, Params& params, HeapHandleCallback callback);
#ifdef CPPGC_SLIM_WRITE_BARRIER
// A write barrier that combines `GenerationalBarrier()` and
// `DijkstraMarkingBarrier()`. We only pass a single parameter here to clobber
// as few registers as possible.
template <WriteBarrierSlotType>
static V8_NOINLINE void V8_PRESERVE_MOST
CombinedWriteBarrierSlow(const void* slot);
#endif // CPPGC_SLIM_WRITE_BARRIER
static V8_INLINE void DijkstraMarkingBarrier(const Params& params,
const void* object);
@ -73,11 +97,13 @@ class V8_EXPORT WriteBarrier final {
static V8_INLINE void SteeleMarkingBarrier(const Params& params,
const void* object);
#if defined(CPPGC_YOUNG_GENERATION)
template <GenerationalBarrierType>
static V8_INLINE void GenerationalBarrier(const Params& params,
const void* slot);
#else // !CPPGC_YOUNG_GENERATION
#else // !CPPGC_YOUNG_GENERATION
template <GenerationalBarrierType>
static V8_INLINE void GenerationalBarrier(const Params& params,
const void* slot) {}
const void* slot){}
#endif // CPPGC_YOUNG_GENERATION
#if V8_ENABLE_CHECKS
@ -86,12 +112,10 @@ class V8_EXPORT WriteBarrier final {
static void CheckParams(Type expected_type, const Params& params) {}
#endif // !V8_ENABLE_CHECKS
// The IncrementalOrConcurrentUpdater class allows cppgc internal to update
// |incremental_or_concurrent_marking_flag_|.
class IncrementalOrConcurrentMarkingFlagUpdater;
static bool IsAnyIncrementalOrConcurrentMarking() {
return incremental_or_concurrent_marking_flag_.MightBeEntered();
}
// The FlagUpdater class allows cppgc internal to update
// |write_barrier_enabled_|.
class FlagUpdater;
static bool IsEnabled() { return write_barrier_enabled_.MightBeEntered(); }
private:
WriteBarrier() = delete;
@ -115,16 +139,24 @@ class V8_EXPORT WriteBarrier final {
#if defined(CPPGC_YOUNG_GENERATION)
static CagedHeapLocalData& GetLocalData(HeapHandle&);
static void GenerationalBarrierSlow(const CagedHeapLocalData& local_data,
const AgeTable& ageTable,
const void* slot, uintptr_t value_offset);
const AgeTable& age_table,
const void* slot, uintptr_t value_offset,
HeapHandle* heap_handle);
static void GenerationalBarrierForUncompressedSlotSlow(
const CagedHeapLocalData& local_data, const AgeTable& age_table,
const void* slot, uintptr_t value_offset, HeapHandle* heap_handle);
static void GenerationalBarrierForSourceObjectSlow(
const CagedHeapLocalData& local_data, const void* object,
HeapHandle* heap_handle);
#endif // CPPGC_YOUNG_GENERATION
static AtomicEntryFlag incremental_or_concurrent_marking_flag_;
static AtomicEntryFlag write_barrier_enabled_;
};
template <WriteBarrier::Type type>
V8_INLINE WriteBarrier::Type SetAndReturnType(WriteBarrier::Params& params) {
if (type == WriteBarrier::Type::kNone) return WriteBarrier::Type::kNone;
if constexpr (type == WriteBarrier::Type::kNone)
return WriteBarrier::Type::kNone;
#if V8_ENABLE_CHECKS
params.type = type;
#endif // !V8_ENABLE_CHECKS
@ -141,67 +173,99 @@ class V8_EXPORT WriteBarrierTypeForCagedHeapPolicy final {
return ValueModeDispatch<value_mode>::Get(slot, value, params, callback);
}
template <typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type GetForExternallyReferenced(
const void* value, WriteBarrier::Params& params, HeapHandleCallback) {
if (!TryGetCagedHeap(value, value, params)) {
return WriteBarrier::Type::kNone;
}
if (V8_UNLIKELY(params.caged_heap().is_incremental_marking_in_progress)) {
return SetAndReturnType<WriteBarrier::Type::kMarking>(params);
}
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
template <WriteBarrier::ValueMode value_mode, typename HeapHandleCallback,
typename MemberStorage>
static V8_INLINE WriteBarrier::Type Get(const void* slot, MemberStorage value,
WriteBarrier::Params& params,
HeapHandleCallback callback) {
return ValueModeDispatch<value_mode>::Get(slot, value, params, callback);
}
template <WriteBarrier::ValueMode value_mode, typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type Get(const void* value,
WriteBarrier::Params& params,
HeapHandleCallback callback) {
return GetNoSlot(value, params, callback);
}
private:
WriteBarrierTypeForCagedHeapPolicy() = delete;
template <WriteBarrier::ValueMode value_mode>
struct ValueModeDispatch;
template <typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type GetNoSlot(const void* value,
WriteBarrier::Params& params,
HeapHandleCallback) {
const bool within_cage = CagedHeapBase::IsWithinCage(value);
if (!within_cage) return WriteBarrier::Type::kNone;
static V8_INLINE bool TryGetCagedHeap(const void* slot, const void* value,
WriteBarrier::Params& params) {
params.start = reinterpret_cast<uintptr_t>(value) &
~(api_constants::kCagedHeapReservationAlignment - 1);
const uintptr_t slot_offset =
reinterpret_cast<uintptr_t>(slot) - params.start;
if (slot_offset > api_constants::kCagedHeapReservationSize) {
// Check if slot is on stack or value is sentinel or nullptr. This relies
// on the fact that kSentinelPointer is encoded as 0x1.
return false;
// We know that |value| points either within the normal page or to the
// beginning of large-page, so extract the page header by bitmasking.
BasePageHandle* page =
BasePageHandle::FromPayload(const_cast<void*>(value));
HeapHandle& heap_handle = page->heap_handle();
if (V8_UNLIKELY(heap_handle.is_incremental_marking_in_progress())) {
return SetAndReturnType<WriteBarrier::Type::kMarking>(params);
}
return true;
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
}
// Returns whether marking is in progress. If marking is not in progress
// sets the start of the cage accordingly.
//
// TODO(chromium:1056170): Create fast path on API.
static bool IsMarking(const HeapHandle&, WriteBarrier::Params&);
template <WriteBarrier::ValueMode value_mode>
struct ValueModeDispatch;
};
template <>
struct WriteBarrierTypeForCagedHeapPolicy::ValueModeDispatch<
WriteBarrier::ValueMode::kValuePresent> {
template <typename HeapHandleCallback, typename MemberStorage>
static V8_INLINE WriteBarrier::Type Get(const void* slot,
MemberStorage storage,
WriteBarrier::Params& params,
HeapHandleCallback) {
if (V8_LIKELY(!WriteBarrier::IsEnabled()))
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
return BarrierEnabledGet(slot, storage.Load(), params);
}
template <typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type Get(const void* slot, const void* value,
WriteBarrier::Params& params,
HeapHandleCallback) {
bool within_cage = TryGetCagedHeap(slot, value, params);
if (!within_cage) {
return WriteBarrier::Type::kNone;
}
if (V8_LIKELY(!params.caged_heap().is_incremental_marking_in_progress)) {
if (V8_LIKELY(!WriteBarrier::IsEnabled()))
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
return BarrierEnabledGet(slot, value, params);
}
private:
static V8_INLINE WriteBarrier::Type BarrierEnabledGet(
const void* slot, const void* value, WriteBarrier::Params& params) {
const bool within_cage = CagedHeapBase::AreWithinCage(slot, value);
if (!within_cage) return WriteBarrier::Type::kNone;
// We know that |value| points either within the normal page or to the
// beginning of a large page, so extract the page header by bitmasking.
BasePageHandle* page =
BasePageHandle::FromPayload(const_cast<void*>(value));
HeapHandle& heap_handle = page->heap_handle();
if (V8_LIKELY(!heap_handle.is_incremental_marking_in_progress())) {
#if defined(CPPGC_YOUNG_GENERATION)
params.heap = reinterpret_cast<HeapHandle*>(params.start);
params.slot_offset = reinterpret_cast<uintptr_t>(slot) - params.start;
params.value_offset = reinterpret_cast<uintptr_t>(value) - params.start;
if (!heap_handle.is_young_generation_enabled())
return WriteBarrier::Type::kNone;
params.heap = &heap_handle;
params.slot_offset = CagedHeapBase::OffsetFromAddress(slot);
params.value_offset = CagedHeapBase::OffsetFromAddress(value);
return SetAndReturnType<WriteBarrier::Type::kGenerational>(params);
#else // !CPPGC_YOUNG_GENERATION
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
#endif // !CPPGC_YOUNG_GENERATION
}
params.heap = reinterpret_cast<HeapHandle*>(params.start);
// Use marking barrier.
params.heap = &heap_handle;
return SetAndReturnType<WriteBarrier::Type::kMarking>(params);
}
};
@@ -213,28 +277,28 @@ struct WriteBarrierTypeForCagedHeapPolicy::ValueModeDispatch<
static V8_INLINE WriteBarrier::Type Get(const void* slot, const void*,
WriteBarrier::Params& params,
HeapHandleCallback callback) {
#if defined(CPPGC_YOUNG_GENERATION)
if (V8_LIKELY(!WriteBarrier::IsEnabled()))
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
HeapHandle& handle = callback();
if (V8_LIKELY(!IsMarking(handle, params))) {
// params.start is populated by IsMarking().
#if defined(CPPGC_YOUNG_GENERATION)
if (V8_LIKELY(!handle.is_incremental_marking_in_progress())) {
if (!handle.is_young_generation_enabled()) {
return WriteBarrier::Type::kNone;
}
params.heap = &handle;
params.slot_offset = reinterpret_cast<uintptr_t>(slot) - params.start;
// params.value_offset stays 0.
if (params.slot_offset > api_constants::kCagedHeapReservationSize) {
// Check if slot is on stack.
// Check if slot is on stack.
if (V8_UNLIKELY(!CagedHeapBase::IsWithinCage(slot))) {
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
}
params.slot_offset = CagedHeapBase::OffsetFromAddress(slot);
return SetAndReturnType<WriteBarrier::Type::kGenerational>(params);
}
#else // !CPPGC_YOUNG_GENERATION
if (V8_LIKELY(!WriteBarrier::IsAnyIncrementalOrConcurrentMarking())) {
#else // !defined(CPPGC_YOUNG_GENERATION)
if (V8_UNLIKELY(!handle.is_incremental_marking_in_progress())) {
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
}
HeapHandle& handle = callback();
if (V8_UNLIKELY(!subtle::HeapState::IsMarking(handle))) {
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
}
#endif // !CPPGC_YOUNG_GENERATION
#endif // !defined(CPPGC_YOUNG_GENERATION)
params.heap = &handle;
return SetAndReturnType<WriteBarrier::Type::kMarking>(params);
}
@@ -251,10 +315,18 @@ class V8_EXPORT WriteBarrierTypeForNonCagedHeapPolicy final {
return ValueModeDispatch<value_mode>::Get(slot, value, params, callback);
}
template <typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type GetForExternallyReferenced(
const void* value, WriteBarrier::Params& params,
HeapHandleCallback callback) {
template <WriteBarrier::ValueMode value_mode, typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type Get(const void* slot, RawPointer value,
WriteBarrier::Params& params,
HeapHandleCallback callback) {
return ValueModeDispatch<value_mode>::Get(slot, value.Load(), params,
callback);
}
template <WriteBarrier::ValueMode value_mode, typename HeapHandleCallback>
static V8_INLINE WriteBarrier::Type Get(const void* value,
WriteBarrier::Params& params,
HeapHandleCallback callback) {
// The slot will never be used in `Get()` below.
return Get<WriteBarrier::ValueMode::kValuePresent>(nullptr, value, params,
callback);
@@ -264,11 +336,6 @@ class V8_EXPORT WriteBarrierTypeForNonCagedHeapPolicy final {
template <WriteBarrier::ValueMode value_mode>
struct ValueModeDispatch;
// TODO(chromium:1056170): Create fast path on API.
static bool IsMarking(const void*, HeapHandle**);
// TODO(chromium:1056170): Create fast path on API.
static bool IsMarking(HeapHandle&);
WriteBarrierTypeForNonCagedHeapPolicy() = delete;
};
@@ -281,9 +348,18 @@ struct WriteBarrierTypeForNonCagedHeapPolicy::ValueModeDispatch<
HeapHandleCallback callback) {
// The following check covers nullptr as well as sentinel pointer.
if (object <= static_cast<void*>(kSentinelPointer)) {
return WriteBarrier::Type::kNone;
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
}
if (IsMarking(object, &params.heap)) {
if (V8_LIKELY(!WriteBarrier::IsEnabled())) {
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
}
// We know that |object| is within the normal page or at the beginning of a
// large page, so extract the page header by bitmasking.
BasePageHandle* page =
BasePageHandle::FromPayload(const_cast<void*>(object));
HeapHandle& heap_handle = page->heap_handle();
if (V8_LIKELY(heap_handle.is_incremental_marking_in_progress())) {
return SetAndReturnType<WriteBarrier::Type::kMarking>(params);
}
return SetAndReturnType<WriteBarrier::Type::kNone>(params);
@@ -297,9 +373,9 @@ struct WriteBarrierTypeForNonCagedHeapPolicy::ValueModeDispatch<
static V8_INLINE WriteBarrier::Type Get(const void*, const void*,
WriteBarrier::Params& params,
HeapHandleCallback callback) {
if (V8_UNLIKELY(WriteBarrier::IsAnyIncrementalOrConcurrentMarking())) {
if (V8_UNLIKELY(WriteBarrier::IsEnabled())) {
HeapHandle& handle = callback();
if (IsMarking(handle)) {
if (V8_LIKELY(handle.is_incremental_marking_in_progress())) {
params.heap = &handle;
return SetAndReturnType<WriteBarrier::Type::kMarking>(params);
}
@@ -315,6 +391,14 @@ WriteBarrier::Type WriteBarrier::GetWriteBarrierType(
params, []() {});
}
// static
template <typename MemberStorage>
WriteBarrier::Type WriteBarrier::GetWriteBarrierType(
const void* slot, MemberStorage value, WriteBarrier::Params& params) {
return WriteBarrierTypePolicy::Get<ValueMode::kValuePresent>(slot, value,
params, []() {});
}
// static
template <typename HeapHandleCallback>
WriteBarrier::Type WriteBarrier::GetWriteBarrierType(
@@ -325,12 +409,10 @@ WriteBarrier::Type WriteBarrier::GetWriteBarrierType(
}
// static
template <typename HeapHandleCallback>
WriteBarrier::Type
WriteBarrier::GetWriteBarrierTypeForExternallyReferencedObject(
const void* value, Params& params, HeapHandleCallback callback) {
return WriteBarrierTypePolicy::GetForExternallyReferenced(value, params,
callback);
WriteBarrier::Type WriteBarrier::GetWriteBarrierType(
const void* value, WriteBarrier::Params& params) {
return WriteBarrierTypePolicy::Get<ValueMode::kValuePresent>(value, params,
[]() {});
}
// static
@@ -369,17 +451,32 @@ void WriteBarrier::SteeleMarkingBarrier(const Params& params,
}
#if defined(CPPGC_YOUNG_GENERATION)
// static
template <WriteBarrier::GenerationalBarrierType type>
void WriteBarrier::GenerationalBarrier(const Params& params, const void* slot) {
CheckParams(Type::kGenerational, params);
const CagedHeapLocalData& local_data = params.caged_heap();
const CagedHeapLocalData& local_data = CagedHeapLocalData::Get();
const AgeTable& age_table = local_data.age_table;
// Bail out if the slot is in young generation.
if (V8_LIKELY(age_table[params.slot_offset] == AgeTable::Age::kYoung)) return;
// Bail out if the slot (precise or imprecise) is in young generation.
if (V8_LIKELY(age_table.GetAge(params.slot_offset) == AgeTable::Age::kYoung))
return;
GenerationalBarrierSlow(local_data, age_table, slot, params.value_offset);
// Dispatch between different types of barriers.
// TODO(chromium:1029379): Consider reloading local_data in the slow path to
// reduce register pressure.
if constexpr (type == GenerationalBarrierType::kPreciseSlot) {
GenerationalBarrierSlow(local_data, age_table, slot, params.value_offset,
params.heap);
} else if constexpr (type ==
GenerationalBarrierType::kPreciseUncompressedSlot) {
GenerationalBarrierForUncompressedSlotSlow(
local_data, age_table, slot, params.value_offset, params.heap);
} else {
GenerationalBarrierForSourceObjectSlow(local_data, slot, params.heap);
}
}
#endif // !CPPGC_YOUNG_GENERATION


@@ -7,6 +7,7 @@
#include "cppgc/heap.h"
#include "cppgc/member.h"
#include "cppgc/sentinel-pointer.h"
#include "cppgc/trace-trait.h"
#include "v8config.h" // NOLINT(build/include_directory)
@@ -44,21 +45,24 @@ class V8_EXPORT LivenessBroker final {
public:
template <typename T>
bool IsHeapObjectAlive(const T* object) const {
return object &&
// - nullptr objects are considered alive to allow weakness to be used from
// stack while running into a conservative GC. Treating nullptr as dead
// would mean that e.g. custom collections could not be strongified on
// stack.
// - Sentinel pointers are also preserved in weakness and not cleared.
return !object || object == kSentinelPointer ||
IsHeapObjectAliveImpl(
TraceTrait<T>::GetTraceDescriptor(object).base_object_payload);
}
template <typename T>
bool IsHeapObjectAlive(const WeakMember<T>& weak_member) const {
return (weak_member != kSentinelPointer) &&
IsHeapObjectAlive<T>(weak_member.Get());
return IsHeapObjectAlive<T>(weak_member.Get());
}
template <typename T>
bool IsHeapObjectAlive(const UntracedMember<T>& untraced_member) const {
return (untraced_member != kSentinelPointer) &&
IsHeapObjectAlive<T>(untraced_member.Get());
return IsHeapObjectAlive<T>(untraced_member.Get());
}
private:


@@ -5,13 +5,16 @@
#ifndef INCLUDE_CPPGC_MACROS_H_
#define INCLUDE_CPPGC_MACROS_H_
#include <stddef.h>
#include <cstddef>
#include "cppgc/internal/compiler-specific.h"
namespace cppgc {
// Use if the object is only stack allocated.
// Use CPPGC_STACK_ALLOCATED if the object is only stack allocated.
// Add the CPPGC_STACK_ALLOCATED_IGNORE annotation on a case-by-case basis when
// enforcement of CPPGC_STACK_ALLOCATED should be suppressed.
#if defined(__clang__)
#define CPPGC_STACK_ALLOCATED() \
public: \
using IsStackAllocatedTypeMarker CPPGC_UNUSED = int; \
@@ -20,6 +23,12 @@ namespace cppgc {
void* operator new(size_t) = delete; \
void* operator new(size_t, void*) = delete; \
static_assert(true, "Force semicolon.")
#define CPPGC_STACK_ALLOCATED_IGNORE(bug_or_reason) \
__attribute__((annotate("stack_allocated_ignore")))
#else // !defined(__clang__)
#define CPPGC_STACK_ALLOCATED() static_assert(true, "Force semicolon.")
#define CPPGC_STACK_ALLOCATED_IGNORE(bug_or_reason)
#endif // !defined(__clang__)
} // namespace cppgc


@@ -9,6 +9,8 @@
#include <cstddef>
#include <type_traits>
#include "cppgc/internal/api-constants.h"
#include "cppgc/internal/member-storage.h"
#include "cppgc/internal/pointer-policies.h"
#include "cppgc/sentinel-pointer.h"
#include "cppgc/type-traits.h"
@@ -16,221 +18,536 @@
namespace cppgc {
namespace subtle {
class HeapConsistency;
} // namespace subtle
class Visitor;
namespace internal {
// MemberBase always refers to the object as const object and defers to
// BasicMember on casting to the right type as needed.
class MemberBase {
template <typename StorageType>
class V8_TRIVIAL_ABI MemberBase {
public:
using RawStorage = StorageType;
protected:
MemberBase() = default;
explicit MemberBase(const void* value) : raw_(value) {}
struct AtomicInitializerTag {};
const void** GetRawSlot() const { return &raw_; }
const void* GetRaw() const { return raw_; }
void SetRaw(void* value) { raw_ = value; }
const void* GetRawAtomic() const {
return reinterpret_cast<const std::atomic<const void*>*>(&raw_)->load(
std::memory_order_relaxed);
}
void SetRawAtomic(const void* value) {
reinterpret_cast<std::atomic<const void*>*>(&raw_)->store(
value, std::memory_order_relaxed);
V8_INLINE MemberBase() = default;
V8_INLINE explicit MemberBase(const void* value) : raw_(value) {}
V8_INLINE MemberBase(const void* value, AtomicInitializerTag) {
SetRawAtomic(value);
}
void ClearFromGC() const { raw_ = nullptr; }
V8_INLINE explicit MemberBase(RawStorage raw) : raw_(raw) {}
V8_INLINE explicit MemberBase(std::nullptr_t) : raw_(nullptr) {}
V8_INLINE explicit MemberBase(SentinelPointer s) : raw_(s) {}
V8_INLINE const void** GetRawSlot() const {
return reinterpret_cast<const void**>(const_cast<MemberBase*>(this));
}
V8_INLINE const void* GetRaw() const { return raw_.Load(); }
V8_INLINE void SetRaw(void* value) { raw_.Store(value); }
V8_INLINE const void* GetRawAtomic() const { return raw_.LoadAtomic(); }
V8_INLINE void SetRawAtomic(const void* value) { raw_.StoreAtomic(value); }
V8_INLINE RawStorage GetRawStorage() const { return raw_; }
V8_INLINE void SetRawStorageAtomic(RawStorage other) {
reinterpret_cast<std::atomic<RawStorage>&>(raw_).store(
other, std::memory_order_relaxed);
}
V8_INLINE bool IsCleared() const { return raw_.IsCleared(); }
V8_INLINE void ClearFromGC() const { raw_.Clear(); }
private:
mutable const void* raw_ = nullptr;
friend class MemberDebugHelper;
mutable RawStorage raw_;
};
// The basic class from which all Member classes are 'generated'.
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy>
class BasicMember final : private MemberBase, private CheckingPolicy {
typename CheckingPolicy, typename StorageType>
class V8_TRIVIAL_ABI BasicMember final : private MemberBase<StorageType>,
private CheckingPolicy {
using Base = MemberBase<StorageType>;
public:
using PointeeType = T;
using RawStorage = typename Base::RawStorage;
constexpr BasicMember() = default;
constexpr BasicMember(std::nullptr_t) {} // NOLINT
BasicMember(SentinelPointer s) : MemberBase(s) {} // NOLINT
BasicMember(T* raw) : MemberBase(raw) { // NOLINT
InitializingWriteBarrier();
V8_INLINE constexpr BasicMember() = default;
V8_INLINE constexpr BasicMember(std::nullptr_t) {} // NOLINT
V8_INLINE BasicMember(SentinelPointer s) : Base(s) {} // NOLINT
V8_INLINE BasicMember(T* raw) : Base(raw) { // NOLINT
InitializingWriteBarrier(raw);
this->CheckPointer(Get());
}
BasicMember(T& raw) : BasicMember(&raw) {} // NOLINT
V8_INLINE BasicMember(T& raw) // NOLINT
: BasicMember(&raw) {}
// Atomic ctor. Using the AtomicInitializerTag forces BasicMember to
// initialize using atomic assignments. This is required for preventing
// data races with concurrent marking.
using AtomicInitializerTag = typename Base::AtomicInitializerTag;
V8_INLINE BasicMember(std::nullptr_t, AtomicInitializerTag atomic)
: Base(nullptr, atomic) {}
V8_INLINE BasicMember(SentinelPointer s, AtomicInitializerTag atomic)
: Base(s, atomic) {}
V8_INLINE BasicMember(T* raw, AtomicInitializerTag atomic)
: Base(raw, atomic) {
InitializingWriteBarrier(raw);
this->CheckPointer(Get());
}
V8_INLINE BasicMember(T& raw, AtomicInitializerTag atomic)
: BasicMember(&raw, atomic) {}
// Copy ctor.
BasicMember(const BasicMember& other) : BasicMember(other.Get()) {}
// Allow heterogeneous construction.
V8_INLINE BasicMember(const BasicMember& other)
: BasicMember(other.GetRawStorage()) {}
// Heterogeneous copy constructors. When the source pointer has a different
// type, perform a compress-decompress round, because the source pointer may
// need to be adjusted.
template <typename U, typename OtherBarrierPolicy, typename OtherWeaknessTag,
typename OtherCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicMember( // NOLINT
std::enable_if_t<internal::IsDecayedSameV<T, U>>* = nullptr>
V8_INLINE BasicMember( // NOLINT
const BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy>& other)
OtherCheckingPolicy, StorageType>& other)
: BasicMember(other.GetRawStorage()) {}
template <typename U, typename OtherBarrierPolicy, typename OtherWeaknessTag,
typename OtherCheckingPolicy,
std::enable_if_t<internal::IsStrictlyBaseOfV<T, U>>* = nullptr>
V8_INLINE BasicMember( // NOLINT
const BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy, StorageType>& other)
: BasicMember(other.Get()) {}
// Move ctor.
BasicMember(BasicMember&& other) noexcept : BasicMember(other.Get()) {
V8_INLINE BasicMember(BasicMember&& other) noexcept
: BasicMember(other.GetRawStorage()) {
other.Clear();
}
// Allow heterogeneous move construction.
// Heterogeneous move constructors. When the source pointer has a different
// type, perform a compress-decompress round, because the source pointer may
// need to be adjusted.
template <typename U, typename OtherBarrierPolicy, typename OtherWeaknessTag,
typename OtherCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicMember( // NOLINT
BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy>&& other) noexcept
std::enable_if_t<internal::IsDecayedSameV<T, U>>* = nullptr>
V8_INLINE BasicMember(
BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy, OtherCheckingPolicy,
StorageType>&& other) noexcept
: BasicMember(other.GetRawStorage()) {
other.Clear();
}
template <typename U, typename OtherBarrierPolicy, typename OtherWeaknessTag,
typename OtherCheckingPolicy,
std::enable_if_t<internal::IsStrictlyBaseOfV<T, U>>* = nullptr>
V8_INLINE BasicMember(
BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy, OtherCheckingPolicy,
StorageType>&& other) noexcept
: BasicMember(other.Get()) {
other.Clear();
}
// Construction from Persistent.
template <typename U, typename PersistentWeaknessPolicy,
typename PersistentLocationPolicy,
typename PersistentCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicMember( // NOLINT
const BasicPersistent<U, PersistentWeaknessPolicy,
PersistentLocationPolicy, PersistentCheckingPolicy>&
p)
V8_INLINE BasicMember(const BasicPersistent<U, PersistentWeaknessPolicy,
PersistentLocationPolicy,
PersistentCheckingPolicy>& p)
: BasicMember(p.Get()) {}
// Copy assignment.
BasicMember& operator=(const BasicMember& other) {
return operator=(other.Get());
V8_INLINE BasicMember& operator=(const BasicMember& other) {
return operator=(other.GetRawStorage());
}
// Allow heterogeneous copy assignment.
// Heterogeneous copy assignment. When the source pointer has a different
// type, perform a compress-decompress round, because the source pointer may
// need to be adjusted.
template <typename U, typename OtherWeaknessTag, typename OtherBarrierPolicy,
typename OtherCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicMember& operator=(
typename OtherCheckingPolicy>
V8_INLINE BasicMember& operator=(
const BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy>& other) {
return operator=(other.Get());
OtherCheckingPolicy, StorageType>& other) {
if constexpr (internal::IsDecayedSameV<T, U>) {
return operator=(other.GetRawStorage());
} else {
static_assert(internal::IsStrictlyBaseOfV<T, U>);
return operator=(other.Get());
}
}
// Move assignment.
BasicMember& operator=(BasicMember&& other) noexcept {
operator=(other.Get());
V8_INLINE BasicMember& operator=(BasicMember&& other) noexcept {
operator=(other.GetRawStorage());
other.Clear();
return *this;
}
// Heterogeneous move assignment.
// Heterogeneous move assignment. When the source pointer has a different
// type, perform a compress-decompress round, because the source pointer may
// need to be adjusted.
template <typename U, typename OtherWeaknessTag, typename OtherBarrierPolicy,
typename OtherCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicMember& operator=(BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy>&& other) noexcept {
operator=(other.Get());
typename OtherCheckingPolicy>
V8_INLINE BasicMember& operator=(
BasicMember<U, OtherWeaknessTag, OtherBarrierPolicy, OtherCheckingPolicy,
StorageType>&& other) noexcept {
if constexpr (internal::IsDecayedSameV<T, U>) {
operator=(other.GetRawStorage());
} else {
static_assert(internal::IsStrictlyBaseOfV<T, U>);
operator=(other.Get());
}
other.Clear();
return *this;
}
// Assignment from Persistent.
template <typename U, typename PersistentWeaknessPolicy,
typename PersistentLocationPolicy,
typename PersistentCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicMember& operator=(
V8_INLINE BasicMember& operator=(
const BasicPersistent<U, PersistentWeaknessPolicy,
PersistentLocationPolicy, PersistentCheckingPolicy>&
other) {
return operator=(other.Get());
}
BasicMember& operator=(T* other) {
SetRawAtomic(other);
AssigningWriteBarrier();
V8_INLINE BasicMember& operator=(T* other) {
Base::SetRawAtomic(other);
AssigningWriteBarrier(other);
this->CheckPointer(Get());
return *this;
}
BasicMember& operator=(std::nullptr_t) {
V8_INLINE BasicMember& operator=(std::nullptr_t) {
Clear();
return *this;
}
BasicMember& operator=(SentinelPointer s) {
SetRawAtomic(s);
V8_INLINE BasicMember& operator=(SentinelPointer s) {
Base::SetRawAtomic(s);
return *this;
}
template <typename OtherWeaknessTag, typename OtherBarrierPolicy,
typename OtherCheckingPolicy>
void Swap(BasicMember<T, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy>& other) {
T* tmp = Get();
V8_INLINE void Swap(BasicMember<T, OtherWeaknessTag, OtherBarrierPolicy,
OtherCheckingPolicy, StorageType>& other) {
auto tmp = GetRawStorage();
*this = other;
other = tmp;
}
explicit operator bool() const { return Get(); }
operator T*() const { return Get(); } // NOLINT
T* operator->() const { return Get(); }
T& operator*() const { return *Get(); }
V8_INLINE explicit operator bool() const { return !Base::IsCleared(); }
V8_INLINE operator T*() const { return Get(); }
V8_INLINE T* operator->() const { return Get(); }
V8_INLINE T& operator*() const { return *Get(); }
// CFI cast exemption to allow passing SentinelPointer through T* and support
// heterogeneous assignments between different Member and Persistent handles
// based on their actual types.
V8_CLANG_NO_SANITIZE("cfi-unrelated-cast") T* Get() const {
V8_INLINE V8_CLANG_NO_SANITIZE("cfi-unrelated-cast") T* Get() const {
// Executed by the mutator, hence non atomic load.
//
// The const_cast below removes the constness from MemberBase storage. The
// following static_cast re-adds any constness if specified through the
// user-visible template parameter T.
return static_cast<T*>(const_cast<void*>(MemberBase::GetRaw()));
return static_cast<T*>(const_cast<void*>(Base::GetRaw()));
}
void Clear() { SetRawAtomic(nullptr); }
V8_INLINE void Clear() {
Base::SetRawStorageAtomic(RawStorage{});
}
T* Release() {
V8_INLINE T* Release() {
T* result = Get();
Clear();
return result;
}
const T** GetSlotForTesting() const {
return reinterpret_cast<const T**>(GetRawSlot());
V8_INLINE const T** GetSlotForTesting() const {
return reinterpret_cast<const T**>(Base::GetRawSlot());
}
V8_INLINE RawStorage GetRawStorage() const {
return Base::GetRawStorage();
}
private:
const T* GetRawAtomic() const {
return static_cast<const T*>(MemberBase::GetRawAtomic());
V8_INLINE explicit BasicMember(RawStorage raw) : Base(raw) {
InitializingWriteBarrier(Get());
this->CheckPointer(Get());
}
void InitializingWriteBarrier() const {
WriteBarrierPolicy::InitializingBarrier(GetRawSlot(), GetRaw());
}
void AssigningWriteBarrier() const {
WriteBarrierPolicy::AssigningBarrier(GetRawSlot(), GetRaw());
V8_INLINE BasicMember& operator=(RawStorage other) {
Base::SetRawStorageAtomic(other);
AssigningWriteBarrier();
this->CheckPointer(Get());
return *this;
}
void ClearFromGC() const { MemberBase::ClearFromGC(); }
V8_INLINE const T* GetRawAtomic() const {
return static_cast<const T*>(Base::GetRawAtomic());
}
V8_INLINE void InitializingWriteBarrier(T* value) const {
WriteBarrierPolicy::InitializingBarrier(Base::GetRawSlot(), value);
}
V8_INLINE void AssigningWriteBarrier(T* value) const {
WriteBarrierPolicy::template AssigningBarrier<
StorageType::kWriteBarrierSlotType>(Base::GetRawSlot(), value);
}
V8_INLINE void AssigningWriteBarrier() const {
WriteBarrierPolicy::template AssigningBarrier<
StorageType::kWriteBarrierSlotType>(Base::GetRawSlot(),
Base::GetRawStorage());
}
V8_INLINE void ClearFromGC() const { Base::ClearFromGC(); }
V8_INLINE T* GetFromGC() const { return Get(); }
friend class cppgc::subtle::HeapConsistency;
friend class cppgc::Visitor;
template <typename U>
friend struct cppgc::TraceTrait;
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename StorageType1>
friend class BasicMember;
};
// Member equality operators.
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename T2, typename WeaknessTag2,
typename WriteBarrierPolicy2, typename CheckingPolicy2>
bool operator==(
BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1> member1,
BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2>
member2) {
return member1.Get() == member2.Get();
typename WriteBarrierPolicy2, typename CheckingPolicy2,
typename StorageType>
V8_INLINE bool operator==(
const BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1,
StorageType>& member1,
const BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2,
StorageType>& member2) {
if constexpr (internal::IsDecayedSameV<T1, T2>) {
// Check compressed pointers if types are the same.
return member1.GetRawStorage() == member2.GetRawStorage();
} else {
static_assert(internal::IsStrictlyBaseOfV<T1, T2> ||
internal::IsStrictlyBaseOfV<T2, T1>);
// Otherwise, check decompressed pointers.
return member1.Get() == member2.Get();
}
}
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename T2, typename WeaknessTag2,
typename WriteBarrierPolicy2, typename CheckingPolicy2>
bool operator!=(
BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1> member1,
BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2>
member2) {
typename WriteBarrierPolicy2, typename CheckingPolicy2,
typename StorageType>
V8_INLINE bool operator!=(
const BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1,
StorageType>& member1,
const BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2,
StorageType>& member2) {
return !(member1 == member2);
}
template <typename T, typename WriteBarrierPolicy, typename CheckingPolicy>
struct IsWeak<
internal::BasicMember<T, WeakMemberTag, WriteBarrierPolicy, CheckingPolicy>>
// Equality with raw pointers.
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType, typename U>
V8_INLINE bool operator==(
const BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>& member,
U* raw) {
// Never allow comparison with erased pointers.
static_assert(!internal::IsDecayedSameV<void, U>);
if constexpr (internal::IsDecayedSameV<T, U>) {
// Check compressed pointers if types are the same.
return member.GetRawStorage() == StorageType(raw);
} else if constexpr (internal::IsStrictlyBaseOfV<T, U>) {
// Cast the raw pointer to T, which may adjust the pointer.
return member.GetRawStorage() == StorageType(static_cast<T*>(raw));
} else {
// Otherwise, compare the decompressed pointers.
return member.Get() == raw;
}
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType, typename U>
V8_INLINE bool operator!=(
const BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>& member,
U* raw) {
return !(member == raw);
}
template <typename T, typename U, typename WeaknessTag,
typename WriteBarrierPolicy, typename CheckingPolicy,
typename StorageType>
V8_INLINE bool operator==(
T* raw, const BasicMember<U, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& member) {
return member == raw;
}
template <typename T, typename U, typename WeaknessTag,
typename WriteBarrierPolicy, typename CheckingPolicy,
typename StorageType>
V8_INLINE bool operator!=(
T* raw, const BasicMember<U, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& member) {
return !(raw == member);
}
// Equality with sentinel.
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator==(
const BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>& member,
SentinelPointer) {
return member.GetRawStorage().IsSentinel();
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator!=(
const BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>& member,
SentinelPointer s) {
return !(member == s);
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator==(
SentinelPointer s, const BasicMember<T, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& member) {
return member == s;
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator!=(
SentinelPointer s, const BasicMember<T, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& member) {
return !(s == member);
}
// Equality with nullptr.
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator==(
const BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>& member,
std::nullptr_t) {
return !static_cast<bool>(member);
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator!=(
const BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>& member,
std::nullptr_t n) {
return !(member == n);
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator==(
std::nullptr_t n, const BasicMember<T, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& member) {
return member == n;
}
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy, typename StorageType>
V8_INLINE bool operator!=(
std::nullptr_t n, const BasicMember<T, WeaknessTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>& member) {
return !(n == member);
}
// Relational operators.
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename T2, typename WeaknessTag2,
typename WriteBarrierPolicy2, typename CheckingPolicy2,
typename StorageType>
V8_INLINE bool operator<(
const BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1,
StorageType>& member1,
const BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2,
StorageType>& member2) {
static_assert(
internal::IsDecayedSameV<T1, T2>,
"Comparison works only for same pointer type modulo cv-qualifiers");
return member1.GetRawStorage() < member2.GetRawStorage();
}
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename T2, typename WeaknessTag2,
typename WriteBarrierPolicy2, typename CheckingPolicy2,
typename StorageType>
V8_INLINE bool operator<=(
const BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1,
StorageType>& member1,
const BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2,
StorageType>& member2) {
static_assert(
internal::IsDecayedSameV<T1, T2>,
"Comparison works only for same pointer type modulo cv-qualifiers");
return member1.GetRawStorage() <= member2.GetRawStorage();
}
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename T2, typename WeaknessTag2,
typename WriteBarrierPolicy2, typename CheckingPolicy2,
typename StorageType>
V8_INLINE bool operator>(
const BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1,
StorageType>& member1,
const BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2,
StorageType>& member2) {
static_assert(
internal::IsDecayedSameV<T1, T2>,
"Comparison works only for same pointer type modulo cv-qualifiers");
return member1.GetRawStorage() > member2.GetRawStorage();
}
template <typename T1, typename WeaknessTag1, typename WriteBarrierPolicy1,
typename CheckingPolicy1, typename T2, typename WeaknessTag2,
typename WriteBarrierPolicy2, typename CheckingPolicy2,
typename StorageType>
V8_INLINE bool operator>=(
const BasicMember<T1, WeaknessTag1, WriteBarrierPolicy1, CheckingPolicy1,
StorageType>& member1,
const BasicMember<T2, WeaknessTag2, WriteBarrierPolicy2, CheckingPolicy2,
StorageType>& member2) {
static_assert(
internal::IsDecayedSameV<T1, T2>,
"Comparison works only for same pointer type modulo cv-qualifiers");
return member1.GetRawStorage() >= member2.GetRawStorage();
}
template <typename T, typename WriteBarrierPolicy, typename CheckingPolicy,
typename StorageType>
struct IsWeak<internal::BasicMember<T, WeakMemberTag, WriteBarrierPolicy,
CheckingPolicy, StorageType>>
: std::true_type {};
} // namespace internal
@@ -241,8 +558,9 @@ struct IsWeak<
* trace method.
*/
template <typename T>
using Member = internal::BasicMember<T, internal::StrongMemberTag,
internal::DijkstraWriteBarrierPolicy>;
using Member = internal::BasicMember<
T, internal::StrongMemberTag, internal::DijkstraWriteBarrierPolicy,
internal::DefaultMemberCheckingPolicy, internal::DefaultMemberStorage>;
/**
* WeakMember is similar to Member in that it is used to point to other garbage
@@ -253,8 +571,9 @@ using Member = internal::BasicMember<T, internal::StrongMemberTag,
* will automatically be set to null.
*/
template <typename T>
using WeakMember = internal::BasicMember<T, internal::WeakMemberTag,
internal::DijkstraWriteBarrierPolicy>;
using WeakMember = internal::BasicMember<
T, internal::WeakMemberTag, internal::DijkstraWriteBarrierPolicy,
internal::DefaultMemberCheckingPolicy, internal::DefaultMemberStorage>;
/**
* UntracedMember is a pointer to an on-heap object that is not traced for some
@@ -263,8 +582,47 @@ using WeakMember = internal::BasicMember<T, internal::WeakMemberTag,
* must be kept alive through other means.
*/
template <typename T>
using UntracedMember = internal::BasicMember<T, internal::UntracedMemberTag,
internal::NoWriteBarrierPolicy>;
using UntracedMember = internal::BasicMember<
T, internal::UntracedMemberTag, internal::NoWriteBarrierPolicy,
internal::DefaultMemberCheckingPolicy, internal::DefaultMemberStorage>;
namespace subtle {
/**
* UncompressedMember. Use with care in hot paths that would otherwise cause
* many decompression cycles.
*/
template <typename T>
using UncompressedMember = internal::BasicMember<
T, internal::StrongMemberTag, internal::DijkstraWriteBarrierPolicy,
internal::DefaultMemberCheckingPolicy, internal::RawPointer>;
#if defined(CPPGC_POINTER_COMPRESSION)
/**
* CompressedMember. Default implementation of cppgc::Member on builds with
* pointer compression.
*/
template <typename T>
using CompressedMember = internal::BasicMember<
T, internal::StrongMemberTag, internal::DijkstraWriteBarrierPolicy,
internal::DefaultMemberCheckingPolicy, internal::CompressedPointer>;
#endif // defined(CPPGC_POINTER_COMPRESSION)
} // namespace subtle
namespace internal {
struct Dummy;
static constexpr size_t kSizeOfMember = sizeof(Member<Dummy>);
static constexpr size_t kSizeOfUncompressedMember =
sizeof(subtle::UncompressedMember<Dummy>);
#if defined(CPPGC_POINTER_COMPRESSION)
static constexpr size_t kSizeofCompressedMember =
sizeof(subtle::CompressedMember<Dummy>);
#endif // defined(CPPGC_POINTER_COMPRESSION)
} // namespace internal
} // namespace cppgc


@@ -37,15 +37,15 @@ class V8_EXPORT NameProvider {
static constexpr const char kNoNameDeducible[] = "<No name>";
/**
* Indicating whether internal names are hidden or not.
* Indicating whether the build supports extracting C++ names as object names.
*
* @returns true if C++ names should be hidden and represented by kHiddenName.
*/
static constexpr bool HideInternalNames() {
static constexpr bool SupportsCppClassNamesAsObjectNames() {
#if CPPGC_SUPPORTS_OBJECT_NAMES
return false;
#else // !CPPGC_SUPPORTS_OBJECT_NAMES
return true;
#else // !CPPGC_SUPPORTS_OBJECT_NAMES
return false;
#endif // !CPPGC_SUPPORTS_OBJECT_NAMES
}
@@ -57,7 +57,7 @@ class V8_EXPORT NameProvider {
*
* @returns a human readable name for the object.
*/
virtual const char* GetName() const = 0;
virtual const char* GetHumanReadableName() const = 0;
};
} // namespace cppgc


@@ -16,9 +16,6 @@
#include "v8config.h" // NOLINT(build/include_directory)
namespace cppgc {
class Visitor;
namespace internal {
// PersistentBase always refers to the object as const object and defers to
@@ -41,11 +38,11 @@ class PersistentBase {
node_ = nullptr;
}
private:
protected:
mutable const void* raw_ = nullptr;
mutable PersistentNode* node_ = nullptr;
friend class PersistentRegion;
friend class PersistentRegionBase;
};
// The basic class from which all Persistent classes are generated.
@@ -78,7 +75,7 @@ class BasicPersistent final : public PersistentBase,
: PersistentBase(raw), LocationPolicy(loc) {
if (!IsValid()) return;
SetNode(WeaknessPolicy::GetPersistentRegion(GetValue())
.AllocateNode(this, &BasicPersistent::Trace));
.AllocateNode(this, &TraceAsRoot));
this->CheckPointer(Get());
}
@@ -95,7 +92,7 @@ class BasicPersistent final : public PersistentBase,
template <typename U, typename OtherWeaknessPolicy,
typename OtherLocationPolicy, typename OtherCheckingPolicy,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicPersistent( // NOLINT
BasicPersistent(
const BasicPersistent<U, OtherWeaknessPolicy, OtherLocationPolicy,
OtherCheckingPolicy>& other,
const SourceLocation& loc = SourceLocation::Current())
@@ -117,10 +114,11 @@ class BasicPersistent final : public PersistentBase,
// Constructor from member.
template <typename U, typename MemberBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename MemberStorageType,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicPersistent(internal::BasicMember<U, MemberBarrierPolicy, // NOLINT
MemberWeaknessTag, MemberCheckingPolicy>
member,
BasicPersistent(const internal::BasicMember<
U, MemberBarrierPolicy, MemberWeaknessTag,
MemberCheckingPolicy, MemberStorageType>& member,
const SourceLocation& loc = SourceLocation::Current())
: BasicPersistent(member.Get(), loc) {}
@@ -141,7 +139,7 @@ class BasicPersistent final : public PersistentBase,
}
// Move assignment.
BasicPersistent& operator=(BasicPersistent&& other) {
BasicPersistent& operator=(BasicPersistent&& other) noexcept {
if (this == &other) return *this;
Clear();
PersistentBase::operator=(std::move(other));
@@ -157,10 +155,11 @@ class BasicPersistent final : public PersistentBase,
// Assignment from member.
template <typename U, typename MemberBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename MemberStorageType,
typename = std::enable_if_t<std::is_base_of<T, U>::value>>
BasicPersistent& operator=(
internal::BasicMember<U, MemberBarrierPolicy, MemberWeaknessTag,
MemberCheckingPolicy>
const internal::BasicMember<U, MemberBarrierPolicy, MemberWeaknessTag,
MemberCheckingPolicy, MemberStorageType>&
member) {
return operator=(member.Get());
}
@@ -181,7 +180,7 @@ class BasicPersistent final : public PersistentBase,
}
explicit operator bool() const { return Get(); }
operator T*() const { return Get(); } // NOLINT
operator T*() const { return Get(); }
T* operator->() const { return Get(); }
T& operator*() const { return *Get(); }
@@ -222,9 +221,8 @@ class BasicPersistent final : public PersistentBase,
}
private:
static void Trace(Visitor* v, const void* ptr) {
const auto* persistent = static_cast<const BasicPersistent*>(ptr);
v->TraceRoot(*persistent, persistent->Location());
static void TraceAsRoot(RootVisitor& root_visitor, const void* ptr) {
root_visitor.Trace(*static_cast<const BasicPersistent*>(ptr));
}
bool IsValid() const {
@@ -248,7 +246,7 @@ class BasicPersistent final : public PersistentBase,
SetValue(ptr);
if (!IsValid()) return;
SetNode(WeaknessPolicy::GetPersistentRegion(GetValue())
.AllocateNode(this, &BasicPersistent::Trace));
.AllocateNode(this, &TraceAsRoot));
this->CheckPointer(Get());
}
@@ -259,7 +257,13 @@ class BasicPersistent final : public PersistentBase,
}
}
friend class cppgc::Visitor;
// Set Get() for details.
V8_CLANG_NO_SANITIZE("cfi-unrelated-cast")
T* GetFromGC() const {
return static_cast<T*>(const_cast<void*>(GetValue()));
}
friend class internal::RootVisitor;
};
template <typename T1, typename WeaknessPolicy1, typename LocationPolicy1,
@@ -285,52 +289,56 @@ bool operator!=(const BasicPersistent<T1, WeaknessPolicy1, LocationPolicy1,
template <typename T1, typename PersistentWeaknessPolicy,
typename PersistentLocationPolicy, typename PersistentCheckingPolicy,
typename T2, typename MemberWriteBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy>
bool operator==(const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy,
PersistentCheckingPolicy>& p,
BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy>
m) {
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename MemberStorageType>
bool operator==(
const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy, PersistentCheckingPolicy>&
p,
const BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy, MemberStorageType>& m) {
return p.Get() == m.Get();
}
template <typename T1, typename PersistentWeaknessPolicy,
typename PersistentLocationPolicy, typename PersistentCheckingPolicy,
typename T2, typename MemberWriteBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy>
bool operator!=(const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy,
PersistentCheckingPolicy>& p,
BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy>
m) {
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename MemberStorageType>
bool operator!=(
const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy, PersistentCheckingPolicy>&
p,
const BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy, MemberStorageType>& m) {
return !(p == m);
}
template <typename T1, typename MemberWriteBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename T2, typename PersistentWeaknessPolicy,
typename PersistentLocationPolicy, typename PersistentCheckingPolicy>
bool operator==(BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy>
m,
const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy,
PersistentCheckingPolicy>& p) {
typename MemberStorageType, typename T2,
typename PersistentWeaknessPolicy, typename PersistentLocationPolicy,
typename PersistentCheckingPolicy>
bool operator==(
const BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy, MemberStorageType>& m,
const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy, PersistentCheckingPolicy>&
p) {
return m.Get() == p.Get();
}
template <typename T1, typename MemberWriteBarrierPolicy,
typename MemberWeaknessTag, typename MemberCheckingPolicy,
typename T2, typename PersistentWeaknessPolicy,
typename PersistentLocationPolicy, typename PersistentCheckingPolicy>
bool operator!=(BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy>
m,
const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy,
PersistentCheckingPolicy>& p) {
typename MemberStorageType, typename T2,
typename PersistentWeaknessPolicy, typename PersistentLocationPolicy,
typename PersistentCheckingPolicy>
bool operator!=(
const BasicMember<T2, MemberWeaknessTag, MemberWriteBarrierPolicy,
MemberCheckingPolicy, MemberStorageType>& m,
const BasicPersistent<T1, PersistentWeaknessPolicy,
PersistentLocationPolicy, PersistentCheckingPolicy>&
p) {
return !(m == p);
}


@@ -5,6 +5,9 @@
#ifndef INCLUDE_CPPGC_PLATFORM_H_
#define INCLUDE_CPPGC_PLATFORM_H_
#include <memory>
#include "cppgc/source-location.h"
#include "v8-platform.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
@@ -30,8 +33,9 @@ class V8_EXPORT Platform {
virtual ~Platform() = default;
/**
* Returns the allocator used by cppgc to allocate its heap and various
* support structures.
* \returns the allocator used by cppgc to allocate its heap and various
* support structures. Returning nullptr results in using the `PageAllocator`
* provided by `cppgc::InitializeProcess()` instead.
*/
virtual PageAllocator* GetPageAllocator() = 0;
@@ -129,10 +133,16 @@ class V8_EXPORT Platform {
*
* Can be called multiple times when paired with `ShutdownProcess()`.
*
* \param page_allocator The allocator used for maintaining meta data. Must not
* change between multiple calls to InitializeProcess.
* \param page_allocator The allocator used for maintaining meta data. Must stay
* always alive and not change between multiple calls to InitializeProcess. If
* no allocator is provided, a default internal version will be used.
* \param desired_heap_size Desired amount of virtual address space to reserve
* for the heap, in bytes. Actual size will be clamped to minimum and maximum
* values based on compile-time settings and may be rounded up. If this
* parameter is zero, a default value will be used.
*/
V8_EXPORT void InitializeProcess(PageAllocator* page_allocator);
V8_EXPORT void InitializeProcess(PageAllocator* page_allocator = nullptr,
size_t desired_heap_size = 0);
/**
* Must be called after destroying the last used heap. Some process-global
@@ -143,9 +153,11 @@ V8_EXPORT void ShutdownProcess();
namespace internal {
V8_EXPORT void Abort();
V8_EXPORT void Fatal(const std::string& reason = std::string(),
const SourceLocation& = SourceLocation::Current());
} // namespace internal
} // namespace cppgc
#endif // INCLUDE_CPPGC_PLATFORM_H_


@@ -6,23 +6,17 @@
#define INCLUDE_CPPGC_PREFINALIZER_H_
#include "cppgc/internal/compiler-specific.h"
#include "cppgc/internal/prefinalizer-handler.h"
#include "cppgc/liveness-broker.h"
namespace cppgc {
namespace internal {
template <typename T>
class PrefinalizerRegistration final {
class V8_EXPORT PrefinalizerRegistration final {
public:
explicit PrefinalizerRegistration(T* self) {
static_assert(sizeof(&T::InvokePreFinalizer) > 0,
"USING_PRE_FINALIZER(T) must be defined.");
using Callback = bool (*)(const cppgc::LivenessBroker&, void*);
cppgc::internal::PreFinalizerRegistrationDispatcher::RegisterPrefinalizer(
{self, T::InvokePreFinalizer});
}
PrefinalizerRegistration(void*, Callback);
void* operator new(size_t, void* location) = delete;
void* operator new(size_t) = delete;
@@ -30,6 +24,35 @@ class PrefinalizerRegistration final {
} // namespace internal
/**
* Macro must be used in the private section of `Class` and registers a
* prefinalization callback `void Class::PreFinalizer()`. The callback is
* invoked on garbage collection after the collector has found an object to be
* dead.
*
* Callback properties:
* - The callback is invoked before a possible destructor for the corresponding
* object.
* - The callback may access the whole object graph, irrespective of whether
* objects are considered dead or alive.
* - The callback is invoked on the same thread as the object was created on.
*
* Example:
* \code
* class WithPrefinalizer : public GarbageCollected<WithPrefinalizer> {
* CPPGC_USING_PRE_FINALIZER(WithPrefinalizer, Dispose);
*
* public:
* void Trace(Visitor*) const {}
* void Dispose() { prefinalizer_called = true; }
* ~WithPrefinalizer() {
* // prefinalizer_called == true
* }
* private:
* bool prefinalizer_called = false;
* };
* \endcode
*/
#define CPPGC_USING_PRE_FINALIZER(Class, PreFinalizer) \
public: \
static bool InvokePreFinalizer(const cppgc::LivenessBroker& liveness_broker, \
@@ -38,13 +61,13 @@ class PrefinalizerRegistration final {
"Only garbage collected objects can have prefinalizers"); \
Class* self = static_cast<Class*>(object); \
if (liveness_broker.IsHeapObjectAlive(self)) return false; \
self->Class::PreFinalizer(); \
self->PreFinalizer(); \
return true; \
} \
\
private: \
CPPGC_NO_UNIQUE_ADDRESS cppgc::internal::PrefinalizerRegistration<Class> \
prefinalizer_dummy_{this}; \
CPPGC_NO_UNIQUE_ADDRESS cppgc::internal::PrefinalizerRegistration \
prefinalizer_dummy_{this, Class::InvokePreFinalizer}; \
static_assert(true, "Force semicolon.")
} // namespace cppgc


@@ -7,15 +7,22 @@
#include <cstdint>
#include "cppgc/internal/api-constants.h"
namespace cppgc {
namespace internal {
// Special tag type used to denote some sentinel member. The semantics of the
// sentinel is defined by the embedder.
struct SentinelPointer {
#if defined(CPPGC_POINTER_COMPRESSION)
static constexpr intptr_t kSentinelValue =
1 << api_constants::kPointerCompressionShift;
#else // !defined(CPPGC_POINTER_COMPRESSION)
static constexpr intptr_t kSentinelValue = 0b10;
#endif // !defined(CPPGC_POINTER_COMPRESSION)
template <typename T>
operator T*() const { // NOLINT
static constexpr intptr_t kSentinelValue = 1;
operator T*() const {
return reinterpret_cast<T*>(kSentinelValue);
}
// Hidden friends.
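The hunk above moves the sentinel tag from `1` to `0b10` (shifted further left when pointer compression is enabled, so the tag survives the compression shift). As an aside, the conversion behavior can be sketched in plain C++, independent of cppgc — this `SentinelPointer` is a standalone stand-in, not the real type:

```cpp
#include <cassert>
#include <cstdint>

// Standalone sketch: a sentinel object that converts to any pointer type,
// yielding a small, non-null, non-dereferenceable tag value (bit 1 set, as in
// the non-compressed case above).
struct SentinelPointer {
  static constexpr intptr_t kSentinelValue = 0b10;
  template <typename T>
  operator T*() const {
    return reinterpret_cast<T*>(kSentinelValue);
  }
};
constexpr SentinelPointer kSentinelPointer{};

// The implicit conversion lets the sentinel be assigned to any pointer slot.
int* AsIntPtr() { return kSentinelPointer; }
```

Because the tag is non-zero, sentinel-carrying slots stay distinguishable from cleared (`nullptr`) slots.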


@@ -5,86 +5,11 @@
#ifndef INCLUDE_CPPGC_SOURCE_LOCATION_H_
#define INCLUDE_CPPGC_SOURCE_LOCATION_H_
#include <string>
#include "v8config.h" // NOLINT(build/include_directory)
#if defined(__has_builtin)
#define CPPGC_SUPPORTS_SOURCE_LOCATION \
(__has_builtin(__builtin_FUNCTION) && __has_builtin(__builtin_FILE) && \
__has_builtin(__builtin_LINE)) // NOLINT
#elif defined(V8_CC_GNU) && __GNUC__ >= 7
#define CPPGC_SUPPORTS_SOURCE_LOCATION 1
#elif defined(V8_CC_INTEL) && __ICC >= 1800
#define CPPGC_SUPPORTS_SOURCE_LOCATION 1
#else
#define CPPGC_SUPPORTS_SOURCE_LOCATION 0
#endif
#include "v8-source-location.h"
namespace cppgc {
/**
* Encapsulates source location information. Mimics C++20's
* `std::source_location`.
*/
class V8_EXPORT SourceLocation final {
public:
/**
* Construct source location information corresponding to the location of the
* call site.
*/
#if CPPGC_SUPPORTS_SOURCE_LOCATION
static constexpr SourceLocation Current(
const char* function = __builtin_FUNCTION(),
const char* file = __builtin_FILE(), size_t line = __builtin_LINE()) {
return SourceLocation(function, file, line);
}
#else
static constexpr SourceLocation Current() { return SourceLocation(); }
#endif // CPPGC_SUPPORTS_SOURCE_LOCATION
/**
* Constructs unspecified source location information.
*/
constexpr SourceLocation() = default;
/**
* Returns the name of the function associated with the position represented
* by this object, if any.
*
* \returns the function name as cstring.
*/
constexpr const char* Function() const { return function_; }
/**
* Returns the name of the current source file represented by this object.
*
* \returns the file name as cstring.
*/
constexpr const char* FileName() const { return file_; }
/**
* Returns the line number represented by this object.
*
* \returns the line number.
*/
constexpr size_t Line() const { return line_; }
/**
* Returns a human-readable string representing this object.
*
* \returns a human-readable string representing source location information.
*/
std::string ToString() const;
private:
constexpr SourceLocation(const char* function, const char* file, size_t line)
: function_(function), file_(file), line_(line) {}
const char* function_ = nullptr;
const char* file_ = nullptr;
size_t line_ = 0u;
};
using SourceLocation = v8::SourceLocation;
} // namespace cppgc
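The file now simply aliases `v8::SourceLocation`, but the removed lines document the underlying technique: defaulted arguments built from `__builtin_FUNCTION`/`__builtin_FILE`/`__builtin_LINE` are evaluated at the *call site*, so `Current()` records where it was called from. A minimal standalone sketch of that capture pattern (assumes a GCC/Clang-compatible compiler that provides these builtins):

```cpp
#include <cstddef>

// Sketch of the builtin-based capture pattern from the removed code above.
class SourceLocation {
 public:
  // Default arguments are evaluated where Current() is called, not here.
  static constexpr SourceLocation Current(
      const char* function = __builtin_FUNCTION(),
      const char* file = __builtin_FILE(), size_t line = __builtin_LINE()) {
    return SourceLocation(function, file, line);
  }
  constexpr SourceLocation() = default;
  constexpr const char* Function() const { return function_; }
  constexpr const char* FileName() const { return file_; }
  constexpr size_t Line() const { return line_; }

 private:
  constexpr SourceLocation(const char* function, const char* file, size_t line)
      : function_(function), file_(file), line_(line) {}
  const char* function_ = nullptr;
  const char* file_ = nullptr;
  size_t line_ = 0;
};
```

This is the same trick C++20's `std::source_location::current()` standardizes.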


@@ -19,8 +19,13 @@ class HeapHandle;
namespace testing {
/**
* Overrides the state of the stack with the provided value. Takes precedence
* over other parameters that set the stack state. Must no be nested.
* Overrides the state of the stack with the provided value. Parameters passed
* to explicit garbage collection calls still take precedence. Must not be
* nested.
*
* This scope is useful to make the garbage collector consider the stack when
* tasks that invoke garbage collection (through the provided platform) contain
* interesting pointers on its stack.
*/
class V8_EXPORT V8_NODISCARD OverrideEmbedderStackStateScope final {
CPPGC_STACK_ALLOCATED();
@@ -93,6 +98,8 @@ class V8_EXPORT StandaloneTestingHeap final {
HeapHandle& heap_handle_;
};
V8_EXPORT bool IsHeapObjectOld(void*);
} // namespace testing
} // namespace cppgc


@@ -16,6 +16,10 @@ class Visitor;
namespace internal {
class RootVisitor;
using TraceRootCallback = void (*)(RootVisitor&, const void* object);
// Implementation of the default TraceTrait handling GarbageCollected and
// GarbageCollectedMixin.
template <typename T,
@@ -49,6 +53,14 @@ struct TraceDescriptor {
TraceCallback callback;
};
/**
* Callback for getting a TraceDescriptor for a given address.
*
* \param address Possibly inner address of an object.
* \returns a TraceDescriptor for the provided address.
*/
using TraceDescriptorCallback = TraceDescriptor (*)(const void* address);
namespace internal {
struct V8_EXPORT TraceTraitFromInnerAddressImpl {


@@ -7,6 +7,7 @@
// This file should stay with minimal dependencies to allow embedder to check
// against Oilpan types without including any other parts.
#include <cstddef>
#include <type_traits>
namespace cppgc {
@@ -15,7 +16,7 @@ class Visitor;
namespace internal {
template <typename T, typename WeaknessTag, typename WriteBarrierPolicy,
typename CheckingPolicy>
typename CheckingPolicy, typename StorageType>
class BasicMember;
struct DijkstraWriteBarrierPolicy;
struct NoWriteBarrierPolicy;
@@ -23,14 +24,6 @@ class StrongMemberTag;
class UntracedMemberTag;
class WeakMemberTag;
// Pre-C++17 custom implementation of std::void_t.
template <typename... Ts>
struct make_void {
typedef void type;
};
template <typename... Ts>
using void_t = typename make_void<Ts...>::type;
// Not supposed to be specialized by the user.
template <typename T>
struct IsWeak : std::false_type {};
@@ -41,7 +34,7 @@ template <typename T, typename = void>
struct IsTraceMethodConst : std::false_type {};
template <typename T>
struct IsTraceMethodConst<T, void_t<decltype(std::declval<const T>().Trace(
struct IsTraceMethodConst<T, std::void_t<decltype(std::declval<const T>().Trace(
std::declval<Visitor*>()))>> : std::true_type {
};
@@ -52,7 +45,7 @@ struct IsTraceable : std::false_type {
template <typename T>
struct IsTraceable<
T, void_t<decltype(std::declval<T>().Trace(std::declval<Visitor*>()))>>
T, std::void_t<decltype(std::declval<T>().Trace(std::declval<Visitor*>()))>>
: std::true_type {
// All Trace methods should be marked as const. If an object of type
// 'T' is traceable then any object of type 'const T' should also
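The switch from the hand-rolled `void_t` to `std::void_t` above does not change how the detection idiom works: the partial specialization is selected only when the expression inside `decltype()` is well-formed. A self-contained sketch of the same idiom, using a stand-in `Visitor` rather than the cppgc one:

```cpp
#include <type_traits>
#include <utility>

class Visitor {};  // stand-in; not cppgc::Visitor

// Detection idiom: the specialization is viable only if const T has a
// callable Trace(Visitor*), i.e. the method is marked const.
template <typename T, typename = void>
struct IsTraceMethodConst : std::false_type {};

template <typename T>
struct IsTraceMethodConst<
    T, std::void_t<decltype(std::declval<const T>().Trace(
           std::declval<Visitor*>()))>> : std::true_type {};

struct Traced {
  void Trace(Visitor*) const {}
};
struct NonConstTraced {
  void Trace(Visitor*) {}  // not callable on a const object
};
```

The trait is evaluated entirely at compile time, so the checks are `static_assert`s.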
@@ -71,8 +64,8 @@ struct HasGarbageCollectedMixinTypeMarker : std::false_type {
template <typename T>
struct HasGarbageCollectedMixinTypeMarker<
T,
void_t<typename std::remove_const_t<T>::IsGarbageCollectedMixinTypeMarker>>
T, std::void_t<
typename std::remove_const_t<T>::IsGarbageCollectedMixinTypeMarker>>
: std::true_type {
static_assert(sizeof(T), "T must be fully defined");
};
@@ -84,7 +77,8 @@ struct HasGarbageCollectedTypeMarker : std::false_type {
template <typename T>
struct HasGarbageCollectedTypeMarker<
T, void_t<typename std::remove_const_t<T>::IsGarbageCollectedTypeMarker>>
T,
std::void_t<typename std::remove_const_t<T>::IsGarbageCollectedTypeMarker>>
: std::true_type {
static_assert(sizeof(T), "T must be fully defined");
};
@@ -132,9 +126,10 @@ template <typename BasicMemberCandidate, typename WeaknessTag,
typename WriteBarrierPolicy>
struct IsSubclassOfBasicMemberTemplate {
private:
template <typename T, typename CheckingPolicy>
template <typename T, typename CheckingPolicy, typename StorageType>
static std::true_type SubclassCheck(
BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy>*);
BasicMember<T, WeaknessTag, WriteBarrierPolicy, CheckingPolicy,
StorageType>*);
static std::false_type SubclassCheck(...);
public:
@@ -164,6 +159,27 @@ struct IsUntracedMemberType : std::false_type {};
template <typename T>
struct IsUntracedMemberType<T, true> : std::true_type {};
template <typename T>
struct IsComplete {
private:
template <typename U, size_t = sizeof(U)>
static std::true_type IsSizeOfKnown(U*);
static std::false_type IsSizeOfKnown(...);
public:
static constexpr bool value =
decltype(IsSizeOfKnown(std::declval<T*>()))::value;
};
template <typename T, typename U>
constexpr bool IsDecayedSameV =
std::is_same_v<std::decay_t<T>, std::decay_t<U>>;
template <typename B, typename D>
constexpr bool IsStrictlyBaseOfV =
std::is_base_of_v<std::decay_t<B>, std::decay_t<D>> &&
!IsDecayedSameV<B, D>;
} // namespace internal
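The `IsComplete` helper added above relies on `sizeof` in a default template argument: `sizeof(U)` is only well-formed for complete types, so overload resolution falls back to the variadic overload otherwise. Together with the decayed-comparison helpers it can be exercised in plain C++ (standalone sketch, no cppgc dependency):

```cpp
#include <cstddef>
#include <type_traits>
#include <utility>

// sizeof(U) in the default argument is ill-formed for incomplete U, so the
// true_type overload drops out of the overload set via SFINAE.
template <typename T>
struct IsComplete {
 private:
  template <typename U, size_t = sizeof(U)>
  static std::true_type IsSizeOfKnown(U*);
  static std::false_type IsSizeOfKnown(...);

 public:
  static constexpr bool value =
      decltype(IsSizeOfKnown(std::declval<T*>()))::value;
};

// Type equality / inheritance modulo cv-qualifiers and references.
template <typename T, typename U>
constexpr bool IsDecayedSameV =
    std::is_same_v<std::decay_t<T>, std::decay_t<U>>;

template <typename B, typename D>
constexpr bool IsStrictlyBaseOfV =
    std::is_base_of_v<std::decay_t<B>, std::decay_t<D>> &&
    !IsDecayedSameV<B, D>;

struct Defined {};
struct OnlyDeclared;  // forward-declared, never defined here
struct Derived : Defined {};
```

Note that `IsComplete` answers for the point of instantiation: the same type may report incomplete in one translation unit and complete in another.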
/**
@@ -223,6 +239,12 @@ constexpr bool IsWeakMemberTypeV = internal::IsWeakMemberType<T>::value;
template <typename T>
constexpr bool IsWeakV = internal::IsWeak<T>::value;
/**
* Value is true for types that are complete, and false otherwise.
*/
template <typename T>
constexpr bool IsCompleteV = internal::IsComplete<T>::value;
} // namespace cppgc
#endif // INCLUDE_CPPGC_TYPE_TRAITS_H_


@@ -5,13 +5,17 @@
#ifndef INCLUDE_CPPGC_VISITOR_H_
#define INCLUDE_CPPGC_VISITOR_H_
#include <type_traits>
#include "cppgc/custom-space.h"
#include "cppgc/ephemeron-pair.h"
#include "cppgc/garbage-collected.h"
#include "cppgc/internal/logging.h"
#include "cppgc/internal/member-storage.h"
#include "cppgc/internal/pointer-policies.h"
#include "cppgc/liveness-broker.h"
#include "cppgc/member.h"
#include "cppgc/sentinel-pointer.h"
#include "cppgc/source-location.h"
#include "cppgc/trace-trait.h"
#include "cppgc/type-traits.h"
@@ -61,22 +65,6 @@ class V8_EXPORT Visitor {
virtual ~Visitor() = default;
/**
* Trace method for raw pointers. Prefer the versions for managed pointers.
*
* \param member Reference retaining an object.
*/
template <typename T>
void Trace(const T* t) {
static_assert(sizeof(T), "Pointee type must be fully defined.");
static_assert(internal::IsGarbageCollectedOrMixinType<T>::value,
"T must be GarbageCollected or GarbageCollectedMixin type");
if (!t) {
return;
}
Visit(t, TraceTrait<T>::GetTraceDescriptor(t));
}
/**
* Trace method for Member.
*
@@ -86,7 +74,7 @@ void Trace(const Member<T>& member) {
void Trace(const Member<T>& member) {
const T* value = member.GetRawAtomic();
CPPGC_DCHECK(value != kSentinelPointer);
Trace(value);
TraceImpl(value);
}
/**
@@ -114,6 +102,44 @@
&HandleWeak<WeakMember<T>>, &weak_member);
}
#if defined(CPPGC_POINTER_COMPRESSION)
/**
* Trace method for UncompressedMember.
*
* \param member UncompressedMember reference retaining an object.
*/
template <typename T>
void Trace(const subtle::UncompressedMember<T>& member) {
const T* value = member.GetRawAtomic();
CPPGC_DCHECK(value != kSentinelPointer);
TraceImpl(value);
}
#endif // defined(CPPGC_POINTER_COMPRESSION)
template <typename T>
void TraceMultiple(const subtle::UncompressedMember<T>* start, size_t len) {
static_assert(sizeof(T), "Pointee type must be fully defined.");
static_assert(internal::IsGarbageCollectedOrMixinType<T>::value,
"T must be GarbageCollected or GarbageCollectedMixin type");
VisitMultipleUncompressedMember(start, len,
&TraceTrait<T>::GetTraceDescriptor);
}
template <typename T,
std::enable_if_t<!std::is_same_v<
Member<T>, subtle::UncompressedMember<T>>>* = nullptr>
void TraceMultiple(const Member<T>* start, size_t len) {
static_assert(sizeof(T), "Pointee type must be fully defined.");
static_assert(internal::IsGarbageCollectedOrMixinType<T>::value,
"T must be GarbageCollected or GarbageCollectedMixin type");
#if defined(CPPGC_POINTER_COMPRESSION)
static_assert(std::is_same_v<Member<T>, subtle::CompressedMember<T>>,
"Member and CompressedMember must be the same.");
VisitMultipleCompressedMember(start, len,
&TraceTrait<T>::GetTraceDescriptor);
#endif // defined(CPPGC_POINTER_COMPRESSION)
}
/**
* Trace method for inlined objects that are not allocated themselves but
* otherwise follow managed heap layout and have a Trace() method.
@@ -132,6 +158,26 @@ class V8_EXPORT Visitor {
TraceTrait<T>::Trace(this, &object);
}
template <typename T>
void TraceMultiple(const T* start, size_t len) {
#if V8_ENABLE_CHECKS
// This object is embedded in potentially multiple nested objects. The
// outermost object must not be in construction as such objects are (a) not
// processed immediately, and (b) only processed conservatively if not
// otherwise possible.
CheckObjectNotInConstruction(start);
#endif // V8_ENABLE_CHECKS
for (size_t i = 0; i < len; ++i) {
const T* object = &start[i];
if constexpr (std::is_polymorphic_v<T>) {
// The object's vtable may be uninitialized in which case the object is
// not traced.
if (*reinterpret_cast<const uintptr_t*>(object) == 0) continue;
}
TraceTrait<T>::Trace(this, object);
}
}
/**
* Registers a weak callback method on the object of type T. See
* LivenessBroker for an usage example.
@@ -230,23 +276,34 @@ void TraceStrongly(const WeakMember<T>& weak_member) {
void TraceStrongly(const WeakMember<T>& weak_member) {
const T* value = weak_member.GetRawAtomic();
CPPGC_DCHECK(value != kSentinelPointer);
Trace(value);
TraceImpl(value);
}
/**
* Trace method for weak containers.
* Trace method for retaining containers strongly.
*
* \param object reference of the weak container.
* \param object reference to the container.
*/
template <typename T>
void TraceStrongContainer(const T* object) {
TraceImpl(object);
}
/**
* Trace method for retaining containers weakly. Note that weak containers
* should emit write barriers.
*
* \param object reference to the container.
* \param callback to be invoked.
* \param data custom data that is passed to the callback.
* \param callback_data custom data that is passed to the callback.
*/
template <typename T>
void TraceWeakContainer(const T* object, WeakCallback callback,
const void* data) {
const void* callback_data) {
if (!object) return;
VisitWeakContainer(object, TraceTrait<T>::GetTraceDescriptor(object),
TraceTrait<T>::GetWeakTraceDescriptor(object), callback,
data);
callback_data);
}
/**
@@ -254,6 +311,7 @@
* compactable space. Such references maybe be arbitrarily moved by the GC.
*
* \param slot location of reference to object that might be moved by the GC.
* The slot must contain an uncompressed pointer.
*/
template <typename T>
void RegisterMovableReference(const T** slot) {
@@ -296,9 +354,6 @@
virtual void Visit(const void* self, TraceDescriptor) {}
virtual void VisitWeak(const void* self, TraceDescriptor, WeakCallback,
const void* weak_member) {}
virtual void VisitRoot(const void*, TraceDescriptor, const SourceLocation&) {}
virtual void VisitWeakRoot(const void* self, TraceDescriptor, WeakCallback,
const void* weak_root, const SourceLocation&) {}
virtual void VisitEphemeron(const void* key, const void* value,
TraceDescriptor value_desc) {}
virtual void VisitWeakContainer(const void* self, TraceDescriptor strong_desc,
@@ -306,6 +361,39 @@ WeakCallback callback, const void* data) {}
WeakCallback callback, const void* data) {}
virtual void HandleMovableReference(const void**) {}
virtual void VisitMultipleUncompressedMember(
const void* start, size_t len,
TraceDescriptorCallback get_trace_descriptor) {
// Default implementation merely delegates to Visit().
const char* it = static_cast<const char*>(start);
const char* end = it + len * internal::kSizeOfUncompressedMember;
for (; it < end; it += internal::kSizeOfUncompressedMember) {
const auto* current = reinterpret_cast<const internal::RawPointer*>(it);
const void* object = current->LoadAtomic();
if (!object) continue;
Visit(object, get_trace_descriptor(object));
}
}
#if defined(CPPGC_POINTER_COMPRESSION)
virtual void VisitMultipleCompressedMember(
const void* start, size_t len,
TraceDescriptorCallback get_trace_descriptor) {
// Default implementation merely delegates to Visit().
const char* it = static_cast<const char*>(start);
const char* end = it + len * internal::kSizeofCompressedMember;
for (; it < end; it += internal::kSizeofCompressedMember) {
const auto* current =
reinterpret_cast<const internal::CompressedPointer*>(it);
const void* object = current->LoadAtomic();
if (!object) continue;
Visit(object, get_trace_descriptor(object));
}
}
#endif // defined(CPPGC_POINTER_COMPRESSION)
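The default multi-member visit above simply walks a contiguous run of pointer slots and forwards each non-null pointer to the single-object `Visit()`. A standalone sketch of that delegation pattern (plain raw pointers, not cppgc's `RawPointer`/`CompressedPointer` types; all names here are illustrative, not part of the cppgc API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Standalone sketch: a visitor whose multi-slot hook delegates to the
// single-object Visit(), mirroring the default
// VisitMultipleUncompressedMember() in the diff above.
class Visitor {
 public:
  virtual ~Visitor() = default;
  virtual void Visit(const void* object) = 0;

  // Iterate over `len` pointer slots starting at `start`, skipping null
  // slots, and forward each live pointer to Visit().
  void VisitMultiple(const void* const* start, size_t len) {
    for (size_t i = 0; i < len; ++i) {
      const void* object = start[i];
      if (!object) continue;  // null slots are ignored, as in the default impl
      Visit(object);
    }
  }
};

class CountingVisitor final : public Visitor {
 public:
  void Visit(const void* object) override { visited_.push_back(object); }
  const std::vector<const void*>& visited() const { return visited_; }

 private:
  std::vector<const void*> visited_;
};
```

The real implementation additionally loads each slot atomically, since marking may run concurrently with the mutator.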
private:
template <typename T, void (T::*method)(const LivenessBroker&)>
static void WeakCallbackMethodDelegate(const LivenessBroker& info,
@ -318,44 +406,20 @@ class V8_EXPORT Visitor {
template <typename PointerType>
static void HandleWeak(const LivenessBroker& info, const void* object) {
const PointerType* weak = static_cast<const PointerType*>(object);
// Sentinel values are preserved for weak pointers.
if (*weak == kSentinelPointer) return;
const auto* raw = weak->Get();
if (!info.IsHeapObjectAlive(raw)) {
if (!info.IsHeapObjectAlive(weak->GetFromGC())) {
weak->ClearFromGC();
}
}
template <typename Persistent,
std::enable_if_t<Persistent::IsStrongPersistent::value>* = nullptr>
void TraceRoot(const Persistent& p, const SourceLocation& loc) {
using PointeeType = typename Persistent::PointeeType;
static_assert(sizeof(PointeeType),
"Persistent's pointee type must be fully defined");
static_assert(internal::IsGarbageCollectedOrMixinType<PointeeType>::value,
"Persistent's pointee type must be GarbageCollected or "
"GarbageCollectedMixin");
if (!p.Get()) {
template <typename T>
void TraceImpl(const T* t) {
static_assert(sizeof(T), "Pointee type must be fully defined.");
static_assert(internal::IsGarbageCollectedOrMixinType<T>::value,
"T must be GarbageCollected or GarbageCollectedMixin type");
if (!t) {
return;
}
VisitRoot(p.Get(), TraceTrait<PointeeType>::GetTraceDescriptor(p.Get()),
loc);
}
template <
typename WeakPersistent,
std::enable_if_t<!WeakPersistent::IsStrongPersistent::value>* = nullptr>
void TraceRoot(const WeakPersistent& p, const SourceLocation& loc) {
using PointeeType = typename WeakPersistent::PointeeType;
static_assert(sizeof(PointeeType),
"Persistent's pointee type must be fully defined");
static_assert(internal::IsGarbageCollectedOrMixinType<PointeeType>::value,
"Persistent's pointee type must be GarbageCollected or "
"GarbageCollectedMixin");
static_assert(!internal::IsAllocatedOnCompactableSpace<PointeeType>::value,
"Weak references to compactable objects are not allowed");
VisitWeakRoot(p.Get(), TraceTrait<PointeeType>::GetTraceDescriptor(p.Get()),
&HandleWeak<WeakPersistent>, &p, loc);
Visit(t, TraceTrait<T>::GetTraceDescriptor(t));
}
#if V8_ENABLE_CHECKS
@ -372,6 +436,69 @@ class V8_EXPORT Visitor {
friend class internal::VisitorBase;
};
namespace internal {
class V8_EXPORT RootVisitor {
public:
explicit RootVisitor(Visitor::Key) {}
virtual ~RootVisitor() = default;
template <typename AnyStrongPersistentType,
std::enable_if_t<
AnyStrongPersistentType::IsStrongPersistent::value>* = nullptr>
void Trace(const AnyStrongPersistentType& p) {
using PointeeType = typename AnyStrongPersistentType::PointeeType;
const void* object = Extract(p);
if (!object) {
return;
}
VisitRoot(object, TraceTrait<PointeeType>::GetTraceDescriptor(object),
p.Location());
}
template <typename AnyWeakPersistentType,
std::enable_if_t<
!AnyWeakPersistentType::IsStrongPersistent::value>* = nullptr>
void Trace(const AnyWeakPersistentType& p) {
using PointeeType = typename AnyWeakPersistentType::PointeeType;
static_assert(!internal::IsAllocatedOnCompactableSpace<PointeeType>::value,
"Weak references to compactable objects are not allowed");
const void* object = Extract(p);
if (!object) {
return;
}
VisitWeakRoot(object, TraceTrait<PointeeType>::GetTraceDescriptor(object),
&HandleWeak<AnyWeakPersistentType>, &p, p.Location());
}
protected:
virtual void VisitRoot(const void*, TraceDescriptor, const SourceLocation&) {}
virtual void VisitWeakRoot(const void* self, TraceDescriptor, WeakCallback,
const void* weak_root, const SourceLocation&) {}
private:
template <typename AnyPersistentType>
static const void* Extract(AnyPersistentType& p) {
using PointeeType = typename AnyPersistentType::PointeeType;
static_assert(sizeof(PointeeType),
"Persistent's pointee type must be fully defined");
static_assert(internal::IsGarbageCollectedOrMixinType<PointeeType>::value,
"Persistent's pointee type must be GarbageCollected or "
"GarbageCollectedMixin");
return p.GetFromGC();
}
template <typename PointerType>
static void HandleWeak(const LivenessBroker& info, const void* object) {
const PointerType* weak = static_cast<const PointerType*>(object);
if (!info.IsHeapObjectAlive(weak->GetFromGC())) {
weak->ClearFromGC();
}
}
};
} // namespace internal
} // namespace cppgc
#endif // INCLUDE_CPPGC_VISITOR_H_
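`RootVisitor::Trace` above picks the strong or weak path at compile time by constraining each overload on the handle type's nested `IsStrongPersistent` trait via `std::enable_if_t`. A minimal standalone sketch of that dispatch technique (the handle and visitor types here are made up for illustration; they are not cppgc's):

```cpp
#include <cassert>
#include <string>
#include <type_traits>

// Stand-ins for persistent handle types carrying the trait the overloads
// are constrained on.
struct StrongHandle { using IsStrongPersistent = std::true_type; };
struct WeakHandle   { using IsStrongPersistent = std::false_type; };

class RootVisitorSketch {
 public:
  // Selected only when T::IsStrongPersistent::value is true.
  template <typename T,
            std::enable_if_t<T::IsStrongPersistent::value>* = nullptr>
  const char* Trace(const T&) { return "strong"; }

  // Selected only when T::IsStrongPersistent::value is false.
  template <typename T,
            std::enable_if_t<!T::IsStrongPersistent::value>* = nullptr>
  const char* Trace(const T&) { return "weak"; }
};
```

Because the failing `enable_if_t` substitution removes the non-matching overload (SFINAE), exactly one `Trace` is viable for any given handle type, which is how the real `RootVisitor` routes weak persistents to `VisitWeakRoot` and strong ones to `VisitRoot`.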


@ -946,34 +946,6 @@
{ "name": "url", "type": "string", "description": "JavaScript script name or url." },
{ "name": "functions", "type": "array", "items": { "$ref": "FunctionCoverage" }, "description": "Functions contained in the script that has coverage data." }
]
},
{ "id": "TypeObject",
"type": "object",
"description": "Describes a type collected during runtime.",
"properties": [
{ "name": "name", "type": "string", "description": "Name of a type collected with type profiling." }
],
"experimental": true
},
{ "id": "TypeProfileEntry",
"type": "object",
"description": "Source offset and types for a parameter or return value.",
"properties": [
{ "name": "offset", "type": "integer", "description": "Source offset of the parameter or end of function for return values." },
{ "name": "types", "type": "array", "items": {"$ref": "TypeObject"}, "description": "The types for this parameter or return value."}
],
"experimental": true
},
{
"id": "ScriptTypeProfile",
"type": "object",
"description": "Type profile data collected during runtime for a JavaScript script.",
"properties": [
{ "name": "scriptId", "$ref": "Runtime.ScriptId", "description": "JavaScript script id." },
{ "name": "url", "type": "string", "description": "JavaScript script name or url." },
{ "name": "entries", "type": "array", "items": { "$ref": "TypeProfileEntry" }, "description": "Type profile entries for parameters and return values of the functions in the script." }
],
"experimental": true
}
],
"commands": [
@ -1024,24 +996,6 @@
{ "name": "result", "type": "array", "items": { "$ref": "ScriptCoverage" }, "description": "Coverage data for the current isolate." }
],
"description": "Collect coverage data for the current isolate. The coverage data may be incomplete due to garbage collection."
},
{
"name": "startTypeProfile",
"description": "Enable type profile.",
"experimental": true
},
{
"name": "stopTypeProfile",
"description": "Disable type profile. Disabling releases type profile data collected so far.",
"experimental": true
},
{
"name": "takeTypeProfile",
"returns": [
{ "name": "result", "type": "array", "items": { "$ref": "ScriptTypeProfile" }, "description": "Type profile for all scripts since startTypeProfile() was turned on." }
],
"description": "Collect type profile.",
"experimental": true
}
],
"events": [


@ -104,13 +104,20 @@ domain Debugger
# Location in the source code.
Location location
# JavaScript script name or url.
string url
# Deprecated in favor of using the `location.scriptId` to resolve the URL via a previously
# sent `Debugger.scriptParsed` event.
deprecated string url
# Scope chain for this call frame.
array of Scope scopeChain
# `this` object for this call frame.
Runtime.RemoteObject this
# The value being returned, if the function is at return point.
optional Runtime.RemoteObject returnValue
# Valid only while the VM is paused and indicates whether this frame
# can be restarted or not. Note that a `true` value here does not
# guarantee that Debugger#restartFrame with this CallFrameId will be
# successful, but it is very likely.
experimental optional boolean canBeRestarted
# Scope description.
type Scope extends object
@ -175,7 +182,7 @@ domain Debugger
command enable
parameters
# The maximum size in bytes of collected scripts (not referenced by other heap objects)
# the debugger can hold. Puts no limit if paramter is omitted.
# the debugger can hold. Puts no limit if parameter is omitted.
experimental optional number maxScriptsCacheSize
returns
# Unique identifier of the debugger.
@ -237,6 +244,40 @@ domain Debugger
# Wasm bytecode.
optional binary bytecode
experimental type WasmDisassemblyChunk extends object
properties
# The next chunk of disassembled lines.
array of string lines
# The bytecode offsets describing the start of each line.
array of integer bytecodeOffsets
experimental command disassembleWasmModule
parameters
# Id of the script to disassemble
Runtime.ScriptId scriptId
returns
# For large modules, return a stream from which additional chunks of
# disassembly can be read successively.
optional string streamId
# The total number of lines in the disassembly text.
integer totalNumberOfLines
# The offsets of all function bodies, in the format [start1, end1,
# start2, end2, ...] where all ends are exclusive.
array of integer functionBodyOffsets
# The first chunk of disassembly.
WasmDisassemblyChunk chunk
# Disassemble the next chunk of lines for the module corresponding to the
# stream. If disassembly is complete, this API will invalidate the streamId
# and return an empty chunk. Any subsequent calls for the now invalid stream
# will return errors.
experimental command nextWasmDisassemblyChunk
parameters
string streamId
returns
# The next chunk of disassembly.
WasmDisassemblyChunk chunk
# This command is deprecated. Use getScriptSource instead.
deprecated command getWasmBytecode
parameters
@ -266,18 +307,35 @@ domain Debugger
parameters
BreakpointId breakpointId
# Restarts particular call frame from the beginning.
# Restarts particular call frame from the beginning. The old, deprecated
# behavior of `restartFrame` is to stay paused and allow further CDP commands
# after a restart was scheduled. This can cause problems with restarting, so
    # we now continue execution immediately after it has been scheduled until we
# reach the beginning of the restarted frame.
#
    # To stay backwards compatible, `restartFrame` now expects a `mode`
# parameter to be present. If the `mode` parameter is missing, `restartFrame`
# errors out.
#
# The various return values are deprecated and `callFrames` is always empty.
# Use the call frames from the `Debugger#paused` events instead, that fires
# once V8 pauses at the beginning of the restarted function.
command restartFrame
parameters
# Call frame identifier to evaluate on.
CallFrameId callFrameId
# The `mode` parameter must be present and set to 'StepInto', otherwise
# `restartFrame` will error out.
experimental optional enum mode
# Pause at the beginning of the restarted function
StepInto
returns
# New stack trace.
array of CallFrame callFrames
deprecated array of CallFrame callFrames
# Async stack trace, if any.
optional Runtime.StackTrace asyncStackTrace
deprecated optional Runtime.StackTrace asyncStackTrace
# Async stack trace, if any.
experimental optional Runtime.StackTraceId asyncStackTraceId
deprecated optional Runtime.StackTraceId asyncStackTraceId
# Resumes JavaScript execution.
command resume
@ -400,13 +458,14 @@ domain Debugger
# New value for breakpoints active state.
boolean active
# Defines pause on exceptions state. Can be set to stop on all exceptions, uncaught exceptions or
# no exceptions. Initial pause on exceptions state is `none`.
  # Defines pause on exceptions state. Can be set to stop on all exceptions, uncaught exceptions,
  # caught exceptions, or no exceptions. Initial pause on exceptions state is `none`.
command setPauseOnExceptions
parameters
# Pause on exceptions mode.
enum state
none
caught
uncaught
all
@ -417,6 +476,12 @@ domain Debugger
Runtime.CallArgument newValue
# Edits JavaScript source live.
#
# In general, functions that are currently on the stack can not be edited with
# a single exception: If the edited function is the top-most stack frame and
# that is the only activation of that function on the stack. In this case
# the live edit will be successful and a `Debugger.restartFrame` for the
# top-most function is automatically triggered.
command setScriptSource
parameters
# Id of the script to edit.
@ -426,16 +491,28 @@ domain Debugger
# If true the change will not actually be applied. Dry run may be used to get result
# description without actually modifying the code.
optional boolean dryRun
# If true, then `scriptSource` is allowed to change the function on top of the stack
# as long as the top-most stack frame is the only activation of that function.
experimental optional boolean allowTopFrameEditing
returns
# New stack trace in case editing has happened while VM was stopped.
optional array of CallFrame callFrames
deprecated optional array of CallFrame callFrames
# Whether current call stack was modified after applying the changes.
optional boolean stackChanged
deprecated optional boolean stackChanged
# Async stack trace, if any.
optional Runtime.StackTrace asyncStackTrace
deprecated optional Runtime.StackTrace asyncStackTrace
# Async stack trace, if any.
experimental optional Runtime.StackTraceId asyncStackTraceId
# Exception details if any.
deprecated optional Runtime.StackTraceId asyncStackTraceId
# Whether the operation was successful or not. Only `Ok` denotes a
# successful live edit while the other enum variants denote why
# the live edit failed.
experimental enum status
Ok
CompileError
BlockedByActiveGenerator
BlockedByActiveFunction
BlockedByTopLevelEsModuleChange
# Exception details if any. Only present when `status` is `CompileError`.
optional Runtime.ExceptionDetails exceptionDetails
# Makes page not interrupt on any pauses (breakpoint, exception, dom exception etc).
@ -503,6 +580,7 @@ domain Debugger
other
promiseRejection
XHR
step
# Object containing break-specific auxiliary properties.
optional object data
# Hit breakpoints IDs
@ -552,9 +630,9 @@ domain Debugger
integer endColumn
# Specifies script creation context.
Runtime.ExecutionContextId executionContextId
# Content hash of the script.
# Content hash of the script, SHA-256.
string hash
# Embedder-specific auxiliary data.
# Embedder-specific auxiliary data likely matching {isDefault: boolean, type: 'default'|'isolated'|'worker', frameId: string}
optional object executionContextAuxData
# URL of source map associated with script (if any).
optional string sourceMapURL
@ -591,9 +669,9 @@ domain Debugger
integer endColumn
# Specifies script creation context.
Runtime.ExecutionContextId executionContextId
# Content hash of the script.
# Content hash of the script, SHA-256.
string hash
# Embedder-specific auxiliary data.
# Embedder-specific auxiliary data likely matching {isDefault: boolean, type: 'default'|'isolated'|'worker', frameId: string}
optional object executionContextAuxData
# True, if this script is generated as a result of the live edit operation.
experimental optional boolean isLiveEdit
@ -691,6 +769,22 @@ experimental domain HeapProfiler
# Average sample interval in bytes. Poisson distribution is used for the intervals. The
# default value is 32768 bytes.
optional number samplingInterval
# By default, the sampling heap profiler reports only objects which are
# still alive when the profile is returned via getSamplingProfile or
# stopSampling, which is useful for determining what functions contribute
# the most to steady-state memory usage. This flag instructs the sampling
# heap profiler to also include information about objects discarded by
# major GC, which will show which functions cause large temporary memory
# usage or long GC pauses.
optional boolean includeObjectsCollectedByMajorGC
# By default, the sampling heap profiler reports only objects which are
# still alive when the profile is returned via getSamplingProfile or
# stopSampling, which is useful for determining what functions contribute
# the most to steady-state memory usage. This flag instructs the sampling
# heap profiler to also include information about objects discarded by
# minor GC, which is useful when tuning a latency-sensitive application
# for minimal GC activity.
optional boolean includeObjectsCollectedByMinorGC
command startTrackingHeapObjects
parameters
@ -706,14 +800,24 @@ experimental domain HeapProfiler
# If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken
# when the tracking is stopped.
optional boolean reportProgress
optional boolean treatGlobalObjectsAsRoots
# Deprecated in favor of `exposeInternals`.
deprecated optional boolean treatGlobalObjectsAsRoots
# If true, numerical values are included in the snapshot
optional boolean captureNumericValue
# If true, exposes internals of the snapshot.
experimental optional boolean exposeInternals
command takeHeapSnapshot
parameters
# If true 'reportHeapSnapshotProgress' events will be generated while snapshot is being taken.
optional boolean reportProgress
# If true, a raw snapshot without artifical roots will be generated
optional boolean treatGlobalObjectsAsRoots
# If true, a raw snapshot without artificial roots will be generated.
# Deprecated in favor of `exposeInternals`.
deprecated optional boolean treatGlobalObjectsAsRoots
# If true, numerical values are included in the snapshot
optional boolean captureNumericValue
# If true, exposes internals of the snapshot.
experimental optional boolean exposeInternals
event addHeapSnapshotChunk
parameters
@ -817,48 +921,6 @@ domain Profiler
# Functions contained in the script that has coverage data.
array of FunctionCoverage functions
# Describes a type collected during runtime.
experimental type TypeObject extends object
properties
# Name of a type collected with type profiling.
string name
# Source offset and types for a parameter or return value.
experimental type TypeProfileEntry extends object
properties
# Source offset of the parameter or end of function for return values.
integer offset
# The types for this parameter or return value.
array of TypeObject types
# Type profile data collected during runtime for a JavaScript script.
experimental type ScriptTypeProfile extends object
properties
# JavaScript script id.
Runtime.ScriptId scriptId
# JavaScript script name or url.
string url
# Type profile entries for parameters and return values of the functions in the script.
array of TypeProfileEntry entries
# Collected counter information.
experimental type CounterInfo extends object
properties
# Counter name.
string name
# Counter value.
integer value
# Runtime call counter information.
experimental type RuntimeCallCounterInfo extends object
properties
# Counter name.
string name
# Counter value.
number value
# Counter time in seconds.
number time
command disable
command enable
@ -893,9 +955,6 @@ domain Profiler
# Monotonically increasing time (in seconds) when the coverage update was taken in the backend.
number timestamp
# Enable type profile.
experimental command startTypeProfile
command stop
returns
# Recorded profile.
@ -905,9 +964,6 @@ domain Profiler
# executing optimized code.
command stopPreciseCoverage
# Disable type profile. Disabling releases type profile data collected so far.
experimental command stopTypeProfile
# Collect coverage data for the current isolate, and resets execution counters. Precise code
# coverage needs to have started.
command takePreciseCoverage
@ -917,36 +973,6 @@ domain Profiler
# Monotonically increasing time (in seconds) when the coverage update was taken in the backend.
number timestamp
# Collect type profile.
experimental command takeTypeProfile
returns
# Type profile for all scripts since startTypeProfile() was turned on.
array of ScriptTypeProfile result
# Enable counters collection.
experimental command enableCounters
# Disable counters collection.
experimental command disableCounters
# Retrieve counters.
experimental command getCounters
returns
# Collected counters information.
array of CounterInfo result
# Enable run time call stats collection.
experimental command enableRuntimeCallStats
# Disable run time call stats collection.
experimental command disableRuntimeCallStats
# Retrieve run time call stats.
experimental command getRuntimeCallStats
returns
# Collected runtime call counter information.
array of RuntimeCallCounterInfo result
event consoleProfileFinished
parameters
string id
@ -968,13 +994,13 @@ domain Profiler
# Reports coverage delta since the last poll (either from an event like this, or from
# `takePreciseCoverage` for the current isolate. May only be sent if precise code
  # coverage has been started. This event can be triggered by the embedder to, for example,
# trigger collection of coverage data immediatelly at a certain point in time.
# trigger collection of coverage data immediately at a certain point in time.
experimental event preciseCoverageDeltaUpdate
parameters
# Monotonically increasing time (in seconds) when the coverage update was taken in the backend.
number timestamp
# Identifier for distinguishing coverage events.
string occassion
string occasion
# Coverage data for the current isolate.
array of ScriptCoverage result
@ -988,6 +1014,60 @@ domain Runtime
# Unique script identifier.
type ScriptId extends string
# Represents options for serialization. Overrides `generatePreview`, `returnByValue` and
# `generateWebDriverValue`.
type SerializationOptions extends object
properties
enum serialization
# Whether the result should be deep-serialized. The result is put into
# `deepSerializedValue` and `ObjectId` is provided.
deep
# Whether the result is expected to be a JSON object which should be sent by value.
# The result is put either into `value` or into `unserializableValue`. Synonym of
# `returnByValue: true`. Overrides `returnByValue`.
json
        # Only remote object id is put in the result. Same behaviour as if no
# `serializationOptions`, `generatePreview`, `returnByValue` nor `generateWebDriverValue`
# are provided.
idOnly
# Deep serialization depth. Default is full depth. Respected only in `deep` serialization mode.
optional integer maxDepth
# Represents deep serialized value.
type DeepSerializedValue extends object
properties
enum type
undefined
null
string
number
boolean
bigint
regexp
date
symbol
array
object
function
map
set
weakmap
weakset
error
proxy
promise
typedarray
arraybuffer
node
window
optional any value
optional string objectId
      # Set if value reference met more than once during serialization. In such
# case, value is provided only to one of the serialized values. Unique
# per value in the scope of one CDP call.
optional integer weakLocalObjectReference
# Unique object identifier.
type RemoteObjectId extends string
@ -1040,6 +1120,10 @@ domain Runtime
optional UnserializableValue unserializableValue
# String representation of the object.
optional string description
# Deprecated. Use `deepSerializedValue` instead. WebDriver BiDi representation of the value.
deprecated optional DeepSerializedValue webDriverValue
# Deep serialized value.
experimental optional DeepSerializedValue deepSerializedValue
# Unique object identifier (for non-primitive values).
optional RemoteObjectId objectId
# Preview containing abbreviated property values. Specified for `object` type values only.
@ -1221,11 +1305,11 @@ domain Runtime
string origin
# Human readable name describing given context.
string name
# A system-unique execution context identifier. Unlike the id, this is unique accross
# A system-unique execution context identifier. Unlike the id, this is unique across
# multiple processes, so can be reliably used to identify specific context while backend
# performs a cross-process navigation.
experimental string uniqueId
# Embedder-specific auxiliary data.
# Embedder-specific auxiliary data likely matching {isDefault: boolean, type: 'default'|'isolated'|'worker', frameId: string}
optional object auxData
# Detailed information about exception (or error) that was thrown during script compilation or
@ -1250,6 +1334,10 @@ domain Runtime
optional RemoteObject exception
# Identifier of the context where exception happened.
optional ExecutionContextId executionContextId
# Dictionary with entries of meta data that the client associated
# with this exception, such as information about associated network
# requests, etc.
experimental optional object exceptionMetaData
# Number of milliseconds since epoch.
type Timestamp extends number
@ -1325,6 +1413,7 @@ domain Runtime
# execution. Overrides `setPauseOnException` state.
optional boolean silent
# Whether the result is expected to be a JSON object which should be sent by value.
      # Can be overridden by `serializationOptions`.
optional boolean returnByValue
# Whether preview should be generated for the result.
experimental optional boolean generatePreview
@ -1339,6 +1428,24 @@ domain Runtime
# Symbolic group name that can be used to release multiple objects. If objectGroup is not
# specified and objectId is, objectGroup will be inherited from object.
optional string objectGroup
# Whether to throw an exception if side effect cannot be ruled out during evaluation.
experimental optional boolean throwOnSideEffect
# An alternative way to specify the execution context to call function on.
# Compared to contextId that may be reused across processes, this is guaranteed to be
# system-unique, so it can be used to prevent accidental function call
# in context different than intended (e.g. as a result of navigation across process
# boundaries).
# This is mutually exclusive with `executionContextId`.
experimental optional string uniqueContextId
# Deprecated. Use `serializationOptions: {serialization:"deep"}` instead.
# Whether the result should contain `webDriverValue`, serialized according to
# https://w3c.github.io/webdriver-bidi. This is mutually exclusive with `returnByValue`, but
# resulting `objectId` is still provided.
deprecated optional boolean generateWebDriverValue
# Specifies the result serialization. If provided, overrides
# `generatePreview`, `returnByValue` and `generateWebDriverValue`.
experimental optional SerializationOptions serializationOptions
returns
# Call result.
RemoteObject result
@ -1418,12 +1525,21 @@ domain Runtime
# evaluation and allows unsafe-eval. Defaults to true.
experimental optional boolean allowUnsafeEvalBlockedByCSP
# An alternative way to specify the execution context to evaluate in.
# Compared to contextId that may be reused accross processes, this is guaranteed to be
# Compared to contextId that may be reused across processes, this is guaranteed to be
# system-unique, so it can be used to prevent accidental evaluation of the expression
# in context different than intended (e.g. as a result of navigation accross process
# in context different than intended (e.g. as a result of navigation across process
# boundaries).
# This is mutually exclusive with `contextId`.
experimental optional string uniqueContextId
# Deprecated. Use `serializationOptions: {serialization:"deep"}` instead.
# Whether the result should contain `webDriverValue`, serialized
# according to
# https://w3c.github.io/webdriver-bidi. This is mutually exclusive with `returnByValue`, but
# resulting `objectId` is still provided.
deprecated optional boolean generateWebDriverValue
# Specifies the result serialization. If provided, overrides
# `generatePreview`, `returnByValue` and `generateWebDriverValue`.
experimental optional SerializationOptions serializationOptions
returns
# Evaluation result.
RemoteObject result
@ -1459,6 +1575,8 @@ domain Runtime
experimental optional boolean accessorPropertiesOnly
# Whether preview should be generated for the results.
experimental optional boolean generatePreview
# If true, returns non-indexed properties only.
experimental optional boolean nonIndexedPropertiesOnly
returns
# Object properties.
array of PropertyDescriptor result
@ -1563,7 +1681,10 @@ domain Runtime
# execution context. If omitted and `executionContextName` is not set,
# the binding is exposed to all execution contexts of the target.
# This parameter is mutually exclusive with `executionContextName`.
optional ExecutionContextId executionContextId
# Deprecated in favor of `executionContextName` due to an unclear use case
# and bugs in implementation (crbug.com/1169639). `executionContextId` will be
# removed in the future.
deprecated optional ExecutionContextId executionContextId
# If specified, the binding is exposed to the executionContext with
# matching name, even for contexts created after the binding is added.
# See also `ExecutionContext.name` and `worldName` parameter to
@ -1577,6 +1698,18 @@ domain Runtime
parameters
string name
# This method tries to lookup and populate exception details for a
# JavaScript Error object.
# Note that the stackTrace portion of the resulting exceptionDetails will
# only be populated if the Runtime domain was enabled at the time when the
# Error was thrown.
experimental command getExceptionDetails
parameters
# The error object for which to resolve the exception details.
RemoteObjectId errorObjectId
returns
optional ExceptionDetails exceptionDetails
# Notification is issued every time when binding is called.
experimental event bindingCalled
parameters
@ -1648,7 +1781,9 @@ domain Runtime
event executionContextDestroyed
parameters
# Id of the destroyed context
ExecutionContextId executionContextId
deprecated ExecutionContextId executionContextId
# Unique Id of the destroyed context
experimental string executionContextUniqueId
# Issued when all executionContexts were cleared in browser
event executionContextsCleared
@ -1659,6 +1794,8 @@ domain Runtime
parameters
RemoteObject object
object hints
# Identifier of the context where the call was made.
experimental optional ExecutionContextId executionContextId
# This domain is deprecated.
deprecated domain Schema


@ -89,17 +89,6 @@ V8_PLATFORM_EXPORT void RunIdleTasks(v8::Platform* platform,
v8::Isolate* isolate,
double idle_time_in_seconds);
/**
* Attempts to set the tracing controller for the given platform.
*
* The |platform| has to be created using |NewDefaultPlatform|.
*
*/
V8_DEPRECATE_SOON("Access the DefaultPlatform directly")
V8_PLATFORM_EXPORT void SetTracingController(
v8::Platform* platform,
v8::platform::tracing::TracingController* tracing_controller);
/**
* Notifies the given platform about the Isolate getting deleted soon. Has to be
* called for all Isolates which are deleted - unless we're shutting down the

View File

@ -37,7 +37,6 @@ const int kTraceMaxNumArgs = 2;
class V8_PLATFORM_EXPORT TraceObject {
public:
union ArgValue {
V8_DEPRECATED("use as_uint ? true : false") bool as_bool;
uint64_t as_uint;
int64_t as_int;
double as_double;
@ -283,12 +282,12 @@ class V8_PLATFORM_EXPORT TracingController
const char* name, uint64_t handle) override;
static const char* GetCategoryGroupName(const uint8_t* category_enabled_flag);
#endif // !defined(V8_USE_PERFETTO)
void AddTraceStateObserver(
v8::TracingController::TraceStateObserver* observer) override;
void RemoveTraceStateObserver(
v8::TracingController::TraceStateObserver* observer) override;
#endif // !defined(V8_USE_PERFETTO)
void StartTracing(TraceConfig* trace_config);
void StopTracing();
@ -308,7 +307,6 @@ class V8_PLATFORM_EXPORT TracingController
std::unique_ptr<base::Mutex> mutex_;
std::unique_ptr<TraceConfig> trace_config_;
std::atomic_bool recording_{false};
std::unordered_set<v8::TracingController::TraceStateObserver*> observers_;
#if defined(V8_USE_PERFETTO)
std::ostream* output_stream_ = nullptr;
@ -317,6 +315,7 @@ class V8_PLATFORM_EXPORT TracingController
TraceEventListener* listener_for_testing_ = nullptr;
std::unique_ptr<perfetto::TracingSession> tracing_session_;
#else // !defined(V8_USE_PERFETTO)
std::unordered_set<v8::TracingController::TraceStateObserver*> observers_;
std::unique_ptr<TraceBuffer> trace_buffer_;
#endif // !defined(V8_USE_PERFETTO)


@ -0,0 +1,512 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_ARRAY_BUFFER_H_
#define INCLUDE_V8_ARRAY_BUFFER_H_
#include <stddef.h>
#include <memory>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class SharedArrayBuffer;
#ifndef V8_ARRAY_BUFFER_INTERNAL_FIELD_COUNT
// The number of required internal fields can be defined by embedder.
#define V8_ARRAY_BUFFER_INTERNAL_FIELD_COUNT 2
#endif
enum class ArrayBufferCreationMode { kInternalized, kExternalized };
/**
* A wrapper around the backing store (i.e. the raw memory) of an array buffer.
* See a document linked in http://crbug.com/v8/9908 for more information.
*
* The allocation and destruction of backing stores is generally managed by
* V8. Clients should always use standard C++ memory ownership types (i.e.
* std::unique_ptr and std::shared_ptr) to manage lifetimes of backing stores
* properly, since V8 internal objects may alias backing stores.
*
* This object does not keep the underlying |ArrayBuffer::Allocator| alive by
* default. Use Isolate::CreateParams::array_buffer_allocator_shared when
* creating the Isolate to make it hold a reference to the allocator itself.
*/
class V8_EXPORT BackingStore : public v8::internal::BackingStoreBase {
public:
~BackingStore();
/**
* Return a pointer to the beginning of the memory block for this backing
* store. The pointer is only valid as long as this backing store object
* lives.
*/
void* Data() const;
/**
* The length (in bytes) of this backing store.
*/
size_t ByteLength() const;
/**
* The maximum length (in bytes) that this backing store may grow to.
*
* If this backing store was created for a resizable ArrayBuffer or a growable
* SharedArrayBuffer, it is >= ByteLength(). Otherwise it is ==
* ByteLength().
*/
size_t MaxByteLength() const;
/**
* Indicates whether the backing store was created for an ArrayBuffer or
* a SharedArrayBuffer.
*/
bool IsShared() const;
/**
* Indicates whether the backing store was created for a resizable ArrayBuffer
* or a growable SharedArrayBuffer, and thus may be resized by user JavaScript
* code.
*/
bool IsResizableByUserJavaScript() const;
/**
* Prevent implicit instantiation of operator delete with size_t argument.
* The size_t argument would be incorrect because ptr points to the
* internal BackingStore object.
*/
void operator delete(void* ptr) { ::operator delete(ptr); }
/**
* Wrapper around ArrayBuffer::Allocator::Reallocate that preserves IsShared.
* Assumes that the backing_store was allocated by the ArrayBuffer allocator
* of the given isolate.
*/
static std::unique_ptr<BackingStore> Reallocate(
v8::Isolate* isolate, std::unique_ptr<BackingStore> backing_store,
size_t byte_length);
/**
* This callback is used only if the memory block for a BackingStore cannot be
* allocated with an ArrayBuffer::Allocator. In such cases the destructor of
* the BackingStore invokes the callback to free the memory block.
*/
using DeleterCallback = void (*)(void* data, size_t length,
void* deleter_data);
/**
* If the memory block of a BackingStore is static or is managed manually,
* then this empty deleter along with nullptr deleter_data can be passed to
* ArrayBuffer::NewBackingStore to indicate that.
*
* The manually managed case should be used with caution and only when it
* is guaranteed that the memory block freeing happens after detaching its
* ArrayBuffer.
*/
static void EmptyDeleter(void* data, size_t length, void* deleter_data);
private:
/**
* See [Shared]ArrayBuffer::GetBackingStore and
* [Shared]ArrayBuffer::NewBackingStore.
*/
BackingStore();
};
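The `DeleterCallback` shape declared above is a plain function pointer that receives the memory block, its length, and an opaque `deleter_data` pointer. The standalone sketch below mirrors that typedef locally (the alias and function names here are illustrative, not part of the V8 API) to show how `deleter_data` can carry state back to the embedder when the destructor of a `BackingStore` runs the callback:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Local mirror of v8::BackingStore::DeleterCallback (illustrative name).
using DeleterCallback = void (*)(void* data, std::size_t length,
                                 void* deleter_data);

// A deleter that frees a malloc'd block and records that it ran via the
// opaque deleter_data pointer (here, a bool flag owned by the embedder).
void FreeingDeleter(void* data, std::size_t /*length*/, void* deleter_data) {
  std::free(data);
  *static_cast<bool*>(deleter_data) = true;
}

// Simulates what BackingStore's destructor would do with the callback.
bool DemoDeleter() {
  bool ran = false;
  void* block = std::malloc(64);
  DeleterCallback deleter = FreeingDeleter;
  deleter(block, 64, &ran);
  return ran;
}
```

The same shape with `EmptyDeleter` and a null `deleter_data` covers the static/manually-managed case described above.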
#if !defined(V8_IMMINENT_DEPRECATION_WARNINGS)
// Use v8::BackingStore::DeleterCallback instead.
using BackingStoreDeleterCallback = void (*)(void* data, size_t length,
void* deleter_data);
#endif
/**
* An instance of the built-in ArrayBuffer constructor (ES6 draft 15.13.5).
*/
class V8_EXPORT ArrayBuffer : public Object {
public:
/**
* A thread-safe allocator that V8 uses to allocate |ArrayBuffer|'s memory.
* The allocator is a global V8 setting. It has to be set via
* Isolate::CreateParams.
*
* Memory allocated through this allocator by V8 is accounted for as external
* memory by V8. Note that V8 keeps track of the memory for all internalized
* |ArrayBuffer|s. Responsibility for tracking external memory (using
* Isolate::AdjustAmountOfExternalAllocatedMemory) is handed over to the
* embedder upon externalization and taken over upon internalization (creating
* an internalized buffer from an existing buffer).
*
* Note that it is unsafe to call back into V8 from any of the allocator
* functions.
*/
class V8_EXPORT Allocator {
public:
virtual ~Allocator() = default;
/**
* Allocate |length| bytes. Return nullptr if allocation is not successful.
* Memory should be initialized to zeroes.
*/
virtual void* Allocate(size_t length) = 0;
/**
* Allocate |length| bytes. Return nullptr if allocation is not successful.
* Memory does not have to be initialized.
*/
virtual void* AllocateUninitialized(size_t length) = 0;
/**
* Free the memory block of size |length|, pointed to by |data|.
* That memory is guaranteed to be previously allocated by |Allocate|.
*/
virtual void Free(void* data, size_t length) = 0;
/**
* Reallocate the memory block of size |old_length| to a memory block of
* size |new_length| by expanding, contracting, or copying the existing
* memory block. If |new_length| > |old_length|, then the new part of
* the memory must be initialized to zeros. Return nullptr if reallocation
* is not successful.
*
* The caller guarantees that the memory block was previously allocated
* using Allocate or AllocateUninitialized.
*
* The default implementation allocates a new block and copies data.
*/
virtual void* Reallocate(void* data, size_t old_length, size_t new_length);
/**
* ArrayBuffer allocation mode. kNormal is a malloc/free style allocation,
* while kReservation is for larger allocations with the ability to set
* access permissions.
*/
enum class AllocationMode { kNormal, kReservation };
/**
* Convenience allocator.
*
* When the sandbox is enabled, this allocator will allocate its backing
* memory inside the sandbox. Otherwise, it will rely on malloc/free.
*
* Caller takes ownership, i.e. the returned object needs to be freed using
* |delete allocator| once it is no longer in use.
*/
static Allocator* NewDefaultAllocator();
};
/**
* Data length in bytes.
*/
size_t ByteLength() const;
/**
* Maximum length in bytes.
*/
size_t MaxByteLength() const;
/**
* Create a new ArrayBuffer. Allocate |byte_length| bytes.
* Allocated memory will be owned by a created ArrayBuffer and
* will be deallocated when it is garbage-collected,
* unless the object is externalized.
*/
static Local<ArrayBuffer> New(Isolate* isolate, size_t byte_length);
/**
* Create a new ArrayBuffer with an existing backing store.
* The created array keeps a reference to the backing store until the array
* is garbage collected. Note that the IsExternal bit does not affect this
* reference from the array to the backing store.
*
* In the future the IsExternal bit will be removed. Until then the bit is set
* follows. If the backing store does not own the underlying buffer, then
* the array is created in externalized state. Otherwise, the array is created
* in internalized state. In the latter case the array can be transitioned
* to the externalized state using Externalize(backing_store).
*/
static Local<ArrayBuffer> New(Isolate* isolate,
std::shared_ptr<BackingStore> backing_store);
/**
* Returns a new standalone BackingStore that is allocated using the array
* buffer allocator of the isolate. The result can be later passed to
* ArrayBuffer::New.
*
* If the allocator returns nullptr, then the function may cause GCs in the
* given isolate and re-try the allocation. If GCs do not help, then the
* function will crash with an out-of-memory error.
*/
static std::unique_ptr<BackingStore> NewBackingStore(Isolate* isolate,
size_t byte_length);
/**
* Returns a new standalone BackingStore that takes over the ownership of
* the given buffer. The destructor of the BackingStore invokes the given
* deleter callback.
*
* The result can be later passed to ArrayBuffer::New. The raw pointer
* to the buffer must not be passed again to any V8 API function.
*/
static std::unique_ptr<BackingStore> NewBackingStore(
void* data, size_t byte_length, v8::BackingStore::DeleterCallback deleter,
void* deleter_data);
/**
* Returns a new resizable standalone BackingStore that is allocated using the
* array buffer allocator of the isolate. The result can be later passed to
* ArrayBuffer::New.
*
* |byte_length| must be <= |max_byte_length|.
*
* This function is usable without an isolate. Unlike |NewBackingStore| calls
* with an isolate, GCs cannot be triggered, and there are no
* retries. Allocation failure will cause the function to crash with an
* out-of-memory error.
*/
static std::unique_ptr<BackingStore> NewResizableBackingStore(
size_t byte_length, size_t max_byte_length);
/**
* Returns true if this ArrayBuffer may be detached.
*/
bool IsDetachable() const;
/**
* Returns true if this ArrayBuffer has been detached.
*/
bool WasDetached() const;
/**
* Detaches this ArrayBuffer and all its views (typed arrays).
* Detaching sets the byte length of the buffer and all typed arrays to zero,
* preventing JavaScript from ever accessing underlying backing store.
* ArrayBuffer should have been externalized and must be detachable.
*/
V8_DEPRECATE_SOON(
"Use the version which takes a key parameter (passing a null handle is "
"ok).")
void Detach();
/**
* Detaches this ArrayBuffer and all its views (typed arrays).
* Detaching sets the byte length of the buffer and all typed arrays to zero,
* preventing JavaScript from ever accessing underlying backing store.
* ArrayBuffer should have been externalized and must be detachable. Returns
* Nothing if the key didn't pass the [[ArrayBufferDetachKey]] check,
* Just(true) otherwise.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> Detach(v8::Local<v8::Value> key);
/**
* Sets the ArrayBufferDetachKey.
*/
void SetDetachKey(v8::Local<v8::Value> key);
/**
* Get a shared pointer to the backing store of this array buffer. This
* pointer coordinates the lifetime management of the internal storage
* with any live ArrayBuffers on the heap, even across isolates. The embedder
* should not attempt to manage lifetime of the storage through other means.
*
* The returned shared pointer will not be empty, even if the ArrayBuffer has
* been detached. Use |WasDetached| to tell if it has been detached instead.
*/
std::shared_ptr<BackingStore> GetBackingStore();
/**
* More efficient shortcut for GetBackingStore()->Data(). The returned pointer
* is valid as long as the ArrayBuffer is alive.
*/
void* Data() const;
V8_INLINE static ArrayBuffer* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<ArrayBuffer*>(value);
}
static const int kInternalFieldCount = V8_ARRAY_BUFFER_INTERNAL_FIELD_COUNT;
static const int kEmbedderFieldCount = V8_ARRAY_BUFFER_INTERNAL_FIELD_COUNT;
private:
ArrayBuffer();
static void CheckCast(Value* obj);
};
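The `Allocator` contract documented above (zeroed `Allocate`, uninitialized `AllocateUninitialized`, and a default-style `Reallocate` that copies into a fresh zeroed block) can be sketched with a minimal malloc-based implementation. This is a standalone illustration of the contract only, not V8's actual default allocator, and it does not inherit from the real `v8::ArrayBuffer::Allocator`:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Standalone sketch of the ArrayBuffer::Allocator contract documented above.
struct SketchAllocator {
  // Allocate must return zero-initialized memory.
  void* Allocate(std::size_t length) { return std::calloc(1, length); }
  // AllocateUninitialized may leave the memory uninitialized.
  void* AllocateUninitialized(std::size_t length) { return std::malloc(length); }
  void Free(void* data, std::size_t /*length*/) { std::free(data); }

  // Default-implementation style: new zeroed block, copy, free the old one,
  // so any grown tail is already zero-filled as the contract requires.
  void* Reallocate(void* data, std::size_t old_length, std::size_t new_length) {
    void* fresh = Allocate(new_length);
    if (fresh && data) {
      std::memcpy(fresh, data,
                  old_length < new_length ? old_length : new_length);
      Free(data, old_length);
    }
    return fresh;
  }
};
```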
#ifndef V8_ARRAY_BUFFER_VIEW_INTERNAL_FIELD_COUNT
// The number of required internal fields can be defined by embedder.
#define V8_ARRAY_BUFFER_VIEW_INTERNAL_FIELD_COUNT 2
#endif
/**
* A base class for an instance of one of "views" over ArrayBuffer,
* including TypedArrays and DataView (ES6 draft 15.13).
*/
class V8_EXPORT ArrayBufferView : public Object {
public:
/**
* Returns underlying ArrayBuffer.
*/
Local<ArrayBuffer> Buffer();
/**
* Byte offset in |Buffer|.
*/
size_t ByteOffset();
/**
* Size of a view in bytes.
*/
size_t ByteLength();
/**
* Copy the contents of the ArrayBufferView's buffer to an embedder defined
* memory without additional overhead that calling ArrayBufferView::Buffer
* might incur.
*
* Will write at most min(|byte_length|, ByteLength) bytes starting at
* ByteOffset of the underlying buffer to the memory starting at |dest|.
* Returns the number of bytes actually written.
*/
size_t CopyContents(void* dest, size_t byte_length);
/**
* Returns true if ArrayBufferView's backing ArrayBuffer has already been
* allocated.
*/
bool HasBuffer() const;
V8_INLINE static ArrayBufferView* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<ArrayBufferView*>(value);
}
static const int kInternalFieldCount =
V8_ARRAY_BUFFER_VIEW_INTERNAL_FIELD_COUNT;
static const int kEmbedderFieldCount =
V8_ARRAY_BUFFER_VIEW_INTERNAL_FIELD_COUNT;
private:
ArrayBufferView();
static void CheckCast(Value* obj);
};
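`CopyContents` above writes at most `min(byte_length, ByteLength())` bytes and returns the count actually written. The clamping rule can be sketched standalone (the helper name and signature here are illustrative, not the real member function):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Sketch of the CopyContents clamping rule documented above: copy at most
// min(requested, view_byte_length) bytes and report how many were written.
std::size_t ClampedCopy(void* dest, std::size_t requested,
                        const void* view_data, std::size_t view_byte_length) {
  std::size_t n = requested < view_byte_length ? requested : view_byte_length;
  std::memcpy(dest, view_data, n);
  return n;
}
```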
/**
* An instance of DataView constructor (ES6 draft 15.13.7).
*/
class V8_EXPORT DataView : public ArrayBufferView {
public:
static Local<DataView> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<DataView> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static DataView* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<DataView*>(value);
}
private:
DataView();
static void CheckCast(Value* obj);
};
/**
* An instance of the built-in SharedArrayBuffer constructor.
*/
class V8_EXPORT SharedArrayBuffer : public Object {
public:
/**
* Data length in bytes.
*/
size_t ByteLength() const;
/**
* Maximum length in bytes.
*/
size_t MaxByteLength() const;
/**
* Create a new SharedArrayBuffer. Allocate |byte_length| bytes.
* Allocated memory will be owned by a created SharedArrayBuffer and
* will be deallocated when it is garbage-collected,
* unless the object is externalized.
*/
static Local<SharedArrayBuffer> New(Isolate* isolate, size_t byte_length);
/**
* Create a new SharedArrayBuffer with an existing backing store.
* The created array keeps a reference to the backing store until the array
* is garbage collected. Note that the IsExternal bit does not affect this
* reference from the array to the backing store.
*
* In the future the IsExternal bit will be removed. Until then the bit is set
* follows. If the backing store does not own the underlying buffer, then
* the array is created in externalized state. Otherwise, the array is created
* in internalized state. In the latter case the array can be transitioned
* to the externalized state using Externalize(backing_store).
*/
static Local<SharedArrayBuffer> New(
Isolate* isolate, std::shared_ptr<BackingStore> backing_store);
/**
* Returns a new standalone BackingStore that is allocated using the array
* buffer allocator of the isolate. The result can be later passed to
* SharedArrayBuffer::New.
*
* If the allocator returns nullptr, then the function may cause GCs in the
* given isolate and re-try the allocation. If GCs do not help, then the
* function will crash with an out-of-memory error.
*/
static std::unique_ptr<BackingStore> NewBackingStore(Isolate* isolate,
size_t byte_length);
/**
* Returns a new standalone BackingStore that takes over the ownership of
* the given buffer. The destructor of the BackingStore invokes the given
* deleter callback.
*
* The result can be later passed to SharedArrayBuffer::New. The raw pointer
* to the buffer must not be passed again to any V8 functions.
*/
static std::unique_ptr<BackingStore> NewBackingStore(
void* data, size_t byte_length, v8::BackingStore::DeleterCallback deleter,
void* deleter_data);
/**
* Get a shared pointer to the backing store of this array buffer. This
* pointer coordinates the lifetime management of the internal storage
* with any live ArrayBuffers on the heap, even across isolates. The embedder
* should not attempt to manage lifetime of the storage through other means.
*/
std::shared_ptr<BackingStore> GetBackingStore();
/**
* More efficient shortcut for GetBackingStore()->Data(). The returned pointer
* is valid as long as the ArrayBuffer is alive.
*/
void* Data() const;
V8_INLINE static SharedArrayBuffer* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<SharedArrayBuffer*>(value);
}
static const int kInternalFieldCount = V8_ARRAY_BUFFER_INTERNAL_FIELD_COUNT;
private:
SharedArrayBuffer();
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_ARRAY_BUFFER_H_

View File

@@ -0,0 +1,422 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_ISOLATE_CALLBACKS_H_
#define INCLUDE_V8_ISOLATE_CALLBACKS_H_
#include <stddef.h>
#include <functional>
#include <string>
#include "cppgc/common.h"
#include "v8-data.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-promise.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
#if defined(V8_OS_WIN)
struct _EXCEPTION_POINTERS;
#endif
namespace v8 {
template <typename T>
class FunctionCallbackInfo;
class Isolate;
class Message;
class Module;
class Object;
class Promise;
class ScriptOrModule;
class String;
class UnboundScript;
class Value;
/**
* A JIT code event is issued each time code is added, moved or removed.
*
* \note removal events are not currently issued.
*/
struct JitCodeEvent {
enum EventType {
CODE_ADDED,
CODE_MOVED,
CODE_REMOVED,
CODE_ADD_LINE_POS_INFO,
CODE_START_LINE_INFO_RECORDING,
CODE_END_LINE_INFO_RECORDING
};
// Definition of the code position type. The "POSITION" type means the place
// in the source code which is of interest when making stack traces to
// pin-point the source location of a stack frame as close as possible.
// The "STATEMENT_POSITION" type means the place at the beginning of each
// statement, and is used to indicate possible break locations.
enum PositionType { POSITION, STATEMENT_POSITION };
// There are three different kinds of CodeType, one for JIT code generated
// by the optimizing compiler, one for byte code generated for the
// interpreter, and one for code generated from Wasm. For JIT_CODE and
// WASM_CODE, |code_start| points to the beginning of jitted assembly code,
// while for BYTE_CODE events, |code_start| points to the first bytecode of
// the interpreted function.
enum CodeType { BYTE_CODE, JIT_CODE, WASM_CODE };
// Type of event.
EventType type;
CodeType code_type;
// Start of the instructions.
void* code_start;
// Size of the instructions.
size_t code_len;
// Script info for CODE_ADDED event.
Local<UnboundScript> script;
// User-defined data for *_LINE_INFO_* event. It's used to hold the source
// code line information which is returned from the
// CODE_START_LINE_INFO_RECORDING event. And it's passed to subsequent
// CODE_ADD_LINE_POS_INFO and CODE_END_LINE_INFO_RECORDING events.
void* user_data;
struct name_t {
// Name of the object associated with the code; note that the string is not
// zero-terminated.
const char* str;
// Number of chars in str.
size_t len;
};
struct line_info_t {
// PC offset
size_t offset;
// Code position
size_t pos;
// The position type.
PositionType position_type;
};
struct wasm_source_info_t {
// Source file name.
const char* filename;
// Length of filename.
size_t filename_size;
// Line number table, which maps offsets of JITted code to line numbers of
// source file.
const line_info_t* line_number_table;
// Number of entries in the line number table.
size_t line_number_table_size;
};
wasm_source_info_t* wasm_source_info = nullptr;
union {
// Only valid for CODE_ADDED.
struct name_t name;
// Only valid for CODE_ADD_LINE_POS_INFO
struct line_info_t line_info;
// New location of instructions. Only valid for CODE_MOVED.
void* new_code_start;
};
Isolate* isolate;
};
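As the comments above note, only the union member matching the current event type is valid (`name` for `CODE_ADDED`, `line_info` for `CODE_ADD_LINE_POS_INFO`, `new_code_start` for `CODE_MOVED`). A reduced standalone mirror of that layout (the `MiniJitEvent` type here is illustrative, not the real `JitCodeEvent`) shows the type-tagged access pattern a handler should follow:

```cpp
#include <cassert>
#include <cstddef>

// Reduced standalone mirror of JitCodeEvent: which union member is valid
// depends on the event type, as documented in the header above.
struct MiniJitEvent {
  enum EventType { CODE_ADDED, CODE_MOVED } type;
  void* code_start;
  std::size_t code_len;
  struct name_t {
    const char* str;  // not zero-terminated
    std::size_t len;
  };
  union {
    name_t name;           // only valid for CODE_ADDED
    void* new_code_start;  // only valid for CODE_MOVED
  };
};

// A handler must check the tag before touching a union member.
std::size_t NameLenIfAdded(const MiniJitEvent& e) {
  return e.type == MiniJitEvent::CODE_ADDED ? e.name.len : 0;
}
```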
/**
* Option flags passed to the SetJitCodeEventHandler function.
*/
enum JitCodeEventOptions {
kJitCodeEventDefault = 0,
// Generate callbacks for already existent code.
kJitCodeEventEnumExisting = 1
};
/**
* Callback function passed to SetJitCodeEventHandler.
*
* \param event code add, move or removal event.
*/
using JitCodeEventHandler = void (*)(const JitCodeEvent* event);
// --- Garbage Collection Callbacks ---
/**
* Applications can register callback functions which will be called before and
* after certain garbage collection operations. Allocations are not allowed in
* the callback functions; you therefore cannot manipulate objects (set or
* delete properties for example) since it is possible such operations will
* result in the allocation of objects.
*/
enum GCType {
kGCTypeScavenge = 1 << 0,
kGCTypeMinorMarkCompact = 1 << 1,
kGCTypeMarkSweepCompact = 1 << 2,
kGCTypeIncrementalMarking = 1 << 3,
kGCTypeProcessWeakCallbacks = 1 << 4,
kGCTypeAll = kGCTypeScavenge | kGCTypeMinorMarkCompact |
kGCTypeMarkSweepCompact | kGCTypeIncrementalMarking |
kGCTypeProcessWeakCallbacks
};
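Each `GCType` value above occupies its own bit, so an embedder can register a callback for a mask of collection types, and `kGCTypeAll` is the OR of the five individual flags. A standalone mirror of that bit layout (illustrative names, not the real enum) demonstrates mask filtering:

```cpp
#include <cassert>

// Mirror of the GCType bit flags above (illustrative names): each value
// occupies its own bit so callbacks can be registered for a mask of types.
enum GCTypeSketch {
  kScavenge             = 1 << 0,
  kMinorMarkCompact     = 1 << 1,
  kMarkSweepCompact     = 1 << 2,
  kIncrementalMarking   = 1 << 3,
  kProcessWeakCallbacks = 1 << 4,
  kAll = kScavenge | kMinorMarkCompact | kMarkSweepCompact |
         kIncrementalMarking | kProcessWeakCallbacks
};

// True if an event of the given type falls inside the registered mask.
bool WantsEvent(int registered_mask, GCTypeSketch event) {
  return (registered_mask & event) != 0;
}
```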
/**
* GCCallbackFlags is used to notify additional information about the GC
* callback.
* - kGCCallbackFlagConstructRetainedObjectInfos: The GC callback is for
* constructing retained object infos.
* - kGCCallbackFlagForced: The GC callback is for a forced GC for testing.
* - kGCCallbackFlagSynchronousPhantomCallbackProcessing: The GC callback
* is called synchronously without getting posted to an idle task.
* - kGCCallbackFlagCollectAllAvailableGarbage: The GC callback is called
* in a phase where V8 is trying to collect all available garbage
* (e.g., handling a low memory notification).
* - kGCCallbackScheduleIdleGarbageCollection: The GC callback is called to
* trigger an idle garbage collection.
*/
enum GCCallbackFlags {
kNoGCCallbackFlags = 0,
kGCCallbackFlagConstructRetainedObjectInfos = 1 << 1,
kGCCallbackFlagForced = 1 << 2,
kGCCallbackFlagSynchronousPhantomCallbackProcessing = 1 << 3,
kGCCallbackFlagCollectAllAvailableGarbage = 1 << 4,
kGCCallbackFlagCollectAllExternalMemory = 1 << 5,
kGCCallbackScheduleIdleGarbageCollection = 1 << 6,
};
using GCCallback = void (*)(GCType type, GCCallbackFlags flags);
using InterruptCallback = void (*)(Isolate* isolate, void* data);
/**
* This callback is invoked when the heap size is close to the heap limit and
* V8 is likely to abort with out-of-memory error.
* The callback can extend the heap limit by returning a value that is greater
* than the current_heap_limit. The initial heap limit is the limit that was
* set after heap setup.
*/
using NearHeapLimitCallback = size_t (*)(void* data, size_t current_heap_limit,
size_t initial_heap_limit);
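Per the comment above, the callback extends the heap limit by returning a value greater than `current_heap_limit`. One common policy is to extend at most once and then let V8 proceed; the sketch below implements that shape standalone (the function name and the double-once policy are illustrative, and the extension count travels through the opaque `data` pointer):

```cpp
#include <cassert>
#include <cstddef>

// Matches the NearHeapLimitCallback shape above. This illustrative policy
// doubles the limit once, then stops extending (returning the current limit
// leaves it unchanged); the counter is carried via the opaque data pointer.
std::size_t DoubleOnceNearHeapLimit(void* data, std::size_t current_heap_limit,
                                    std::size_t /*initial_heap_limit*/) {
  int* extensions = static_cast<int*>(data);
  if (*extensions == 0) {
    ++*extensions;
    return current_heap_limit * 2;  // > current_heap_limit extends the limit
  }
  return current_heap_limit;  // unchanged: V8 may proceed to OOM handling
}
```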
/**
* Callback function passed to SetUnhandledExceptionCallback.
*/
#if defined(V8_OS_WIN)
using UnhandledExceptionCallback =
int (*)(_EXCEPTION_POINTERS* exception_pointers);
#endif
// --- Counters Callbacks ---
using CounterLookupCallback = int* (*)(const char* name);
using CreateHistogramCallback = void* (*)(const char* name, int min, int max,
size_t buckets);
using AddHistogramSampleCallback = void (*)(void* histogram, int sample);
// --- Exceptions ---
using FatalErrorCallback = void (*)(const char* location, const char* message);
struct OOMDetails {
bool is_heap_oom = false;
const char* detail = nullptr;
};
using OOMErrorCallback = void (*)(const char* location,
const OOMDetails& details);
using MessageCallback = void (*)(Local<Message> message, Local<Value> data);
// --- Tracing ---
enum LogEventStatus : int { kStart = 0, kEnd = 1, kStamp = 2 };
using LogEventCallback = void (*)(const char* name,
int /* LogEventStatus */ status);
// --- Crashkeys Callback ---
enum class CrashKeyId {
kIsolateAddress,
kReadonlySpaceFirstPageAddress,
kMapSpaceFirstPageAddress V8_ENUM_DEPRECATE_SOON("Map space got removed"),
kOldSpaceFirstPageAddress,
kCodeRangeBaseAddress,
kCodeSpaceFirstPageAddress,
kDumpType,
kSnapshotChecksumCalculated,
kSnapshotChecksumExpected,
};
using AddCrashKeyCallback = void (*)(CrashKeyId id, const std::string& value);
// --- Enter/Leave Script Callback ---
using BeforeCallEnteredCallback = void (*)(Isolate*);
using CallCompletedCallback = void (*)(Isolate*);
// --- AllowCodeGenerationFromStrings callbacks ---
/**
* Callback to check if code generation from strings is allowed. See
* Context::AllowCodeGenerationFromStrings.
*/
using AllowCodeGenerationFromStringsCallback = bool (*)(Local<Context> context,
Local<String> source);
struct ModifyCodeGenerationFromStringsResult {
// If true, proceed with the codegen algorithm. Otherwise, block it.
bool codegen_allowed = false;
// Overwrite the original source with this string, if present.
// Use the original source if empty.
// This field is considered only if codegen_allowed is true.
MaybeLocal<String> modified_source;
};
/**
* Access type specification.
*/
enum AccessType {
ACCESS_GET,
ACCESS_SET,
ACCESS_HAS,
ACCESS_DELETE,
ACCESS_KEYS
};
// --- Failed Access Check Callback ---
using FailedAccessCheckCallback = void (*)(Local<Object> target,
AccessType type, Local<Value> data);
/**
* Callback to check if codegen is allowed from a source object, and convert
* the source to string if necessary. See: ModifyCodeGenerationFromStrings.
*/
using ModifyCodeGenerationFromStringsCallback =
ModifyCodeGenerationFromStringsResult (*)(Local<Context> context,
Local<Value> source);
using ModifyCodeGenerationFromStringsCallback2 =
ModifyCodeGenerationFromStringsResult (*)(Local<Context> context,
Local<Value> source,
bool is_code_like);
// --- WebAssembly compilation callbacks ---
using ExtensionCallback = bool (*)(const FunctionCallbackInfo<Value>&);
using AllowWasmCodeGenerationCallback = bool (*)(Local<Context> context,
Local<String> source);
// --- Callback for APIs defined on v8-supported objects, but implemented
// by the embedder. Example: WebAssembly.{compile|instantiate}Streaming ---
using ApiImplementationCallback = void (*)(const FunctionCallbackInfo<Value>&);
// --- Callback for WebAssembly.compileStreaming ---
using WasmStreamingCallback = void (*)(const FunctionCallbackInfo<Value>&);
enum class WasmAsyncSuccess { kSuccess, kFail };
// --- Callback called when async WebAssembly operations finish ---
using WasmAsyncResolvePromiseCallback = void (*)(
Isolate* isolate, Local<Context> context, Local<Promise::Resolver> resolver,
Local<Value> result, WasmAsyncSuccess success);
// --- Callback for loading source map file for Wasm profiling support
using WasmLoadSourceMapCallback = Local<String> (*)(Isolate* isolate,
const char* name);
// --- Callback for checking if WebAssembly GC is enabled ---
// If the callback returns true, it will also enable Wasm stringrefs.
using WasmGCEnabledCallback = bool (*)(Local<Context> context);
// --- Callback for checking if the SharedArrayBuffer constructor is enabled ---
using SharedArrayBufferConstructorEnabledCallback =
bool (*)(Local<Context> context);
// --- Callback for checking if the compile hints magic comments are enabled ---
using JavaScriptCompileHintsMagicEnabledCallback =
bool (*)(Local<Context> context);
/**
* HostImportModuleDynamicallyCallback is called when we
* require the embedder to load a module. This is used as part of the dynamic
* import syntax.
*
* The referrer contains metadata about the script/module that calls
* import.
*
* The specifier is the name of the module that should be imported.
*
* The import_assertions are import assertions for this request in the form:
* [key1, value1, key2, value2, ...] where the keys and values are of type
* v8::String. Note, unlike the FixedArray passed to ResolveModuleCallback and
* returned from ModuleRequest::GetImportAssertions(), this array does not
* contain the source Locations of the assertions.
*
* The embedder must compile, instantiate, evaluate the Module, and
* obtain its namespace object.
*
* The Promise returned from this function is forwarded to userland
* JavaScript. The embedder must resolve this promise with the module
* namespace object. In case of an exception, the embedder must reject
* this promise with the exception. If the promise creation itself
* fails (e.g. due to stack overflow), the embedder must propagate
* that exception by returning an empty MaybeLocal.
*/
using HostImportModuleDynamicallyWithImportAssertionsCallback =
MaybeLocal<Promise> (*)(Local<Context> context,
Local<ScriptOrModule> referrer,
Local<String> specifier,
Local<FixedArray> import_assertions);
using HostImportModuleDynamicallyCallback = MaybeLocal<Promise> (*)(
Local<Context> context, Local<Data> host_defined_options,
Local<Value> resource_name, Local<String> specifier,
Local<FixedArray> import_assertions);
/**
* Callback for requesting a compile hint for a function from the embedder. The
* first parameter is the position of the function in source code and the second
* parameter is embedder data to be passed back.
*/
using CompileHintCallback = bool (*)(int, void*);
/**
* HostInitializeImportMetaObjectCallback is called the first time import.meta
* is accessed for a module. Subsequent access will reuse the same value.
*
* The method combines two implementation-defined abstract operations into one:
* HostGetImportMetaProperties and HostFinalizeImportMeta.
*
* The embedder should use v8::Object::CreateDataProperty to add properties on
* the meta object.
*/
using HostInitializeImportMetaObjectCallback = void (*)(Local<Context> context,
Local<Module> module,
Local<Object> meta);
/**
* HostCreateShadowRealmContextCallback is called each time a ShadowRealm is
* being constructed in the initiator_context.
*
* The method combines Context creation and implementation defined abstract
* operation HostInitializeShadowRealm into one.
*
* The embedder should use v8::Context::New or v8::Context::NewFromSnapshot to
* create a new context. If the creation fails, the embedder must propagate
* that exception by returning an empty MaybeLocal.
*/
using HostCreateShadowRealmContextCallback =
MaybeLocal<Context> (*)(Local<Context> initiator_context);
/**
* PrepareStackTraceCallback is called when the stack property of an error is
* first accessed. The return value will be used as the stack value. If this
* callback is registered, the |Error.prepareStackTrace| API will be disabled.
* |sites| is an array of call sites, specified in
* https://v8.dev/docs/stack-trace-api
*/
using PrepareStackTraceCallback = MaybeLocal<Value> (*)(Local<Context> context,
Local<Value> error,
Local<Array> sites);
} // namespace v8
#endif // INCLUDE_V8_ISOLATE_CALLBACKS_H_

View File

@@ -0,0 +1,129 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_CONTAINER_H_
#define INCLUDE_V8_CONTAINER_H_
#include <stddef.h>
#include <stdint.h>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Isolate;
/**
* An instance of the built-in array constructor (ECMA-262, 15.4.2).
*/
class V8_EXPORT Array : public Object {
public:
uint32_t Length() const;
/**
* Creates a JavaScript array with the given length. If the length
* is negative the returned array will have length 0.
*/
static Local<Array> New(Isolate* isolate, int length = 0);
/**
* Creates a JavaScript array out of a Local<Value> array in C++
* with a known length.
*/
static Local<Array> New(Isolate* isolate, Local<Value>* elements,
size_t length);
V8_INLINE static Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Array*>(value);
}
private:
Array();
static void CheckCast(Value* obj);
};
/**
* An instance of the built-in Map constructor (ECMA-262, 6th Edition, 23.1.1).
*/
class V8_EXPORT Map : public Object {
public:
size_t Size() const;
void Clear();
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT MaybeLocal<Map> Set(Local<Context> context,
Local<Value> key,
Local<Value> value);
V8_WARN_UNUSED_RESULT Maybe<bool> Has(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT Maybe<bool> Delete(Local<Context> context,
Local<Value> key);
/**
* Returns an array of length Size() * 2, where index N is the Nth key and
* index N + 1 is the Nth value.
*/
Local<Array> AsArray() const;
/**
* Creates a new empty Map.
*/
static Local<Map> New(Isolate* isolate);
V8_INLINE static Map* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Map*>(value);
}
private:
Map();
static void CheckCast(Value* obj);
};
/**
* An instance of the built-in Set constructor (ECMA-262, 6th Edition, 23.2.1).
*/
class V8_EXPORT Set : public Object {
public:
size_t Size() const;
void Clear();
V8_WARN_UNUSED_RESULT MaybeLocal<Set> Add(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT Maybe<bool> Has(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT Maybe<bool> Delete(Local<Context> context,
Local<Value> key);
/**
* Returns an array of the keys in this Set.
*/
Local<Array> AsArray() const;
/**
* Creates a new empty Set.
*/
static Local<Set> New(Isolate* isolate);
V8_INLINE static Set* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Set*>(value);
}
private:
Set();
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_CONTAINER_H_
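Map::AsArray() above flattens the entries into a key/value-interleaved array: for the Nth entry, one index holds the key and the next holds the value. A minimal sketch of that interleaved layout in plain standard C++ (no V8 types; `AsInterleavedArray` is an illustrative stand-in, not a V8 API):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Sketch of Map::AsArray()'s layout using plain standard types: for the
// Nth entry, index 2*N holds the key and index 2*N + 1 holds the value.
// Illustrative only -- the real API returns a Local<Array> of Local<Value>.
std::vector<int> AsInterleavedArray(
    const std::vector<std::pair<int, int>>& entries) {
  std::vector<int> flat;
  flat.reserve(entries.size() * 2);
  for (const auto& kv : entries) {
    flat.push_back(kv.first);   // index 2*N: key
    flat.push_back(kv.second);  // index 2*N + 1: value
  }
  return flat;
}
```

Walking such an array two slots at a time recovers the original entries in insertion order.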

@ -0,0 +1,455 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_CONTEXT_H_
#define INCLUDE_V8_CONTEXT_H_
#include <stdint.h>
#include <vector>
#include "v8-data.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-maybe.h" // NOLINT(build/include_directory)
#include "v8-snapshot.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Function;
class MicrotaskQueue;
class Object;
class ObjectTemplate;
class Value;
class String;
/**
* A container for extension names.
*/
class V8_EXPORT ExtensionConfiguration {
public:
ExtensionConfiguration() : name_count_(0), names_(nullptr) {}
ExtensionConfiguration(int name_count, const char* names[])
: name_count_(name_count), names_(names) {}
const char** begin() const { return &names_[0]; }
const char** end() const { return &names_[name_count_]; }
private:
const int name_count_;
const char** names_;
};
/**
* A sandboxed execution context with its own set of built-in objects
* and functions.
*/
class V8_EXPORT Context : public Data {
public:
/**
* Returns the global proxy object.
*
* Global proxy object is a thin wrapper whose prototype points to actual
* context's global object with the properties like Object, etc. This is done
* that way for security reasons (for more details see
* https://wiki.mozilla.org/Gecko:SplitWindow).
*
* Please note that changes to global proxy object prototype most probably
* would break VM---v8 expects only global object as a prototype of global
* proxy object.
*/
Local<Object> Global();
/**
* Detaches the global object from its context before
* the global object can be reused to create a new context.
*/
void DetachGlobal();
/**
* Creates a new context and returns a handle to the newly allocated
* context.
*
* \param isolate The isolate in which to create the context.
*
* \param extensions An optional extension configuration containing
* the extensions to be installed in the newly created context.
*
* \param global_template An optional object template from which the
* global object for the newly created context will be created.
*
* \param global_object An optional global object to be reused for
* the newly created context. This global object must have been
* created by a previous call to Context::New with the same global
* template. The state of the global object will be completely reset
* and only object identity will remain.
*/
static Local<Context> New(
Isolate* isolate, ExtensionConfiguration* extensions = nullptr,
MaybeLocal<ObjectTemplate> global_template = MaybeLocal<ObjectTemplate>(),
MaybeLocal<Value> global_object = MaybeLocal<Value>(),
DeserializeInternalFieldsCallback internal_fields_deserializer =
DeserializeInternalFieldsCallback(),
MicrotaskQueue* microtask_queue = nullptr);
/**
* Create a new context from a (non-default) context snapshot. There
* is no way to provide a global object template since we do not create
* a new global object from a template, but we can reuse a global object.
*
* \param isolate See v8::Context::New.
*
* \param context_snapshot_index The index of the context snapshot to
* deserialize from. Use v8::Context::New for the default snapshot.
*
* \param embedder_fields_deserializer Optional callback to deserialize
* internal fields. It should match the SerializeInternalFieldCallback used
* to serialize.
*
* \param extensions See v8::Context::New.
*
* \param global_object See v8::Context::New.
*/
static MaybeLocal<Context> FromSnapshot(
Isolate* isolate, size_t context_snapshot_index,
DeserializeInternalFieldsCallback embedder_fields_deserializer =
DeserializeInternalFieldsCallback(),
ExtensionConfiguration* extensions = nullptr,
MaybeLocal<Value> global_object = MaybeLocal<Value>(),
MicrotaskQueue* microtask_queue = nullptr);
/**
* Returns a global object that isn't backed by an actual context.
*
* The global template needs to have access checks with handlers installed.
* If an existing global object is passed in, the global object is detached
* from its context.
*
* Note that this is different from a detached context where all accesses to
* the global proxy will fail. Instead, the access check handlers are invoked.
*
* It is also not possible to detach an object returned by this method.
* Instead, the access check handlers need to return nothing to achieve the
* same effect.
*
* It is possible, however, to create a new context from the global object
* returned by this method.
*/
static MaybeLocal<Object> NewRemoteContext(
Isolate* isolate, Local<ObjectTemplate> global_template,
MaybeLocal<Value> global_object = MaybeLocal<Value>());
/**
* Sets the security token for the context. To access an object in
* another context, the security tokens must match.
*/
void SetSecurityToken(Local<Value> token);
/** Restores the security token to the default value. */
void UseDefaultSecurityToken();
/** Returns the security token of this context.*/
Local<Value> GetSecurityToken();
/**
* Enter this context. After entering a context, all code compiled
* and run is compiled and run in this context. If another context
* is already entered, this old context is saved so it can be
* restored when the new context is exited.
*/
void Enter();
/**
* Exit this context. Exiting the current context restores the
* context that was in place when entering the current context.
*/
void Exit();
/**
* Delegate to help with Deep freezing embedder-specific objects (such as
* JSApiObjects) that cannot be frozen natively.
*/
class DeepFreezeDelegate {
public:
/**
* Performs embedder-specific operations to freeze the provided embedder
* object. The provided object *will* be frozen by DeepFreeze after this
* function returns, so only embedder-specific objects need to be frozen.
* This function *may not* create new JS objects or perform JS allocations.
* Any v8 objects reachable from the provided embedder object that should
* also be considered for freezing should be added to the children_out
* parameter. Returns true if the operation completed successfully.
*/
virtual bool FreezeEmbedderObjectAndGetChildren(
Local<Object> obj, std::vector<Local<Object>>& children_out) = 0;
};
/**
* Attempts to recursively freeze all objects reachable from this context.
* Some objects (generators, iterators, non-const closures) cannot be frozen
* and will cause this method to throw an error. An optional delegate can be
* provided to help freeze embedder-specific objects.
*
* Freezing occurs in two steps:
* 1. "Marking" where we iterate through all objects reachable by this
* context, accumulating a list of objects that need to be frozen and
* looking for objects that can't be frozen. This step is separated because
* it is more efficient when we can assume there is no garbage collection.
* 2. "Freezing" where we go through the list of objects and freeze them.
* This effectively requires copying them so it may trigger garbage
* collection.
*/
Maybe<void> DeepFreeze(DeepFreezeDelegate* delegate = nullptr);
/** Returns the isolate associated with a current context. */
Isolate* GetIsolate();
/** Returns the microtask queue associated with a current context. */
MicrotaskQueue* GetMicrotaskQueue();
/** Sets the microtask queue associated with the current context. */
void SetMicrotaskQueue(MicrotaskQueue* queue);
/**
* The field at kDebugIdIndex used to be reserved for the inspector.
* It now serves no purpose.
*/
enum EmbedderDataFields { kDebugIdIndex = 0 };
/**
* Return the number of fields allocated for embedder data.
*/
uint32_t GetNumberOfEmbedderDataFields();
/**
* Gets the embedder data with the given index, which must have been set by a
* previous call to SetEmbedderData with the same index.
*/
V8_INLINE Local<Value> GetEmbedderData(int index);
/**
* Gets the binding object used by V8 extras. Extra natives get a reference
* to this object and can use it to "export" functionality by adding
* properties. Extra natives can also "import" functionality by accessing
* properties added by the embedder using the V8 API.
*/
Local<Object> GetExtrasBindingObject();
/**
* Sets the embedder data with the given index, growing the data as
* needed. Note that index 0 currently has a special meaning for Chrome's
* debugger.
*/
void SetEmbedderData(int index, Local<Value> value);
/**
* Gets a 2-byte-aligned native pointer from the embedder data with the given
* index, which must have been set by a previous call to
* SetAlignedPointerInEmbedderData with the same index. Note that index 0
* currently has a special meaning for Chrome's debugger.
*/
V8_INLINE void* GetAlignedPointerFromEmbedderData(int index);
/**
* Sets a 2-byte-aligned native pointer in the embedder data with the given
* index, growing the data as needed. Note that index 0 currently has a
* special meaning for Chrome's debugger.
*/
void SetAlignedPointerInEmbedderData(int index, void* value);
/**
* Control whether code generation from strings is allowed. Calling
* this method with false will disable 'eval' and the 'Function'
* constructor for code running in this context. If 'eval' or the
* 'Function' constructor are used an exception will be thrown.
*
* If code generation from strings is not allowed the
* V8::AllowCodeGenerationFromStrings callback will be invoked if
* set before blocking the call to 'eval' or the 'Function'
* constructor. If that callback returns true, the call will be
* allowed, otherwise an exception will be thrown. If no callback is
* set an exception will be thrown.
*/
void AllowCodeGenerationFromStrings(bool allow);
/**
* Returns true if code generation from strings is allowed for the context.
* For more details see AllowCodeGenerationFromStrings(bool) documentation.
*/
bool IsCodeGenerationFromStringsAllowed() const;
/**
* Sets the error description for the exception that is thrown when
* code generation from strings is not allowed and 'eval' or the 'Function'
* constructor are called.
*/
void SetErrorMessageForCodeGenerationFromStrings(Local<String> message);
/**
* Sets the error description for the exception that is thrown when
* wasm code generation is not allowed.
*/
void SetErrorMessageForWasmCodeGeneration(Local<String> message);
/**
* Return data that was previously attached to the context snapshot via
* SnapshotCreator, and removes the reference to it.
* Repeated call with the same index returns an empty MaybeLocal.
*/
template <class T>
V8_INLINE MaybeLocal<T> GetDataFromSnapshotOnce(size_t index);
/**
* If callback is set, abort any attempt to execute JavaScript in this
* context, call the specified callback, and throw an exception.
* To unset abort, pass nullptr as callback.
*/
using AbortScriptExecutionCallback = void (*)(Isolate* isolate,
Local<Context> context);
void SetAbortScriptExecution(AbortScriptExecutionCallback callback);
/**
* Returns the value that was set or restored by
* SetContinuationPreservedEmbedderData(), if any.
*/
Local<Value> GetContinuationPreservedEmbedderData() const;
/**
* Sets a value that will be stored on continuations and reset while the
* continuation runs.
*/
void SetContinuationPreservedEmbedderData(Local<Value> context);
/**
* Set or clear hooks to be invoked for promise lifecycle operations.
* To clear a hook, set it to an empty v8::Function. Each function will
* receive the observed promise as the first argument. If a chaining
* operation is used on a promise, the init will additionally receive
* the parent promise as the second argument.
*/
void SetPromiseHooks(Local<Function> init_hook, Local<Function> before_hook,
Local<Function> after_hook,
Local<Function> resolve_hook);
bool HasTemplateLiteralObject(Local<Value> object);
/**
* Stack-allocated class which sets the execution context for all
* operations executed within a local scope.
*/
class V8_NODISCARD Scope {
public:
explicit V8_INLINE Scope(Local<Context> context) : context_(context) {
context_->Enter();
}
V8_INLINE ~Scope() { context_->Exit(); }
private:
Local<Context> context_;
};
/**
* Stack-allocated class to support the backup incumbent settings object
* stack.
* https://html.spec.whatwg.org/multipage/webappapis.html#backup-incumbent-settings-object-stack
*/
class V8_EXPORT V8_NODISCARD BackupIncumbentScope final {
public:
/**
* |backup_incumbent_context| is pushed onto the backup incumbent settings
* object stack.
*/
explicit BackupIncumbentScope(Local<Context> backup_incumbent_context);
~BackupIncumbentScope();
private:
friend class internal::Isolate;
uintptr_t JSStackComparableAddressPrivate() const {
return js_stack_comparable_address_;
}
Local<Context> backup_incumbent_context_;
uintptr_t js_stack_comparable_address_ = 0;
const BackupIncumbentScope* prev_ = nullptr;
};
V8_INLINE static Context* Cast(Data* data);
private:
friend class Value;
friend class Script;
friend class Object;
friend class Function;
static void CheckCast(Data* obj);
internal::Address* GetDataFromSnapshotOnce(size_t index);
Local<Value> SlowGetEmbedderData(int index);
void* SlowGetAlignedPointerFromEmbedderData(int index);
};
// --- Implementation ---
Local<Value> Context::GetEmbedderData(int index) {
#ifndef V8_ENABLE_CHECKS
using A = internal::Address;
using I = internal::Internals;
A ctx = internal::ValueHelper::ValueAsAddress(this);
A embedder_data =
I::ReadTaggedPointerField(ctx, I::kNativeContextEmbedderDataOffset);
int value_offset =
I::kEmbedderDataArrayHeaderSize + (I::kEmbedderDataSlotSize * index);
A value = I::ReadRawField<A>(embedder_data, value_offset);
#ifdef V8_COMPRESS_POINTERS
// We read the full pointer value and then decompress it in order to avoid
// dealing with potential endianness issues.
value = I::DecompressTaggedField(embedder_data, static_cast<uint32_t>(value));
#endif
auto isolate = reinterpret_cast<v8::Isolate*>(
internal::IsolateFromNeverReadOnlySpaceObject(ctx));
return Local<Value>::New(isolate, value);
#else
return SlowGetEmbedderData(index);
#endif
}
void* Context::GetAlignedPointerFromEmbedderData(int index) {
#if !defined(V8_ENABLE_CHECKS)
using A = internal::Address;
using I = internal::Internals;
A ctx = internal::ValueHelper::ValueAsAddress(this);
A embedder_data =
I::ReadTaggedPointerField(ctx, I::kNativeContextEmbedderDataOffset);
int value_offset = I::kEmbedderDataArrayHeaderSize +
(I::kEmbedderDataSlotSize * index) +
I::kEmbedderDataSlotExternalPointerOffset;
Isolate* isolate = I::GetIsolateForSandbox(ctx);
return reinterpret_cast<void*>(
I::ReadExternalPointerField<internal::kEmbedderDataSlotPayloadTag>(
isolate, embedder_data, value_offset));
#else
return SlowGetAlignedPointerFromEmbedderData(index);
#endif
}
template <class T>
MaybeLocal<T> Context::GetDataFromSnapshotOnce(size_t index) {
auto slot = GetDataFromSnapshotOnce(index);
if (slot) {
internal::PerformCastCheck(internal::ValueHelper::SlotAsValue<T>(slot));
}
return Local<T>::FromSlot(slot);
}
Context* Context::Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Context*>(data);
}
} // namespace v8
#endif // INCLUDE_V8_CONTEXT_H_
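Context::Scope above is a textbook RAII guard: Enter() runs in the constructor and Exit() in the destructor, so the previous context is restored on every exit path, including early returns and exceptions. A mock of the same pattern in standalone C++ (`MockContext` and `ScopeGuard` are illustrative stand-ins, not V8 types):

```cpp
#include <cassert>

// Stand-in for v8::Context's Enter()/Exit() bookkeeping.
struct MockContext {
  int enter_count = 0;
  int exit_count = 0;
  void Enter() { ++enter_count; }
  void Exit() { ++exit_count; }
};

// Mirrors v8::Context::Scope: enter on construction, exit on destruction.
class ScopeGuard {
 public:
  explicit ScopeGuard(MockContext* context) : context_(context) {
    context_->Enter();
  }
  ~ScopeGuard() { context_->Exit(); }
  // Copying would double the Exit() call, so forbid it.
  ScopeGuard(const ScopeGuard&) = delete;
  ScopeGuard& operator=(const ScopeGuard&) = delete;

 private:
  MockContext* context_;
};
```

The stack-allocated guard guarantees Enter()/Exit() always pair up, which is why the real class is marked V8_NODISCARD: a discarded temporary would enter and exit the context immediately.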

@ -12,10 +12,10 @@
#include "cppgc/common.h"
#include "cppgc/custom-space.h"
#include "cppgc/heap-statistics.h"
#include "cppgc/internal/write-barrier.h"
#include "cppgc/visitor.h"
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8.h" // NOLINT(build/include_directory)
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-platform.h" // NOLINT(build/include_directory)
#include "v8-traced-handle.h" // NOLINT(build/include_directory)
namespace cppgc {
class AllocationHandle;
@ -24,10 +24,14 @@ class HeapHandle;
namespace v8 {
class Object;
namespace internal {
class CppHeap;
} // namespace internal
class CustomSpaceStatisticsReceiver;
/**
* Describes how V8 wrapper objects maintain references to garbage-collected C++
* objects.
@ -73,15 +77,37 @@ struct WrapperDescriptor final {
};
struct V8_EXPORT CppHeapCreateParams {
CppHeapCreateParams(
std::vector<std::unique_ptr<cppgc::CustomSpaceBase>> custom_spaces,
WrapperDescriptor wrapper_descriptor)
: custom_spaces(std::move(custom_spaces)),
wrapper_descriptor(wrapper_descriptor) {}
CppHeapCreateParams(const CppHeapCreateParams&) = delete;
CppHeapCreateParams& operator=(const CppHeapCreateParams&) = delete;
std::vector<std::unique_ptr<cppgc::CustomSpaceBase>> custom_spaces;
WrapperDescriptor wrapper_descriptor;
/**
* Specifies which kinds of marking are supported by the heap. The type may be
* further reduced via runtime flags when attaching the heap to an Isolate.
*/
cppgc::Heap::MarkingType marking_support =
cppgc::Heap::MarkingType::kIncrementalAndConcurrent;
/**
* Specifies which kind of sweeping is supported by the heap. The type may be
* further reduced via runtime flags when attaching the heap to an Isolate.
*/
cppgc::Heap::SweepingType sweeping_support =
cppgc::Heap::SweepingType::kIncrementalAndConcurrent;
};
/**
* A heap for allocating managed C++ objects.
*
* Similar to v8::Isolate, the heap may only be accessed from one thread at a
* time. The heap may be used from different threads using the
* v8::Locker/v8::Unlocker APIs which is different from generic Oilpan.
*/
class V8_EXPORT CppHeap {
public:
@ -119,6 +145,16 @@ class V8_EXPORT CppHeap {
cppgc::HeapStatistics CollectStatistics(
cppgc::HeapStatistics::DetailLevel detail_level);
/**
* Collects statistics for the given spaces and reports them to the receiver.
*
* \param custom_spaces a collection of custom space indices.
* \param receiver an object that gets the results.
*/
void CollectCustomSpaceStatisticsAtLastGC(
std::vector<cppgc::CustomSpaceIndex> custom_spaces,
std::unique_ptr<CustomSpaceStatisticsReceiver> receiver);
/**
* Enables a detached mode that allows testing garbage collection using
* `cppgc::testing` APIs. Once used, the heap cannot be attached to an
@ -133,6 +169,14 @@ class V8_EXPORT CppHeap {
*/
void CollectGarbageForTesting(cppgc::EmbedderStackState stack_state);
/**
* Performs a stop-the-world minor garbage collection for testing purposes.
*
* \param stack_state The stack state to assume for the garbage collection.
*/
void CollectGarbageInYoungGenerationForTesting(
cppgc::EmbedderStackState stack_state);
private:
CppHeap() = default;
@ -142,6 +186,7 @@ class V8_EXPORT CppHeap {
class JSVisitor : public cppgc::Visitor {
public:
explicit JSVisitor(cppgc::Visitor::Key key) : cppgc::Visitor(key) {}
~JSVisitor() override = default;
void Trace(const TracedReferenceBase& ref) {
if (ref.IsEmptyThreadSafe()) return;
@ -155,126 +200,23 @@ class JSVisitor : public cppgc::Visitor {
};
/**
* **DO NOT USE: Use the appropriate managed types.**
* Provided as input to `CppHeap::CollectCustomSpaceStatisticsAtLastGC()`.
*
* Consistency helpers that aid in maintaining a consistent internal state of
* the garbage collector.
* Its method is invoked with the results of the statistic collection.
*/
class V8_EXPORT JSHeapConsistency final {
class CustomSpaceStatisticsReceiver {
public:
using WriteBarrierParams = cppgc::internal::WriteBarrier::Params;
using WriteBarrierType = cppgc::internal::WriteBarrier::Type;
virtual ~CustomSpaceStatisticsReceiver() = default;
/**
* Gets the required write barrier type for a specific write.
* Reports the size of a space at the last GC. It is called for each space
* that was requested in `CollectCustomSpaceStatisticsAtLastGC()`.
*
* Note: Handling for C++ to JS references.
*
* \param ref The reference being written to.
* \param params Parameters that may be used for actual write barrier calls.
* Only filled if return value indicates that a write barrier is needed. The
* contents of the `params` are an implementation detail.
* \param callback Callback returning the corresponding heap handle. The
* callback is only invoked if the heap cannot otherwise be figured out. The
* callback must not allocate.
* \returns whether a write barrier is needed and which barrier to invoke.
* \param space_index The index of the space.
* \param bytes The total size of live objects in the space at the last GC.
* It is zero if there was no GC yet.
*/
template <typename HeapHandleCallback>
static V8_INLINE WriteBarrierType
GetWriteBarrierType(const TracedReferenceBase& ref,
WriteBarrierParams& params, HeapHandleCallback callback) {
if (ref.IsEmpty()) return WriteBarrierType::kNone;
if (V8_LIKELY(!cppgc::internal::WriteBarrier::
IsAnyIncrementalOrConcurrentMarking())) {
return cppgc::internal::WriteBarrier::Type::kNone;
}
cppgc::HeapHandle& handle = callback();
if (!cppgc::subtle::HeapState::IsMarking(handle)) {
return cppgc::internal::WriteBarrier::Type::kNone;
}
params.heap = &handle;
#if V8_ENABLE_CHECKS
params.type = cppgc::internal::WriteBarrier::Type::kMarking;
#endif // !V8_ENABLE_CHECKS
return cppgc::internal::WriteBarrier::Type::kMarking;
}
/**
* Gets the required write barrier type for a specific write.
*
* Note: Handling for JS to C++ references.
*
* \param wrapper The wrapper that has been written into.
* \param wrapper_index The wrapper index in `wrapper` that has been written
* into.
* \param wrappable The value that was written.
* \param params Parameters that may be used for actual write barrier calls.
* Only filled if return value indicates that a write barrier is needed. The
* contents of the `params` are an implementation detail.
* \param callback Callback returning the corresponding heap handle. The
* callback is only invoked if the heap cannot otherwise be figured out. The
* callback must not allocate.
* \returns whether a write barrier is needed and which barrier to invoke.
*/
template <typename HeapHandleCallback>
static V8_INLINE WriteBarrierType GetWriteBarrierType(
v8::Local<v8::Object>& wrapper, int wrapper_index, const void* wrappable,
WriteBarrierParams& params, HeapHandleCallback callback) {
#if V8_ENABLE_CHECKS
CheckWrapper(wrapper, wrapper_index, wrappable);
#endif // V8_ENABLE_CHECKS
return cppgc::internal::WriteBarrier::
GetWriteBarrierTypeForExternallyReferencedObject(wrappable, params,
callback);
}
/**
* Conservative Dijkstra-style write barrier that processes an object if it
* has not yet been processed.
*
* \param params The parameters retrieved from `GetWriteBarrierType()`.
* \param ref The reference being written to.
*/
static V8_INLINE void DijkstraMarkingBarrier(const WriteBarrierParams& params,
cppgc::HeapHandle& heap_handle,
const TracedReferenceBase& ref) {
cppgc::internal::WriteBarrier::CheckParams(WriteBarrierType::kMarking,
params);
DijkstraMarkingBarrierSlow(heap_handle, ref);
}
/**
* Conservative Dijkstra-style write barrier that processes an object if it
* has not yet been processed.
*
* \param params The parameters retrieved from `GetWriteBarrierType()`.
* \param object The pointer to the object. May be an interior pointer to a
* an interface of the actual object.
*/
static V8_INLINE void DijkstraMarkingBarrier(const WriteBarrierParams& params,
cppgc::HeapHandle& heap_handle,
const void* object) {
cppgc::internal::WriteBarrier::DijkstraMarkingBarrier(params, object);
}
/**
* Generational barrier for maintaining consistency when running with multiple
* generations.
*
* \param params The parameters retrieved from `GetWriteBarrierType()`.
* \param ref The reference being written to.
*/
static V8_INLINE void GenerationalBarrier(const WriteBarrierParams& params,
const TracedReferenceBase& ref) {}
private:
JSHeapConsistency() = delete;
static void CheckWrapper(v8::Local<v8::Object>&, int, const void*);
static void DijkstraMarkingBarrierSlow(cppgc::HeapHandle&,
const TracedReferenceBase& ref);
virtual void AllocatedBytes(cppgc::CustomSpaceIndex space_index,
size_t bytes) = 0;
};
} // namespace v8
@ -283,8 +225,13 @@ namespace cppgc {
template <typename T>
struct TraceTrait<v8::TracedReference<T>> {
static void Trace(Visitor* visitor, const v8::TracedReference<T>* self) {
static_cast<v8::JSVisitor*>(visitor)->Trace(*self);
static cppgc::TraceDescriptor GetTraceDescriptor(const void* self) {
return {nullptr, Trace};
}
static void Trace(Visitor* visitor, const void* self) {
static_cast<v8::JSVisitor*>(visitor)->Trace(
*static_cast<const v8::TracedReference<T>*>(self));
}
};
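The updated TraceTrait specialization above type-erases the object pointer: GetTraceDescriptor pairs a `const void*` with a Trace callback that casts back to the concrete type before visiting it. The same type-erasure idiom in standalone C++ (names such as `Descriptor` and `Traceable` are illustrative, not cppgc types):

```cpp
#include <cassert>

// Type-erased trace entry, mirroring the shape of cppgc's TraceDescriptor:
// an untyped payload plus a function that knows the concrete type.
struct Descriptor {
  const void* self;
  void (*trace)(const void* self, int* visited_count);
};

template <typename T>
struct Traceable {
  T value;
  // The callback recovers the concrete type from the erased pointer.
  static void Trace(const void* self, int* visited_count) {
    const auto* obj = static_cast<const Traceable<T>*>(self);
    (void)obj->value;  // a real visitor would walk obj's fields here
    ++*visited_count;
  }
  Descriptor GetDescriptor() const { return {this, &Trace}; }
};
```

Because the descriptor carries both the pointer and the matching callback, a visitor can trace heterogeneous objects through one uniform interface.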

@ -0,0 +1,80 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_DATA_H_
#define INCLUDE_V8_DATA_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
/**
* The superclass of objects that can reside on V8's heap.
*/
class V8_EXPORT Data {
public:
/**
* Returns true if this data is a |v8::Value|.
*/
bool IsValue() const;
/**
* Returns true if this data is a |v8::Module|.
*/
bool IsModule() const;
/**
* Returns true if this data is a |v8::FixedArray|.
*/
bool IsFixedArray() const;
/**
* Returns true if this data is a |v8::Private|.
*/
bool IsPrivate() const;
/**
* Returns true if this data is a |v8::ObjectTemplate|.
*/
bool IsObjectTemplate() const;
/**
* Returns true if this data is a |v8::FunctionTemplate|.
*/
bool IsFunctionTemplate() const;
/**
* Returns true if this data is a |v8::Context|.
*/
bool IsContext() const;
private:
Data() = delete;
};
/**
* A fixed-sized array with elements of type Data.
*/
class V8_EXPORT FixedArray : public Data {
public:
int Length() const;
Local<Data> Get(Local<Context> context, int i) const;
V8_INLINE static FixedArray* Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return reinterpret_cast<FixedArray*>(data);
}
private:
static void CheckCast(Data* obj);
};
} // namespace v8
#endif // INCLUDE_V8_DATA_H_

@ -0,0 +1,48 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_DATE_H_
#define INCLUDE_V8_DATE_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
/**
* An instance of the built-in Date constructor (ECMA-262, 15.9).
*/
class V8_EXPORT Date : public Object {
public:
static V8_WARN_UNUSED_RESULT MaybeLocal<Value> New(Local<Context> context,
double time);
/**
* A specialization of Value::NumberValue that is more efficient
* because we know the structure of this object.
*/
double ValueOf() const;
/**
* Generates ISO string representation.
*/
v8::Local<v8::String> ToISOString() const;
V8_INLINE static Date* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Date*>(value);
}
private:
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_DATE_H_

@ -0,0 +1,168 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_DEBUG_H_
#define INCLUDE_V8_DEBUG_H_
#include <stdint.h>
#include "v8-script.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Isolate;
class String;
/**
* A single JavaScript stack frame.
*/
class V8_EXPORT StackFrame {
public:
/**
* Returns the source location, 0-based, for the associated function call.
*/
Location GetLocation() const;
/**
* Returns the number, 1-based, of the line for the associated function call.
* This method will return Message::kNoLineNumberInfo if it is unable to
* retrieve the line number, or if kLineNumber was not passed as an option
* when capturing the StackTrace.
*/
int GetLineNumber() const { return GetLocation().GetLineNumber() + 1; }
/**
* Returns the 1-based column offset on the line for the associated function
* call.
* This method will return Message::kNoColumnInfo if it is unable to retrieve
* the column number, or if kColumnOffset was not passed as an option when
* capturing the StackTrace.
*/
int GetColumn() const { return GetLocation().GetColumnNumber() + 1; }
/**
* Returns the id of the script for the function for this StackFrame.
* This method will return Message::kNoScriptIdInfo if it is unable to
* retrieve the script id, or if kScriptId was not passed as an option when
* capturing the StackTrace.
*/
int GetScriptId() const;
/**
* Returns the name of the resource that contains the script for the
* function for this StackFrame.
*/
Local<String> GetScriptName() const;
/**
* Returns the name of the resource that contains the script for the
* function for this StackFrame or sourceURL value if the script name
* is undefined and its source ends with //# sourceURL=... string or
* deprecated //@ sourceURL=... string.
*/
Local<String> GetScriptNameOrSourceURL() const;
/**
* Returns the source of the script for the function for this StackFrame.
*/
Local<String> GetScriptSource() const;
/**
* Returns the source mapping URL (if one is present) of the script for
* the function for this StackFrame.
*/
Local<String> GetScriptSourceMappingURL() const;
/**
* Returns the name of the function associated with this stack frame.
*/
Local<String> GetFunctionName() const;
/**
* Returns whether or not the associated function is compiled via a call to
* eval().
*/
bool IsEval() const;
/**
* Returns whether or not the associated function is called as a
* constructor via "new".
*/
bool IsConstructor() const;
/**
* Returns whether or not the associated function is defined in wasm.
*/
bool IsWasm() const;
/**
* Returns whether or not the associated function is defined by the user.
*/
bool IsUserJavaScript() const;
};
/**
* Representation of a JavaScript stack trace. The information collected is a
* snapshot of the execution stack and the information remains valid after
* execution continues.
*/
class V8_EXPORT StackTrace {
public:
/**
* Flags that determine what information is captured for each
* StackFrame when grabbing the current stack trace.
* Note: these options are deprecated and we always collect all available
* information (kDetailed).
*/
enum StackTraceOptions {
kLineNumber = 1,
kColumnOffset = 1 << 1 | kLineNumber,
kScriptName = 1 << 2,
kFunctionName = 1 << 3,
kIsEval = 1 << 4,
kIsConstructor = 1 << 5,
kScriptNameOrSourceURL = 1 << 6,
kScriptId = 1 << 7,
kExposeFramesAcrossSecurityOrigins = 1 << 8,
kOverview = kLineNumber | kColumnOffset | kScriptName | kFunctionName,
kDetailed = kOverview | kIsEval | kIsConstructor | kScriptNameOrSourceURL
};
/**
* Returns a StackFrame at a particular index.
*/
Local<StackFrame> GetFrame(Isolate* isolate, uint32_t index) const;
/**
* Returns the number of StackFrames.
*/
int GetFrameCount() const;
/**
* Grab a snapshot of the current JavaScript execution stack.
*
* \param frame_limit The maximum number of stack frames we want to capture.
* \param options Enumerates the set of things we will capture for each
* StackFrame.
*/
static Local<StackTrace> CurrentStackTrace(
Isolate* isolate, int frame_limit, StackTraceOptions options = kDetailed);
/**
* Returns the first valid script name or source URL starting at the top of
   * the JS stack. The result is either an empty handle, if no script name/url
   * was found, or a non-zero-length string.
*
* This method is equivalent to calling StackTrace::CurrentStackTrace and
* walking the resulting frames from the beginning until a non-empty script
* name/url is found. The difference is that this method won't allocate
* a stack trace.
*/
static Local<String> CurrentScriptNameOrSourceURL(Isolate* isolate);
};
} // namespace v8
#endif // INCLUDE_V8_DEBUG_H_


@ -0,0 +1,66 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_EMBEDDER_HEAP_H_
#define INCLUDE_V8_EMBEDDER_HEAP_H_
#include "v8-traced-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Isolate;
class Value;
/**
* Handler for embedder roots on non-unified heap garbage collections.
*/
class V8_EXPORT EmbedderRootsHandler {
public:
virtual ~EmbedderRootsHandler() = default;
/**
* Returns true if the |TracedReference| handle should be considered as root
* for the currently running non-tracing garbage collection and false
* otherwise. The default implementation will keep all |TracedReference|
* references as roots.
*
* If this returns false, then V8 may decide that the object referred to by
* such a handle is reclaimed. In that case, V8 calls |ResetRoot()| for the
* |TracedReference|.
*
* Note that the `handle` is different from the handle that the embedder holds
* for retaining the object. The embedder may use |WrapperClassId()| to
* distinguish cases where it wants handles to be treated as roots from not
* being treated as roots.
*
* The concrete implementations must be thread-safe.
*/
virtual bool IsRoot(const v8::TracedReference<v8::Value>& handle) = 0;
/**
* Used in combination with |IsRoot|. Called by V8 when an
* object that is backed by a handle is reclaimed by a non-tracing garbage
* collection. It is up to the embedder to reset the original handle.
*
* Note that the |handle| is different from the handle that the embedder holds
* for retaining the object. It is up to the embedder to find the original
* handle via the object or class id.
*/
virtual void ResetRoot(const v8::TracedReference<v8::Value>& handle) = 0;
/**
* Similar to |ResetRoot()|, but opportunistic. The function is called in
   * parallel for different handles and as such must be thread-safe. If
   * |false| is returned, |ResetRoot()| will be called again for the same handle.
*/
virtual bool TryResetRoot(const v8::TracedReference<v8::Value>& handle) {
ResetRoot(handle);
return true;
}
};
} // namespace v8
#endif // INCLUDE_V8_EMBEDDER_HEAP_H_


@ -0,0 +1,51 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_EMBEDDER_STATE_SCOPE_H_
#define INCLUDE_V8_EMBEDDER_STATE_SCOPE_H_
#include <memory>
#include "v8-context.h" // NOLINT(build/include_directory)
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
namespace v8 {
namespace internal {
class EmbedderState;
} // namespace internal
// A StateTag represents a possible state of the embedder.
enum class EmbedderStateTag : uint8_t {
// reserved
EMPTY = 0,
OTHER = 1,
// embedder can define any state after
};
// A stack-allocated class that manages an embedder state on the isolate.
// After an EmbedderState scope has been created, a new embedder state will be
// pushed on the isolate stack.
class V8_EXPORT EmbedderStateScope {
public:
EmbedderStateScope(Isolate* isolate, Local<v8::Context> context,
EmbedderStateTag tag);
~EmbedderStateScope();
private:
// Declaring operator new and delete as deleted is not spec compliant.
// Therefore declare them private instead to disable dynamic alloc
void* operator new(size_t size);
void* operator new[](size_t size);
void operator delete(void*, size_t);
void operator delete[](void*, size_t);
std::unique_ptr<internal::EmbedderState> embedder_state_;
};
} // namespace v8
#endif // INCLUDE_V8_EMBEDDER_STATE_SCOPE_H_


@ -0,0 +1,217 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_EXCEPTION_H_
#define INCLUDE_V8_EXCEPTION_H_
#include <stddef.h>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Isolate;
class Message;
class StackTrace;
class String;
class Value;
namespace internal {
class Isolate;
class ThreadLocalTop;
} // namespace internal
/**
* Create new error objects by calling the corresponding error object
* constructor with the message.
*/
class V8_EXPORT Exception {
public:
static Local<Value> RangeError(Local<String> message);
static Local<Value> ReferenceError(Local<String> message);
static Local<Value> SyntaxError(Local<String> message);
static Local<Value> TypeError(Local<String> message);
static Local<Value> WasmCompileError(Local<String> message);
static Local<Value> WasmLinkError(Local<String> message);
static Local<Value> WasmRuntimeError(Local<String> message);
static Local<Value> Error(Local<String> message);
/**
* Creates an error message for the given exception.
* Will try to reconstruct the original stack trace from the exception value,
* or capture the current stack trace if not available.
*/
static Local<Message> CreateMessage(Isolate* isolate, Local<Value> exception);
/**
* Returns the original stack trace that was captured at the creation time
* of a given exception, or an empty handle if not available.
*/
static Local<StackTrace> GetStackTrace(Local<Value> exception);
};
/**
* An external exception handler.
*/
class V8_EXPORT TryCatch {
public:
/**
* Creates a new try/catch block and registers it with v8. Note that
* all TryCatch blocks should be stack allocated because the memory
* location itself is compared against JavaScript try/catch blocks.
*/
explicit TryCatch(Isolate* isolate);
/**
* Unregisters and deletes this try/catch block.
*/
~TryCatch();
/**
* Returns true if an exception has been caught by this try/catch block.
*/
bool HasCaught() const;
/**
* For certain types of exceptions, it makes no sense to continue execution.
*
* If CanContinue returns false, the correct action is to perform any C++
* cleanup needed and then return. If CanContinue returns false and
* HasTerminated returns true, it is possible to call
* CancelTerminateExecution in order to continue calling into the engine.
*/
bool CanContinue() const;
/**
* Returns true if an exception has been caught due to script execution
* being terminated.
*
* There is no JavaScript representation of an execution termination
* exception. Such exceptions are thrown when the TerminateExecution
* methods are called to terminate a long-running script.
*
* If such an exception has been thrown, HasTerminated will return true,
* indicating that it is possible to call CancelTerminateExecution in order
* to continue calling into the engine.
*/
bool HasTerminated() const;
/**
* Throws the exception caught by this TryCatch in a way that avoids
* it being caught again by this same TryCatch. As with ThrowException
* it is illegal to execute any JavaScript operations after calling
* ReThrow; the caller must return immediately to where the exception
* is caught.
*/
Local<Value> ReThrow();
/**
* Returns the exception caught by this try/catch block. If no exception has
* been caught an empty handle is returned.
*/
Local<Value> Exception() const;
/**
* Returns the .stack property of an object. If no .stack
* property is present an empty handle is returned.
*/
V8_WARN_UNUSED_RESULT static MaybeLocal<Value> StackTrace(
Local<Context> context, Local<Value> exception);
/**
* Returns the .stack property of the thrown object. If no .stack property is
* present or if this try/catch block has not caught an exception, an empty
* handle is returned.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> StackTrace(
Local<Context> context) const;
/**
* Returns the message associated with this exception. If there is
* no message associated an empty handle is returned.
*/
Local<v8::Message> Message() const;
/**
* Clears any exceptions that may have been caught by this try/catch block.
* After this method has been called, HasCaught() will return false. Cancels
* the scheduled exception if it is caught and ReThrow() is not called before.
*
* It is not necessary to clear a try/catch block before using it again; if
* another exception is thrown the previously caught exception will just be
* overwritten. However, it is often a good idea since it makes it easier
* to determine which operation threw a given exception.
*/
void Reset();
/**
* Set verbosity of the external exception handler.
*
* By default, exceptions that are caught by an external exception
* handler are not reported. Call SetVerbose with true on an
* external exception handler to have exceptions caught by the
* handler reported as if they were not caught.
*/
void SetVerbose(bool value);
/**
* Returns true if verbosity is enabled.
*/
bool IsVerbose() const;
/**
* Set whether or not this TryCatch should capture a Message object
* which holds source information about where the exception
* occurred. True by default.
*/
void SetCaptureMessage(bool value);
TryCatch(const TryCatch&) = delete;
void operator=(const TryCatch&) = delete;
private:
// Declaring operator new and delete as deleted is not spec compliant.
// Therefore declare them private instead to disable dynamic alloc
void* operator new(size_t size);
void* operator new[](size_t size);
void operator delete(void*, size_t);
void operator delete[](void*, size_t);
/**
* There are cases when the raw address of C++ TryCatch object cannot be
* used for comparisons with addresses into the JS stack. The cases are:
* 1) ARM, ARM64 and MIPS simulators which have separate JS stack.
* 2) Address sanitizer allocates local C++ object in the heap when
* UseAfterReturn mode is enabled.
* This method returns address that can be used for comparisons with
* addresses into the JS stack. When neither simulator nor ASAN's
* UseAfterReturn is enabled, then the address returned will be the address
* of the C++ try catch handler itself.
*/
internal::Address JSStackComparableAddressPrivate() {
return js_stack_comparable_address_;
}
void ResetInternal();
internal::Isolate* i_isolate_;
TryCatch* next_;
void* exception_;
void* message_obj_;
internal::Address js_stack_comparable_address_;
bool is_verbose_ : 1;
bool can_continue_ : 1;
bool capture_message_ : 1;
bool rethrow_ : 1;
bool has_terminated_ : 1;
friend class internal::Isolate;
friend class internal::ThreadLocalTop;
};
} // namespace v8
#endif // INCLUDE_V8_EXCEPTION_H_


@ -0,0 +1,62 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_EXTENSION_H_
#define INCLUDE_V8_EXTENSION_H_
#include <memory>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-primitive.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class FunctionTemplate;
// --- Extensions ---
/**
* Ignore
*/
class V8_EXPORT Extension {
public:
// Note that the strings passed into this constructor must live as long
// as the Extension itself.
Extension(const char* name, const char* source = nullptr, int dep_count = 0,
const char** deps = nullptr, int source_length = -1);
virtual ~Extension() { delete source_; }
virtual Local<FunctionTemplate> GetNativeFunctionTemplate(
Isolate* isolate, Local<String> name) {
return Local<FunctionTemplate>();
}
const char* name() const { return name_; }
size_t source_length() const { return source_length_; }
const String::ExternalOneByteStringResource* source() const {
return source_;
}
int dependency_count() const { return dep_count_; }
const char** dependencies() const { return deps_; }
void set_auto_enable(bool value) { auto_enable_ = value; }
bool auto_enable() { return auto_enable_; }
// Disallow copying and assigning.
Extension(const Extension&) = delete;
void operator=(const Extension&) = delete;
private:
const char* name_;
  size_t source_length_;  // expected to be initialized before source_
String::ExternalOneByteStringResource* source_;
int dep_count_;
const char** deps_;
bool auto_enable_;
};
void V8_EXPORT RegisterExtension(std::unique_ptr<Extension>);
} // namespace v8
#endif // INCLUDE_V8_EXTENSION_H_


@ -0,0 +1,37 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_EXTERNAL_H_
#define INCLUDE_V8_EXTERNAL_H_
#include "v8-value.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Isolate;
/**
* A JavaScript value that wraps a C++ void*. This type of value is mainly used
* to associate C++ data structures with JavaScript objects.
*/
class V8_EXPORT External : public Value {
public:
static Local<External> New(Isolate* isolate, void* value);
V8_INLINE static External* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<External*>(value);
}
void* Value() const;
private:
static void CheckCast(v8::Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_EXTERNAL_H_


@ -70,8 +70,7 @@
* return GetInternalField<CustomEmbedderType,
* kV8EmbedderWrapperObjectIndex>(wrapper);
* }
* static void FastMethod(v8::ApiObject receiver_obj, int param) {
* v8::Object* v8_object = reinterpret_cast<v8::Object*>(&api_object);
* static void FastMethod(v8::Local<v8::Object> receiver_obj, int param) {
* CustomEmbedderType* receiver = static_cast<CustomEmbedderType*>(
* receiver_obj->GetAlignedPointerFromInternalField(
* kV8EmbedderWrapperObjectIndex));
@ -157,6 +156,7 @@
* - float64_t
* Currently supported argument types:
* - pointer to an embedder type
* - JavaScript array of primitive types
* - bool
* - int32_t
* - uint32_t
@ -177,8 +177,43 @@
* passes NaN values as-is, i.e. doesn't normalize them.
*
* To be supported types:
* - arrays of C types
* - TypedArrays and ArrayBuffers
* - arrays of embedder types
*
*
* The API offers a limited support for function overloads:
*
* \code
* void FastMethod_2Args(int param, bool another_param);
* void FastMethod_3Args(int param, bool another_param, int third_param);
*
* v8::CFunction fast_method_2args_c_func =
* MakeV8CFunction(FastMethod_2Args);
* v8::CFunction fast_method_3args_c_func =
* MakeV8CFunction(FastMethod_3Args);
* const v8::CFunction fast_method_overloads[] = {fast_method_2args_c_func,
* fast_method_3args_c_func};
* Local<v8::FunctionTemplate> method_template =
* v8::FunctionTemplate::NewWithCFunctionOverloads(
* isolate, SlowCallback, data, signature, length,
* constructor_behavior, side_effect_type,
* {fast_method_overloads, 2});
* \endcode
*
* In this example a single FunctionTemplate is associated to multiple C++
* functions. The overload resolution is currently only based on the number of
* arguments passed in a call. For example, if this method_template is
* registered with a wrapper JS object as described above, a call with two
* arguments:
* obj.method(42, true);
* will result in a fast call to FastMethod_2Args, while a call with three or
* more arguments:
* obj.method(42, true, 11);
 * will result in a fast call to FastMethod_3Args. In contrast, a call with
 * fewer than two arguments, such as:
* obj.method(42);
* would not result in a fast call but would fall back to executing the
* associated SlowCallback.
*/
#ifndef INCLUDE_V8_FAST_API_CALLS_H_
@ -190,22 +225,42 @@
#include <tuple>
#include <type_traits>
#include "v8config.h" // NOLINT(build/include_directory)
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-typed-array.h" // NOLINT(build/include_directory)
#include "v8-value.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Isolate;
class CTypeInfo {
public:
enum class Type : uint8_t {
kVoid,
kBool,
kUint8,
kInt32,
kUint32,
kInt64,
kUint64,
kFloat32,
kFloat64,
kPointer,
kV8Value,
kSeqOneByteString,
kApiObject, // This will be deprecated once all users have
// migrated from v8::ApiObject to v8::Local<v8::Value>.
kAny, // This is added to enable untyped representation of fast
// call arguments for test purposes. It can represent any of
// the other types stored in the same memory as a union (see
// the AnyCType struct declared below). This allows for
// uniform passing of arguments w.r.t. their location
// (in a register or on the stack), independent of their
// actual type. It's currently used by the arm64 simulator
// and can be added to the other simulators as well when fast
// calls having both GP and FP params need to be supported.
};
// kCallbackOptionsType is not part of the Type enum
@ -213,31 +268,139 @@ class CTypeInfo {
// than any valid Type enum.
static constexpr Type kCallbackOptionsType = Type(255);
enum class Flags : uint8_t {
kNone = 0,
enum class SequenceType : uint8_t {
kScalar,
kIsSequence, // sequence<T>
kIsTypedArray, // TypedArray of T or any ArrayBufferView if T
// is void
kIsArrayBuffer // ArrayBuffer
};
explicit constexpr CTypeInfo(Type type, Flags flags = Flags::kNone)
: type_(type), flags_(flags) {}
enum class Flags : uint8_t {
kNone = 0,
kAllowSharedBit = 1 << 0, // Must be an ArrayBuffer or TypedArray
kEnforceRangeBit = 1 << 1, // T must be integral
kClampBit = 1 << 2, // T must be integral
kIsRestrictedBit = 1 << 3, // T must be float or double
};
explicit constexpr CTypeInfo(
Type type, SequenceType sequence_type = SequenceType::kScalar,
Flags flags = Flags::kNone)
: type_(type), sequence_type_(sequence_type), flags_(flags) {}
typedef uint32_t Identifier;
explicit constexpr CTypeInfo(Identifier identifier)
: CTypeInfo(static_cast<Type>(identifier >> 16),
static_cast<SequenceType>((identifier >> 8) & 255),
static_cast<Flags>(identifier & 255)) {}
constexpr Identifier GetId() const {
return static_cast<uint8_t>(type_) << 16 |
static_cast<uint8_t>(sequence_type_) << 8 |
static_cast<uint8_t>(flags_);
}
constexpr Type GetType() const { return type_; }
constexpr SequenceType GetSequenceType() const { return sequence_type_; }
constexpr Flags GetFlags() const { return flags_; }
static constexpr bool IsIntegralType(Type type) {
return type == Type::kUint8 || type == Type::kInt32 ||
type == Type::kUint32 || type == Type::kInt64 ||
type == Type::kUint64;
}
static constexpr bool IsFloatingPointType(Type type) {
return type == Type::kFloat32 || type == Type::kFloat64;
}
static constexpr bool IsPrimitive(Type type) {
return IsIntegralType(type) || IsFloatingPointType(type) ||
type == Type::kBool;
}
private:
Type type_;
SequenceType sequence_type_;
Flags flags_;
};
struct FastApiTypedArrayBase {
public:
// Returns the length in number of elements.
size_t V8_EXPORT length() const { return length_; }
// Checks whether the given index is within the bounds of the collection.
void V8_EXPORT ValidateIndex(size_t index) const;
protected:
size_t length_ = 0;
};
template <typename T>
struct FastApiTypedArray : public FastApiTypedArrayBase {
public:
V8_INLINE T get(size_t index) const {
#ifdef DEBUG
ValidateIndex(index);
#endif // DEBUG
T tmp;
memcpy(&tmp, reinterpret_cast<T*>(data_) + index, sizeof(T));
return tmp;
}
bool getStorageIfAligned(T** elements) const {
if (reinterpret_cast<uintptr_t>(data_) % alignof(T) != 0) {
return false;
}
*elements = reinterpret_cast<T*>(data_);
return true;
}
private:
// This pointer should include the typed array offset applied.
// It's not guaranteed that it's aligned to sizeof(T), it's only
// guaranteed that it's 4-byte aligned, so for 8-byte types we need to
// provide a special implementation for reading from it, which hides
// the possibly unaligned read in the `get` method.
void* data_;
};
// Any TypedArray. It uses kTypedArrayBit with base type void
// Overloaded args of ArrayBufferView and TypedArray are not supported
// (for now) because the generic "any" ArrayBufferView doesn't have its
// own instance type. It could be supported if we specify that
// TypedArray<T> always has precedence over the generic ArrayBufferView,
// but this complicates overload resolution.
struct FastApiArrayBufferView {
void* data;
size_t byte_length;
};
struct FastApiArrayBuffer {
void* data;
size_t byte_length;
};
struct FastOneByteString {
const char* data;
uint32_t length;
};
class V8_EXPORT CFunctionInfo {
public:
enum class Int64Representation : uint8_t {
kNumber = 0, // Use numbers to represent 64 bit integers.
kBigInt = 1, // Use BigInts to represent 64 bit integers.
};
// Construct a struct to hold a CFunction's type information.
// |return_info| describes the function's return type.
// |arg_info| is an array of |arg_count| CTypeInfos describing the
// arguments. Only the last argument may be of the special type
// CTypeInfo::kCallbackOptionsType.
CFunctionInfo(const CTypeInfo& return_info, unsigned int arg_count,
const CTypeInfo* arg_info);
const CTypeInfo* arg_info,
Int64Representation repr = Int64Representation::kNumber);
const CTypeInfo& ReturnInfo() const { return return_info_; }
@ -247,6 +410,8 @@ class V8_EXPORT CFunctionInfo {
return HasOptions() ? arg_count_ - 1 : arg_count_;
}
Int64Representation GetInt64Representation() const { return repr_; }
// |index| must be less than ArgumentCount().
// Note: if the last argument passed on construction of CFunctionInfo
// has type CTypeInfo::kCallbackOptionsType, it is not included in
@ -261,10 +426,45 @@ class V8_EXPORT CFunctionInfo {
private:
const CTypeInfo return_info_;
const Int64Representation repr_;
const unsigned int arg_count_;
const CTypeInfo* arg_info_;
};
struct FastApiCallbackOptions;
// Provided for testing.
struct AnyCType {
AnyCType() : int64_value(0) {}
union {
bool bool_value;
int32_t int32_value;
uint32_t uint32_value;
int64_t int64_value;
uint64_t uint64_value;
float float_value;
double double_value;
void* pointer_value;
Local<Object> object_value;
Local<Array> sequence_value;
const FastApiTypedArray<uint8_t>* uint8_ta_value;
const FastApiTypedArray<int32_t>* int32_ta_value;
const FastApiTypedArray<uint32_t>* uint32_ta_value;
const FastApiTypedArray<int64_t>* int64_ta_value;
const FastApiTypedArray<uint64_t>* uint64_ta_value;
const FastApiTypedArray<float>* float_ta_value;
const FastApiTypedArray<double>* double_ta_value;
const FastOneByteString* string_value;
FastApiCallbackOptions* options_value;
};
};
static_assert(
sizeof(AnyCType) == 8,
"The AnyCType struct should have size == 64 bits, as this is assumed "
"by EffectControlLinearizer.");
class V8_EXPORT CFunction {
public:
constexpr CFunction() : address_(nullptr), type_info_(nullptr) {}
@ -278,17 +478,63 @@ class V8_EXPORT CFunction {
unsigned int ArgumentCount() const { return type_info_->ArgumentCount(); }
const void* GetAddress() const { return address_; }
CFunctionInfo::Int64Representation GetInt64Representation() const {
return type_info_->GetInt64Representation();
}
const CFunctionInfo* GetTypeInfo() const { return type_info_; }
enum class OverloadResolution { kImpossible, kAtRuntime, kAtCompileTime };
// Returns whether an overload between this and the given CFunction can
// be resolved at runtime by the RTTI available for the arguments or at
// compile time for functions with different number of arguments.
OverloadResolution GetOverloadResolution(const CFunction* other) {
// Runtime overload resolution can only deal with functions with the
// same number of arguments. Functions with different arity are handled
// by compile time overload resolution though.
if (ArgumentCount() != other->ArgumentCount()) {
return OverloadResolution::kAtCompileTime;
}
// The functions can only differ by a single argument position.
int diff_index = -1;
for (unsigned int i = 0; i < ArgumentCount(); ++i) {
if (ArgumentInfo(i).GetSequenceType() !=
other->ArgumentInfo(i).GetSequenceType()) {
if (diff_index >= 0) {
return OverloadResolution::kImpossible;
}
diff_index = i;
// We only support overload resolution between sequence types.
if (ArgumentInfo(i).GetSequenceType() ==
CTypeInfo::SequenceType::kScalar ||
other->ArgumentInfo(i).GetSequenceType() ==
CTypeInfo::SequenceType::kScalar) {
return OverloadResolution::kImpossible;
}
}
}
return OverloadResolution::kAtRuntime;
}
template <typename F>
static CFunction Make(F* func) {
return ArgUnwrap<F*>::Make(func);
}
template <typename F>
V8_DEPRECATED("Use CFunctionBuilder instead.")
static CFunction MakeWithFallbackSupport(F* func) {
return ArgUnwrap<F*>::Make(func);
// Provided for testing purposes.
template <typename R, typename... Args, typename R_Patch,
typename... Args_Patch>
static CFunction Make(R (*func)(Args...),
R_Patch (*patching_func)(Args_Patch...)) {
CFunction c_func = ArgUnwrap<R (*)(Args...)>::Make(func);
static_assert(
sizeof...(Args_Patch) == sizeof...(Args),
"The patching function must have the same number of arguments.");
c_func.address_ = reinterpret_cast<void*>(patching_func);
return c_func;
}
CFunction(const void* address, const CFunctionInfo* type_info);
@ -310,10 +556,6 @@ class V8_EXPORT CFunction {
};
};
struct ApiObject {
uintptr_t address;
};
/**
* A struct which may be passed to a fast call callback, like so:
* \code
@ -321,6 +563,14 @@ struct ApiObject {
* \endcode
*/
struct FastApiCallbackOptions {
/**
* Creates a new instance of FastApiCallbackOptions for testing purpose. The
* returned instance may be filled with mock data.
*/
static FastApiCallbackOptions CreateForTesting(Isolate* isolate) {
return {false, {0}, nullptr};
}
/**
* If the callback wants to signal an error condition or to perform an
* allocation, it must set options.fallback to true and do an early return
@ -336,8 +586,17 @@ struct FastApiCallbackOptions {
/**
* The `data` passed to the FunctionTemplate constructor, or `undefined`.
* `data_ptr` allows for default constructing FastApiCallbackOptions.
*/
const ApiObject data;
union {
uintptr_t data_ptr;
v8::Local<v8::Value> data;
};
/**
* When called from WebAssembly, a view of the calling module's memory.
*/
FastApiTypedArray<uint8_t>* const wasm_memory;
};
namespace internal {
@ -351,7 +610,8 @@ struct count<T, T, Args...>
template <typename T, typename U, typename... Args>
struct count<T, U, Args...> : count<T, Args...> {};
template <typename RetBuilder, typename... ArgBuilders>
template <CFunctionInfo::Int64Representation Representation,
typename RetBuilder, typename... ArgBuilders>
class CFunctionInfoImpl : public CFunctionInfo {
static constexpr int kOptionsArgCount =
count<FastApiCallbackOptions&, ArgBuilders...>();
@ -366,16 +626,20 @@ class CFunctionInfoImpl : public CFunctionInfo {
public:
constexpr CFunctionInfoImpl()
: CFunctionInfo(RetBuilder::Build(), sizeof...(ArgBuilders),
arg_info_storage_),
arg_info_storage_, Representation),
arg_info_storage_{ArgBuilders::Build()...} {
constexpr CTypeInfo::Type kReturnType = RetBuilder::Build().GetType();
static_assert(kReturnType == CTypeInfo::Type::kVoid ||
kReturnType == CTypeInfo::Type::kBool ||
kReturnType == CTypeInfo::Type::kInt32 ||
kReturnType == CTypeInfo::Type::kUint32 ||
kReturnType == CTypeInfo::Type::kInt64 ||
kReturnType == CTypeInfo::Type::kUint64 ||
kReturnType == CTypeInfo::Type::kFloat32 ||
kReturnType == CTypeInfo::Type::kFloat64,
"64-bit int and api object values are not currently "
kReturnType == CTypeInfo::Type::kFloat64 ||
kReturnType == CTypeInfo::Type::kPointer ||
kReturnType == CTypeInfo::Type::kAny,
"String and api object values are not currently "
"supported return types.");
}
@ -396,22 +660,94 @@ struct TypeInfoHelper {
} \
\
static constexpr CTypeInfo::Type Type() { return CTypeInfo::Type::Enum; } \
static constexpr CTypeInfo::SequenceType SequenceType() { \
return CTypeInfo::SequenceType::kScalar; \
} \
};
#define BASIC_C_TYPES(V) \
V(void, kVoid) \
V(bool, kBool) \
V(int32_t, kInt32) \
V(uint32_t, kUint32) \
V(int64_t, kInt64) \
V(uint64_t, kUint64) \
V(float, kFloat32) \
V(double, kFloat64) \
V(ApiObject, kV8Value)
template <CTypeInfo::Type type>
struct CTypeInfoTraits {};
BASIC_C_TYPES(SPECIALIZE_GET_TYPE_INFO_HELPER_FOR)
#define DEFINE_TYPE_INFO_TRAITS(CType, Enum) \
template <> \
struct CTypeInfoTraits<CTypeInfo::Type::Enum> { \
using ctype = CType; \
};
#undef BASIC_C_TYPES
#define PRIMITIVE_C_TYPES(V) \
V(bool, kBool) \
V(uint8_t, kUint8) \
V(int32_t, kInt32) \
V(uint32_t, kUint32) \
V(int64_t, kInt64) \
V(uint64_t, kUint64) \
V(float, kFloat32) \
V(double, kFloat64) \
V(void*, kPointer)
// Same as above, but includes deprecated types for compatibility.
#define ALL_C_TYPES(V) \
PRIMITIVE_C_TYPES(V) \
V(void, kVoid) \
V(v8::Local<v8::Value>, kV8Value) \
V(v8::Local<v8::Object>, kV8Value) \
V(AnyCType, kAny)
// ApiObject was a temporary solution to wrap the pointer to the v8::Value.
// Please use v8::Local<v8::Value> in new code for the arguments and
// v8::Local<v8::Object> for the receiver, as ApiObject will be deprecated.
ALL_C_TYPES(SPECIALIZE_GET_TYPE_INFO_HELPER_FOR)
PRIMITIVE_C_TYPES(DEFINE_TYPE_INFO_TRAITS)
#undef PRIMITIVE_C_TYPES
#undef ALL_C_TYPES
#define SPECIALIZE_GET_TYPE_INFO_HELPER_FOR_TA(T, Enum) \
template <> \
struct TypeInfoHelper<const FastApiTypedArray<T>&> { \
static constexpr CTypeInfo::Flags Flags() { \
return CTypeInfo::Flags::kNone; \
} \
\
static constexpr CTypeInfo::Type Type() { return CTypeInfo::Type::Enum; } \
static constexpr CTypeInfo::SequenceType SequenceType() { \
return CTypeInfo::SequenceType::kIsTypedArray; \
} \
};
#define TYPED_ARRAY_C_TYPES(V) \
V(uint8_t, kUint8) \
V(int32_t, kInt32) \
V(uint32_t, kUint32) \
V(int64_t, kInt64) \
V(uint64_t, kUint64) \
V(float, kFloat32) \
V(double, kFloat64)
TYPED_ARRAY_C_TYPES(SPECIALIZE_GET_TYPE_INFO_HELPER_FOR_TA)
#undef TYPED_ARRAY_C_TYPES
template <>
struct TypeInfoHelper<v8::Local<v8::Array>> {
static constexpr CTypeInfo::Flags Flags() { return CTypeInfo::Flags::kNone; }
static constexpr CTypeInfo::Type Type() { return CTypeInfo::Type::kVoid; }
static constexpr CTypeInfo::SequenceType SequenceType() {
return CTypeInfo::SequenceType::kIsSequence;
}
};
template <>
struct TypeInfoHelper<v8::Local<v8::Uint32Array>> {
static constexpr CTypeInfo::Flags Flags() { return CTypeInfo::Flags::kNone; }
static constexpr CTypeInfo::Type Type() { return CTypeInfo::Type::kUint32; }
static constexpr CTypeInfo::SequenceType SequenceType() {
return CTypeInfo::SequenceType::kIsTypedArray;
}
};
template <>
struct TypeInfoHelper<FastApiCallbackOptions&> {
@ -420,28 +756,80 @@ struct TypeInfoHelper<FastApiCallbackOptions&> {
static constexpr CTypeInfo::Type Type() {
return CTypeInfo::kCallbackOptionsType;
}
static constexpr CTypeInfo::SequenceType SequenceType() {
return CTypeInfo::SequenceType::kScalar;
}
};
template <>
struct TypeInfoHelper<const FastOneByteString&> {
static constexpr CTypeInfo::Flags Flags() { return CTypeInfo::Flags::kNone; }
static constexpr CTypeInfo::Type Type() {
return CTypeInfo::Type::kSeqOneByteString;
}
static constexpr CTypeInfo::SequenceType SequenceType() {
return CTypeInfo::SequenceType::kScalar;
}
};
#define STATIC_ASSERT_IMPLIES(COND, ASSERTION, MSG) \
static_assert(((COND) == 0) || (ASSERTION), MSG)
} // namespace internal
template <typename T, CTypeInfo::Flags... Flags>
class CTypeInfoBuilder {
class V8_EXPORT CTypeInfoBuilder {
public:
using BaseType = T;
static constexpr CTypeInfo Build() {
constexpr CTypeInfo::Flags kFlags =
MergeFlags(internal::TypeInfoHelper<T>::Flags(), Flags...);
constexpr CTypeInfo::Type kType = internal::TypeInfoHelper<T>::Type();
constexpr CTypeInfo::SequenceType kSequenceType =
internal::TypeInfoHelper<T>::SequenceType();
STATIC_ASSERT_IMPLIES(
uint8_t(kFlags) & uint8_t(CTypeInfo::Flags::kAllowSharedBit),
(kSequenceType == CTypeInfo::SequenceType::kIsTypedArray ||
kSequenceType == CTypeInfo::SequenceType::kIsArrayBuffer),
"kAllowSharedBit is only allowed for TypedArrays and ArrayBuffers.");
STATIC_ASSERT_IMPLIES(
uint8_t(kFlags) & uint8_t(CTypeInfo::Flags::kEnforceRangeBit),
CTypeInfo::IsIntegralType(kType),
"kEnforceRangeBit is only allowed for integral types.");
STATIC_ASSERT_IMPLIES(
uint8_t(kFlags) & uint8_t(CTypeInfo::Flags::kClampBit),
CTypeInfo::IsIntegralType(kType),
"kClampBit is only allowed for integral types.");
STATIC_ASSERT_IMPLIES(
uint8_t(kFlags) & uint8_t(CTypeInfo::Flags::kIsRestrictedBit),
CTypeInfo::IsFloatingPointType(kType),
"kIsRestrictedBit is only allowed for floating point types.");
STATIC_ASSERT_IMPLIES(kSequenceType == CTypeInfo::SequenceType::kIsSequence,
kType == CTypeInfo::Type::kVoid,
"Sequences are only supported from void type.");
STATIC_ASSERT_IMPLIES(
kSequenceType == CTypeInfo::SequenceType::kIsTypedArray,
CTypeInfo::IsPrimitive(kType) || kType == CTypeInfo::Type::kVoid,
"TypedArrays are only supported from primitive types or void.");
// Return the same type with the merged flags.
return CTypeInfo(internal::TypeInfoHelper<T>::Type(),
internal::TypeInfoHelper<T>::SequenceType(), kFlags);
}
private:
template <typename... Rest>
static constexpr CTypeInfo::Flags MergeFlags(CTypeInfo::Flags flags,
Rest... rest) {
return CTypeInfo::Flags(uint8_t(flags) | uint8_t(MergeFlags(rest...)));
}
static constexpr CTypeInfo::Flags MergeFlags() { return CTypeInfo::Flags(0); }
};
namespace internal {
template <typename RetBuilder, typename... ArgBuilders>
class CFunctionBuilderWithFunction {
public:
std::make_index_sequence<sizeof...(ArgBuilders)>());
}
// Provided for testing purposes.
template <typename Ret, typename... Args>
auto Patch(Ret (*patching_func)(Args...)) {
static_assert(
sizeof...(Args) == sizeof...(ArgBuilders),
"The patching function must have the same number of arguments.");
fn_ = reinterpret_cast<void*>(patching_func);
return *this;
}
template <CFunctionInfo::Int64Representation Representation =
CFunctionInfo::Int64Representation::kNumber>
auto Build() {
static CFunctionInfoImpl<Representation, RetBuilder, ArgBuilders...>
instance;
return CFunction(fn_, &instance);
}
Flags...>;
};
// Return a copy of the CFunctionBuilder, but merges the Flags on
// ArgBuilder index N with the new Flags passed in the template parameter
// pack.
template <unsigned int N, CTypeInfo::Flags... Flags, size_t... I>
constexpr auto ArgImpl(std::index_sequence<I...>) {
return CFunctionBuilderWithFunction<
using CFunctionBuilder = internal::CFunctionBuilder;
static constexpr CTypeInfo kTypeInfoInt32 = CTypeInfo(CTypeInfo::Type::kInt32);
static constexpr CTypeInfo kTypeInfoFloat64 =
CTypeInfo(CTypeInfo::Type::kFloat64);
/**
* Copies the contents of this JavaScript array to a C++ buffer with
* a given max_length. A CTypeInfo is passed as an argument,
* instructing different rules for conversion (e.g. restricted float/double).
* The element type T of the destination array must match the C type
* corresponding to the CTypeInfo (specified by CTypeInfoTraits).
* If the array length is larger than max_length or the array is of
* unsupported type, the operation will fail, returning false. Generally, an
* array which contains objects, undefined, null or anything not convertible
* to the requested destination type, is considered unsupported. The operation
* returns true on success. `type_info` will be used for conversions.
*/
template <CTypeInfo::Identifier type_info_id, typename T>
bool V8_EXPORT V8_WARN_UNUSED_RESULT TryToCopyAndConvertArrayToCppBuffer(
Local<Array> src, T* dst, uint32_t max_length);
template <>
bool V8_EXPORT V8_WARN_UNUSED_RESULT
TryToCopyAndConvertArrayToCppBuffer<CTypeInfoBuilder<int32_t>::Build().GetId(),
int32_t>(Local<Array> src, int32_t* dst,
uint32_t max_length);
template <>
bool V8_EXPORT V8_WARN_UNUSED_RESULT
TryToCopyAndConvertArrayToCppBuffer<CTypeInfoBuilder<uint32_t>::Build().GetId(),
uint32_t>(Local<Array> src, uint32_t* dst,
uint32_t max_length);
template <>
bool V8_EXPORT V8_WARN_UNUSED_RESULT
TryToCopyAndConvertArrayToCppBuffer<CTypeInfoBuilder<float>::Build().GetId(),
float>(Local<Array> src, float* dst,
uint32_t max_length);
template <>
bool V8_EXPORT V8_WARN_UNUSED_RESULT
TryToCopyAndConvertArrayToCppBuffer<CTypeInfoBuilder<double>::Build().GetId(),
double>(Local<Array> src, double* dst,
uint32_t max_length);
} // namespace v8
#endif // INCLUDE_V8_FAST_API_CALLS_H_

// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_FORWARD_H_
#define INCLUDE_V8_FORWARD_H_
// This header is intended to be used by headers that pass around V8 types,
// either by pointer or using Local<Type>. The full definitions can be included
// either via v8.h or the more fine-grained headers.
#include "v8-local-handle.h" // NOLINT(build/include_directory)
namespace v8 {
class AccessorSignature;
class Array;
class ArrayBuffer;
class ArrayBufferView;
class BigInt;
class BigInt64Array;
class BigIntObject;
class BigUint64Array;
class Boolean;
class BooleanObject;
class Context;
class DataView;
class Data;
class Date;
class Extension;
class External;
class FixedArray;
class Float32Array;
class Float64Array;
class Function;
template <class F>
class FunctionCallbackInfo;
class FunctionTemplate;
class Int16Array;
class Int32;
class Int32Array;
class Int8Array;
class Integer;
class Isolate;
class Map;
class Module;
class Name;
class Number;
class NumberObject;
class Object;
class ObjectTemplate;
class Platform;
class Primitive;
class Private;
class Promise;
class Proxy;
class RegExp;
class Script;
class Set;
class SharedArrayBuffer;
class Signature;
class String;
class StringObject;
class Symbol;
class SymbolObject;
class Template;
class TryCatch;
class TypedArray;
class Uint16Array;
class Uint32;
class Uint32Array;
class Uint8Array;
class Uint8ClampedArray;
class UnboundModuleScript;
class Value;
class WasmMemoryObject;
class WasmModuleObject;
} // namespace v8
#endif // INCLUDE_V8_FORWARD_H_

// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_FUNCTION_CALLBACK_H_
#define INCLUDE_V8_FUNCTION_CALLBACK_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-primitive.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
template <typename T>
class BasicTracedReference;
template <typename T>
class Global;
class Object;
class Value;
namespace internal {
class FunctionCallbackArguments;
class PropertyCallbackArguments;
class Builtins;
} // namespace internal
namespace debug {
class ConsoleCallArguments;
} // namespace debug
template <typename T>
class ReturnValue {
public:
template <class S>
V8_INLINE ReturnValue(const ReturnValue<S>& that) : value_(that.value_) {
static_assert(std::is_base_of<T, S>::value, "type check");
}
// Local setters
template <typename S>
V8_INLINE void Set(const Global<S>& handle);
template <typename S>
V8_INLINE void Set(const BasicTracedReference<S>& handle);
template <typename S>
V8_INLINE void Set(const Local<S> handle);
// Fast primitive setters
V8_INLINE void Set(bool value);
V8_INLINE void Set(double i);
V8_INLINE void Set(int32_t i);
V8_INLINE void Set(uint32_t i);
// Fast JS primitive setters
V8_INLINE void SetNull();
V8_INLINE void SetUndefined();
V8_INLINE void SetEmptyString();
// Convenience getter for Isolate
V8_INLINE Isolate* GetIsolate() const;
// Pointer setter: Uncompilable to prevent inadvertent misuse.
template <typename S>
V8_INLINE void Set(S* whatever);
// Getter. Creates a new Local<> so it comes with a certain performance
// hit. If the ReturnValue was not yet set, this will return the undefined
// value.
V8_INLINE Local<Value> Get() const;
private:
template <class F>
friend class ReturnValue;
template <class F>
friend class FunctionCallbackInfo;
template <class F>
friend class PropertyCallbackInfo;
template <class F, class G, class H>
friend class PersistentValueMapBase;
V8_INLINE void SetInternal(internal::Address value) { *value_ = value; }
V8_INLINE internal::Address GetDefaultValue();
V8_INLINE explicit ReturnValue(internal::Address* slot);
// See FunctionCallbackInfo.
static constexpr int kIsolateValueIndex = -2;
internal::Address* value_;
};
/**
* The argument information given to function call callbacks. This
* class provides access to information about the context of the call,
* including the receiver, the number and values of arguments, and
* the holder of the function.
*/
template <typename T>
class FunctionCallbackInfo {
public:
/** The number of available arguments. */
V8_INLINE int Length() const;
/**
* Accessor for the available arguments. Returns `undefined` if the index
* is out of bounds.
*/
V8_INLINE Local<Value> operator[](int i) const;
/** Returns the receiver. This corresponds to the "this" value. */
V8_INLINE Local<Object> This() const;
/**
* If the callback was created without a Signature, this is the same
* value as This(). If there is a signature, and the signature didn't match
* This() but one of its hidden prototypes, this will be the respective
* hidden prototype.
*
* Note that this is not the prototype of This() on which the accessor
* referencing this callback was found (which in V8 internally is often
* referred to as holder [sic]).
*/
V8_INLINE Local<Object> Holder() const;
/** For construct calls, this returns the "new.target" value. */
V8_INLINE Local<Value> NewTarget() const;
/** Indicates whether this is a regular call or a construct call. */
V8_INLINE bool IsConstructCall() const;
/** The data argument specified when creating the callback. */
V8_INLINE Local<Value> Data() const;
/** The current Isolate. */
V8_INLINE Isolate* GetIsolate() const;
/** The ReturnValue for the call. */
V8_INLINE ReturnValue<T> GetReturnValue() const;
private:
friend class internal::FunctionCallbackArguments;
friend class internal::CustomArguments<FunctionCallbackInfo>;
friend class debug::ConsoleCallArguments;
friend class internal::Builtins;
static constexpr int kHolderIndex = 0;
static constexpr int kIsolateIndex = 1;
static constexpr int kUnusedIndex = 2;
static constexpr int kReturnValueIndex = 3;
static constexpr int kDataIndex = 4;
static constexpr int kNewTargetIndex = 5;
static constexpr int kArgsLength = 6;
static constexpr int kArgsLengthWithReceiver = kArgsLength + 1;
// Codegen constants:
static constexpr int kSize = 3 * internal::kApiSystemPointerSize;
static constexpr int kImplicitArgsOffset = 0;
static constexpr int kValuesOffset =
kImplicitArgsOffset + internal::kApiSystemPointerSize;
static constexpr int kLengthOffset =
kValuesOffset + internal::kApiSystemPointerSize;
static constexpr int kThisValuesIndex = -1;
static_assert(ReturnValue<Value>::kIsolateValueIndex ==
kIsolateIndex - kReturnValueIndex);
V8_INLINE FunctionCallbackInfo(internal::Address* implicit_args,
internal::Address* values, int length);
internal::Address* implicit_args_;
internal::Address* values_;
int length_;
};
/**
* The information passed to a property callback about the context
* of the property access.
*/
template <typename T>
class PropertyCallbackInfo {
public:
/**
* \return The isolate of the property access.
*/
V8_INLINE Isolate* GetIsolate() const;
/**
* \return The data set in the configuration, i.e., in
* `NamedPropertyHandlerConfiguration` or
* `IndexedPropertyHandlerConfiguration.`
*/
V8_INLINE Local<Value> Data() const;
/**
* \return The receiver. In many cases, this is the object on which the
* property access was intercepted. When using
* `Reflect.get`, `Function.prototype.call`, or similar functions, it is the
* object passed in as receiver or thisArg.
*
* \code
* void GetterCallback(Local<Name> name,
* const v8::PropertyCallbackInfo<v8::Value>& info) {
* auto context = info.GetIsolate()->GetCurrentContext();
*
* v8::Local<v8::Value> a_this =
* info.This()
* ->GetRealNamedProperty(context, v8_str("a"))
* .ToLocalChecked();
* v8::Local<v8::Value> a_holder =
* info.Holder()
* ->GetRealNamedProperty(context, v8_str("a"))
* .ToLocalChecked();
*
* CHECK(v8_str("r")->Equals(context, a_this).FromJust());
* CHECK(v8_str("obj")->Equals(context, a_holder).FromJust());
*
* info.GetReturnValue().Set(name);
* }
*
* v8::Local<v8::FunctionTemplate> templ =
* v8::FunctionTemplate::New(isolate);
* templ->InstanceTemplate()->SetHandler(
* v8::NamedPropertyHandlerConfiguration(GetterCallback));
* LocalContext env;
* env->Global()
* ->Set(env.local(), v8_str("obj"), templ->GetFunction(env.local())
* .ToLocalChecked()
* ->NewInstance(env.local())
* .ToLocalChecked())
* .FromJust();
*
* CompileRun("obj.a = 'obj'; var r = {a: 'r'}; Reflect.get(obj, 'x', r)");
* \endcode
*/
V8_INLINE Local<Object> This() const;
/**
* \return The object in the prototype chain of the receiver that has the
* interceptor. Suppose you have `x` and its prototype is `y`, and `y`
* has an interceptor. Then `info.This()` is `x` and `info.Holder()` is `y`.
* The Holder() could be a hidden object (the global object, rather
* than the global proxy).
*
* \note For security reasons, do not pass the object back into the runtime.
*/
V8_INLINE Local<Object> Holder() const;
/**
* \return The return value of the callback.
* Can be changed by calling Set().
* \code
* info.GetReturnValue().Set(...)
* \endcode
*
*/
V8_INLINE ReturnValue<T> GetReturnValue() const;
/**
* \return True if the intercepted function should throw if an error occurs.
* Usually, `true` corresponds to `'use strict'`.
*
* \note Always `false` when intercepting `Reflect.set()`
* independent of the language mode.
*/
V8_INLINE bool ShouldThrowOnError() const;
private:
friend class MacroAssembler;
friend class internal::PropertyCallbackArguments;
friend class internal::CustomArguments<PropertyCallbackInfo>;
static constexpr int kShouldThrowOnErrorIndex = 0;
static constexpr int kHolderIndex = 1;
static constexpr int kIsolateIndex = 2;
static constexpr int kUnusedIndex = 3;
static constexpr int kReturnValueIndex = 4;
static constexpr int kDataIndex = 5;
static constexpr int kThisIndex = 6;
static constexpr int kArgsLength = 7;
static constexpr int kSize = 1 * internal::kApiSystemPointerSize;
V8_INLINE explicit PropertyCallbackInfo(internal::Address* args)
: args_(args) {}
internal::Address* args_;
};
using FunctionCallback = void (*)(const FunctionCallbackInfo<Value>& info);
// --- Implementation ---
template <typename T>
ReturnValue<T>::ReturnValue(internal::Address* slot) : value_(slot) {}
template <typename T>
template <typename S>
void ReturnValue<T>::Set(const Global<S>& handle) {
static_assert(std::is_base_of<T, S>::value, "type check");
if (V8_UNLIKELY(handle.IsEmpty())) {
*value_ = GetDefaultValue();
} else {
*value_ = handle.ptr();
}
}
template <typename T>
template <typename S>
void ReturnValue<T>::Set(const BasicTracedReference<S>& handle) {
static_assert(std::is_base_of<T, S>::value, "type check");
if (V8_UNLIKELY(handle.IsEmpty())) {
*value_ = GetDefaultValue();
} else {
*value_ = handle.ptr();
}
}
template <typename T>
template <typename S>
void ReturnValue<T>::Set(const Local<S> handle) {
static_assert(std::is_void<T>::value || std::is_base_of<T, S>::value,
"type check");
if (V8_UNLIKELY(handle.IsEmpty())) {
*value_ = GetDefaultValue();
} else {
*value_ = handle.ptr();
}
}
template <typename T>
void ReturnValue<T>::Set(double i) {
static_assert(std::is_base_of<T, Number>::value, "type check");
Set(Number::New(GetIsolate(), i));
}
template <typename T>
void ReturnValue<T>::Set(int32_t i) {
static_assert(std::is_base_of<T, Integer>::value, "type check");
using I = internal::Internals;
if (V8_LIKELY(I::IsValidSmi(i))) {
*value_ = I::IntToSmi(i);
return;
}
Set(Integer::New(GetIsolate(), i));
}
template <typename T>
void ReturnValue<T>::Set(uint32_t i) {
static_assert(std::is_base_of<T, Integer>::value, "type check");
// Can't simply use INT32_MAX here for whatever reason.
bool fits_into_int32_t = (i & (1U << 31)) == 0;
if (V8_LIKELY(fits_into_int32_t)) {
Set(static_cast<int32_t>(i));
return;
}
Set(Integer::NewFromUnsigned(GetIsolate(), i));
}
template <typename T>
void ReturnValue<T>::Set(bool value) {
static_assert(std::is_base_of<T, Boolean>::value, "type check");
using I = internal::Internals;
int root_index;
if (value) {
root_index = I::kTrueValueRootIndex;
} else {
root_index = I::kFalseValueRootIndex;
}
*value_ = I::GetRoot(GetIsolate(), root_index);
}
template <typename T>
void ReturnValue<T>::SetNull() {
static_assert(std::is_base_of<T, Primitive>::value, "type check");
using I = internal::Internals;
*value_ = I::GetRoot(GetIsolate(), I::kNullValueRootIndex);
}
template <typename T>
void ReturnValue<T>::SetUndefined() {
static_assert(std::is_base_of<T, Primitive>::value, "type check");
using I = internal::Internals;
*value_ = I::GetRoot(GetIsolate(), I::kUndefinedValueRootIndex);
}
template <typename T>
void ReturnValue<T>::SetEmptyString() {
static_assert(std::is_base_of<T, String>::value, "type check");
using I = internal::Internals;
*value_ = I::GetRoot(GetIsolate(), I::kEmptyStringRootIndex);
}
template <typename T>
Isolate* ReturnValue<T>::GetIsolate() const {
return *reinterpret_cast<Isolate**>(&value_[kIsolateValueIndex]);
}
template <typename T>
Local<Value> ReturnValue<T>::Get() const {
using I = internal::Internals;
#if V8_STATIC_ROOTS_BOOL
if (I::is_identical(*value_, I::StaticReadOnlyRoot::kTheHoleValue)) {
#else
if (*value_ == I::GetRoot(GetIsolate(), I::kTheHoleValueRootIndex)) {
#endif
return Undefined(GetIsolate());
}
return Local<Value>::New(GetIsolate(), reinterpret_cast<Value*>(value_));
}
template <typename T>
template <typename S>
void ReturnValue<T>::Set(S* whatever) {
static_assert(sizeof(S) < 0, "incompilable to prevent inadvertent misuse");
}
template <typename T>
internal::Address ReturnValue<T>::GetDefaultValue() {
using I = internal::Internals;
return I::GetRoot(GetIsolate(), I::kTheHoleValueRootIndex);
}
template <typename T>
FunctionCallbackInfo<T>::FunctionCallbackInfo(internal::Address* implicit_args,
internal::Address* values,
int length)
: implicit_args_(implicit_args), values_(values), length_(length) {}
template <typename T>
Local<Value> FunctionCallbackInfo<T>::operator[](int i) const {
// values_ points to the first argument (not the receiver).
if (i < 0 || length_ <= i) return Undefined(GetIsolate());
return Local<Value>::FromSlot(values_ + i);
}
template <typename T>
Local<Object> FunctionCallbackInfo<T>::This() const {
// values_ points to the first argument (not the receiver).
return Local<Object>::FromSlot(values_ + kThisValuesIndex);
}
template <typename T>
Local<Object> FunctionCallbackInfo<T>::Holder() const {
return Local<Object>::FromSlot(&implicit_args_[kHolderIndex]);
}
template <typename T>
Local<Value> FunctionCallbackInfo<T>::NewTarget() const {
return Local<Value>::FromSlot(&implicit_args_[kNewTargetIndex]);
}
template <typename T>
Local<Value> FunctionCallbackInfo<T>::Data() const {
return Local<Value>::FromSlot(&implicit_args_[kDataIndex]);
}
template <typename T>
Isolate* FunctionCallbackInfo<T>::GetIsolate() const {
return *reinterpret_cast<Isolate**>(&implicit_args_[kIsolateIndex]);
}
template <typename T>
ReturnValue<T> FunctionCallbackInfo<T>::GetReturnValue() const {
return ReturnValue<T>(&implicit_args_[kReturnValueIndex]);
}
template <typename T>
bool FunctionCallbackInfo<T>::IsConstructCall() const {
return !NewTarget()->IsUndefined();
}
template <typename T>
int FunctionCallbackInfo<T>::Length() const {
return length_;
}
template <typename T>
Isolate* PropertyCallbackInfo<T>::GetIsolate() const {
return *reinterpret_cast<Isolate**>(&args_[kIsolateIndex]);
}
template <typename T>
Local<Value> PropertyCallbackInfo<T>::Data() const {
return Local<Value>::FromSlot(&args_[kDataIndex]);
}
template <typename T>
Local<Object> PropertyCallbackInfo<T>::This() const {
return Local<Object>::FromSlot(&args_[kThisIndex]);
}
template <typename T>
Local<Object> PropertyCallbackInfo<T>::Holder() const {
return Local<Object>::FromSlot(&args_[kHolderIndex]);
}
template <typename T>
ReturnValue<T> PropertyCallbackInfo<T>::GetReturnValue() const {
return ReturnValue<T>(&args_[kReturnValueIndex]);
}
template <typename T>
bool PropertyCallbackInfo<T>::ShouldThrowOnError() const {
using I = internal::Internals;
if (args_[kShouldThrowOnErrorIndex] !=
I::IntToSmi(I::kInferShouldThrowMode)) {
return args_[kShouldThrowOnErrorIndex] != I::IntToSmi(I::kDontThrow);
}
return v8::internal::ShouldThrowOnError(
reinterpret_cast<v8::internal::Isolate*>(GetIsolate()));
}
} // namespace v8
#endif // INCLUDE_V8_FUNCTION_CALLBACK_H_

// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_FUNCTION_H_
#define INCLUDE_V8_FUNCTION_H_
#include <stddef.h>
#include <stdint.h>
#include "v8-function-callback.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-message.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8-template.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class UnboundScript;
/**
* A JavaScript function object (ECMA-262, 15.3).
*/
class V8_EXPORT Function : public Object {
public:
/**
* Create a function in the current execution context
* for a given FunctionCallback.
*/
static MaybeLocal<Function> New(
Local<Context> context, FunctionCallback callback,
Local<Value> data = Local<Value>(), int length = 0,
ConstructorBehavior behavior = ConstructorBehavior::kAllow,
SideEffectType side_effect_type = SideEffectType::kHasSideEffect);
V8_WARN_UNUSED_RESULT MaybeLocal<Object> NewInstance(
Local<Context> context, int argc, Local<Value> argv[]) const;
V8_WARN_UNUSED_RESULT MaybeLocal<Object> NewInstance(
Local<Context> context) const {
return NewInstance(context, 0, nullptr);
}
/**
* When side effect checks are enabled, passing kHasNoSideEffect allows the
* constructor to be invoked without throwing. Calls made within the
* constructor are still checked.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Object> NewInstanceWithSideEffectType(
Local<Context> context, int argc, Local<Value> argv[],
SideEffectType side_effect_type = SideEffectType::kHasSideEffect) const;
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Call(Local<Context> context,
Local<Value> recv, int argc,
Local<Value> argv[]);
void SetName(Local<String> name);
Local<Value> GetName() const;
V8_DEPRECATED("No direct replacement")
MaybeLocal<UnboundScript> GetUnboundScript() const;
/**
* Name inferred from variable or property assignment of this function.
* Used to facilitate debugging and profiling of JavaScript code written
* in an OO style, where many functions are anonymous but are assigned
* to object properties.
*/
Local<Value> GetInferredName() const;
/**
* displayName if it is set, otherwise name if it is configured, otherwise
* function name, otherwise inferred name.
*/
Local<Value> GetDebugName() const;
/**
* Returns zero based line number of function body and
* kLineOffsetNotFound if no information available.
*/
int GetScriptLineNumber() const;
/**
* Returns zero based column number of function body and
* kLineOffsetNotFound if no information available.
*/
int GetScriptColumnNumber() const;
/**
* Returns scriptId.
*/
int ScriptId() const;
/**
* Returns the original function if this function is bound, else returns
* v8::Undefined.
*/
Local<Value> GetBoundFunction() const;
/**
* Calls builtin Function.prototype.toString on this function.
* This is different from Value::ToString() that may call a user-defined
* toString() function, and different than Object::ObjectProtoToString() which
* always serializes "[object Function]".
*/
V8_WARN_UNUSED_RESULT MaybeLocal<String> FunctionProtoToString(
Local<Context> context);
/**
* Returns true if the function does nothing.
* The function returns false on error.
* Note that this function is experimental. Embedders should not rely on
* this existing. We may remove this function in the future.
*/
V8_WARN_UNUSED_RESULT bool Experimental_IsNopFunction() const;
ScriptOrigin GetScriptOrigin() const;
V8_INLINE static Function* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Function*>(value);
}
static const int kLineOffsetNotFound;
private:
Function();
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_FUNCTION_H_

// Copyright 2023 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_HANDLE_BASE_H_
#define INCLUDE_V8_HANDLE_BASE_H_
#include "v8-internal.h" // NOLINT(build/include_directory)
namespace v8 {
namespace internal {
// Helper functions about values contained in handles.
// A value is either an indirect pointer or a direct pointer, depending on
// whether direct local support is enabled.
class ValueHelper final {
public:
#ifdef V8_ENABLE_DIRECT_LOCAL
static constexpr Address kTaggedNullAddress = 1;
static constexpr Address kEmpty = kTaggedNullAddress;
#else
static constexpr Address kEmpty = kNullAddress;
#endif // V8_ENABLE_DIRECT_LOCAL
template <typename T>
V8_INLINE static bool IsEmpty(T* value) {
return reinterpret_cast<Address>(value) == kEmpty;
}
// Returns a handle's "value" for all kinds of abstract handles. For Local,
// it is equivalent to `*handle`. The variadic parameters support handle
// types with extra type parameters, like `Persistent<T, M>`.
template <template <typename T, typename... Ms> typename H, typename T,
typename... Ms>
V8_INLINE static T* HandleAsValue(const H<T, Ms...>& handle) {
return handle.template value<T>();
}
#ifdef V8_ENABLE_DIRECT_LOCAL
template <typename T>
V8_INLINE static Address ValueAsAddress(const T* value) {
return reinterpret_cast<Address>(value);
}
template <typename T, typename S>
V8_INLINE static T* SlotAsValue(S* slot) {
return *reinterpret_cast<T**>(slot);
}
#else // !V8_ENABLE_DIRECT_LOCAL
template <typename T>
V8_INLINE static Address ValueAsAddress(const T* value) {
return *reinterpret_cast<const Address*>(value);
}
template <typename T, typename S>
V8_INLINE static T* SlotAsValue(S* slot) {
return reinterpret_cast<T*>(slot);
}
#endif // V8_ENABLE_DIRECT_LOCAL
};
/**
* Helper functions about handles.
*/
class HandleHelper final {
public:
/**
* Checks whether two handles are equal.
* They are equal iff they are both empty or they are both non-empty and the
* objects to which they refer are physically equal.
*
* If both handles refer to JS objects, this is the same as strict equality.
* For primitives, such as numbers or strings, a `false` return value does not
* indicate that the values aren't equal in the JavaScript sense.
* Use `Value::StrictEquals()` to check primitives for equality.
*/
template <typename T1, typename T2>
V8_INLINE static bool EqualHandles(const T1& lhs, const T2& rhs) {
if (lhs.IsEmpty()) return rhs.IsEmpty();
if (rhs.IsEmpty()) return false;
return lhs.ptr() == rhs.ptr();
}
};
} // namespace internal
/**
* A base class for abstract handles containing indirect pointers.
* These are useful regardless of whether direct local support is enabled.
*/
class IndirectHandleBase {
public:
// Returns true if the handle is empty.
V8_INLINE bool IsEmpty() const { return location_ == nullptr; }
// Sets the handle to be empty. IsEmpty() will then return true.
V8_INLINE void Clear() { location_ = nullptr; }
protected:
friend class internal::ValueHelper;
friend class internal::HandleHelper;
V8_INLINE IndirectHandleBase() = default;
V8_INLINE IndirectHandleBase(const IndirectHandleBase& other) = default;
V8_INLINE IndirectHandleBase& operator=(const IndirectHandleBase& that) =
default;
V8_INLINE explicit IndirectHandleBase(internal::Address* location)
: location_(location) {}
// Returns the address of the actual heap object (tagged).
// This method must be called only if the handle is not empty, otherwise it
// will crash.
V8_INLINE internal::Address ptr() const { return *location_; }
// Returns a reference to the slot (indirect pointer).
V8_INLINE internal::Address* const& slot() const { return location_; }
V8_INLINE internal::Address*& slot() { return location_; }
// Returns the handler's "value" (direct or indirect pointer, depending on
// whether direct local support is enabled).
template <typename T>
V8_INLINE T* value() const {
return internal::ValueHelper::SlotAsValue<T>(slot());
}
private:
internal::Address* location_ = nullptr;
};
#ifdef V8_ENABLE_DIRECT_LOCAL
/**
* A base class for abstract handles containing direct pointers.
* These are only possible when conservative stack scanning is enabled.
*/
class DirectHandleBase {
public:
// Returns true if the handle is empty.
V8_INLINE bool IsEmpty() const {
return ptr_ == internal::ValueHelper::kEmpty;
}
// Sets the handle to be empty. IsEmpty() will then return true.
V8_INLINE void Clear() { ptr_ = internal::ValueHelper::kEmpty; }
protected:
friend class internal::ValueHelper;
friend class internal::HandleHelper;
V8_INLINE DirectHandleBase() = default;
V8_INLINE DirectHandleBase(const DirectHandleBase& other) = default;
V8_INLINE DirectHandleBase& operator=(const DirectHandleBase& that) = default;
V8_INLINE explicit DirectHandleBase(internal::Address ptr) : ptr_(ptr) {}
// Returns the address of the referenced object.
V8_INLINE internal::Address ptr() const { return ptr_; }
// Returns the handler's "value" (direct pointer, as direct local support
// is guaranteed to be enabled here).
template <typename T>
V8_INLINE T* value() const {
return reinterpret_cast<T*>(ptr_);
}
private:
internal::Address ptr_ = internal::ValueHelper::kEmpty;
};
#endif // V8_ENABLE_DIRECT_LOCAL
} // namespace v8
#endif // INCLUDE_V8_HANDLE_BASE_H_

// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_INITIALIZATION_H_
#define INCLUDE_V8_INITIALIZATION_H_
#include <stddef.h>
#include <stdint.h>
#include "v8-callbacks.h" // NOLINT(build/include_directory)
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-isolate.h" // NOLINT(build/include_directory)
#include "v8-platform.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
// We reserve the V8_* prefix for macros defined in V8 public API and
// assume there are no name conflicts with the embedder's code.
/**
* The v8 JavaScript engine.
*/
namespace v8 {
class PageAllocator;
class Platform;
template <class K, class V, class T>
class PersistentValueMapBase;
/**
* EntropySource is used as a callback function when v8 needs a source
* of entropy.
*/
using EntropySource = bool (*)(unsigned char* buffer, size_t length);
/**
* ReturnAddressLocationResolver is used as a callback function when v8 is
* resolving the location of a return address on the stack. Profilers that
* change the return address on the stack can use this to resolve the stack
* location to wherever the profiler stashed the original return address.
*
* \param return_addr_location A location on stack where a machine
* return address resides.
* \returns Either return_addr_location, or else a pointer to the profiler's
* copy of the original return address.
*
* \note The resolver function must not cause garbage collection.
*/
using ReturnAddressLocationResolver =
uintptr_t (*)(uintptr_t return_addr_location);
using DcheckErrorCallback = void (*)(const char* file, int line,
const char* message);
/**
* Container class for static utility functions.
*/
class V8_EXPORT V8 {
public:
/**
* Hand startup data to V8, in case the embedder has chosen to build
* V8 with external startup data.
*
* Note:
* - By default the startup data is linked into the V8 library, in which
* case this function is not meaningful.
* - If this needs to be called, it needs to be called before V8
* tries to make use of its built-ins.
* - To avoid unnecessary copies of data, V8 will point directly into the
* given data blob, so pretty please keep it around until V8 exit.
* - Compression of the startup blob might be useful, but needs to be
* handled entirely on the embedders' side.
* - The call will abort if the data is invalid.
*/
static void SetSnapshotDataBlob(StartupData* startup_blob);
/** Set the callback to invoke in case of Dcheck failures. */
static void SetDcheckErrorHandler(DcheckErrorCallback that);
/**
* Sets V8 flags from a string.
*/
static void SetFlagsFromString(const char* str);
static void SetFlagsFromString(const char* str, size_t length);
/**
* Sets V8 flags from the command line.
*/
static void SetFlagsFromCommandLine(int* argc, char** argv,
bool remove_flags);
/** Get the version string. */
static const char* GetVersion();
/**
* Initializes V8. This function needs to be called before the first Isolate
* is created. It always returns true.
*/
V8_INLINE static bool Initialize() {
const int kBuildConfiguration =
(internal::PointerCompressionIsEnabled() ? kPointerCompression : 0) |
(internal::SmiValuesAre31Bits() ? k31BitSmis : 0) |
(internal::SandboxIsEnabled() ? kSandbox : 0);
return Initialize(kBuildConfiguration);
}
/**
* Allows the host application to provide a callback which can be used
* as a source of entropy for random number generators.
*/
static void SetEntropySource(EntropySource source);
/**
* Allows the host application to provide a callback that allows v8 to
* cooperate with a profiler that rewrites return addresses on stack.
*/
static void SetReturnAddressLocationResolver(
ReturnAddressLocationResolver return_address_resolver);
/**
* Releases any resources used by v8 and stops any utility threads
* that may be running. Note that disposing v8 is permanent, it
* cannot be reinitialized.
*
* It should generally not be necessary to dispose v8 before exiting
* a process, this should happen automatically. It is only necessary
* to use if the process needs the resources taken up by v8.
*/
static bool Dispose();
/**
* Initialize the ICU library bundled with V8. The embedder should only
* invoke this method when using the bundled ICU. Returns true on success.
*
* If V8 was compiled with the ICU data in an external file, the location
* of the data file has to be provided.
*/
static bool InitializeICU(const char* icu_data_file = nullptr);
/**
* Initialize the ICU library bundled with V8. The embedder should only
* invoke this method when using the bundled ICU. If V8 was compiled with
* the ICU data in an external file and when the default location of that
* file should be used, a path to the executable must be provided.
* Returns true on success.
*
* The default is a file called icudtl.dat side-by-side with the executable.
*
* Optionally, the location of the data file can be provided to override the
* default.
*/
static bool InitializeICUDefaultLocation(const char* exec_path,
const char* icu_data_file = nullptr);
/**
* Initialize the external startup data. The embedder only needs to
* invoke this method when external startup data was enabled in a build.
*
* If V8 was compiled with the startup data in an external file, then
* V8 needs to be given those external files during startup. There are
* three ways to do this:
* - InitializeExternalStartupData(const char*)
* This will look in the given directory for the file "snapshot_blob.bin".
* - InitializeExternalStartupDataFromFile(const char*)
* As above, but will directly use the given file name.
* - Call SetSnapshotDataBlob.
* This will read the blobs from the given data structure and will
* not perform any file IO.
*/
static void InitializeExternalStartupData(const char* directory_path);
static void InitializeExternalStartupDataFromFile(const char* snapshot_blob);
/**
* Sets the v8::Platform to use. This should be invoked before V8 is
* initialized.
*/
static void InitializePlatform(Platform* platform);
/**
* Clears all references to the v8::Platform. This should be invoked after
* V8 was disposed.
*/
static void DisposePlatform();
#if defined(V8_ENABLE_SANDBOX)
/**
* Returns true if the sandbox is configured securely.
*
* If V8 cannot create a regular sandbox during initialization, for example
* because not enough virtual address space can be reserved, it will instead
* create a fallback sandbox that still allows it to function normally but
* does not have the same security properties as a regular sandbox. This API
* can be used to determine if such a fallback sandbox is being used, in
* which case it will return false.
*/
static bool IsSandboxConfiguredSecurely();
/**
* Provides access to the virtual address subspace backing the sandbox.
*
* This can be used to allocate pages inside the sandbox, for example to
* obtain virtual memory for ArrayBuffer backing stores, which must be
* located inside the sandbox.
*
* It should be assumed that an attacker can corrupt data inside the sandbox,
* and so in particular the contents of pages allocated in this virtual
* address space, arbitrarily and concurrently. Due to this, it is
* recommended to only place pure data buffers in them.
*/
static VirtualAddressSpace* GetSandboxAddressSpace();
/**
* Returns the size of the sandbox in bytes.
*
* This represents the size of the address space that V8 can directly address
* and in which it allocates its objects.
*/
static size_t GetSandboxSizeInBytes();
/**
* Returns the size of the address space reservation backing the sandbox.
*
* This may be larger than the sandbox (i.e. |GetSandboxSizeInBytes()|) due
* to surrounding guard regions, or may be smaller than the sandbox in case a
* fallback sandbox is being used, which will use a smaller virtual address
* space reservation. In the latter case this will also be different from
* |GetSandboxAddressSpace()->size()| as that will cover a larger part of the
* address space than what has actually been reserved.
*/
static size_t GetSandboxReservationSizeInBytes();
#endif // V8_ENABLE_SANDBOX
/**
* Activate trap-based bounds checking for WebAssembly.
*
* \param use_v8_signal_handler Whether V8 should install its own signal
* handler or rely on the embedder's.
*/
static bool EnableWebAssemblyTrapHandler(bool use_v8_signal_handler);
#if defined(V8_OS_WIN)
/**
* On Win64, by default V8 does not emit unwinding data for jitted code,
* which means the OS cannot walk the stack frames and the system Structured
* Exception Handling (SEH) cannot unwind through V8-generated code:
* https://code.google.com/p/v8/issues/detail?id=3598.
*
* This function allows embedders to register a custom exception handler for
* exceptions in V8-generated code.
*/
static void SetUnhandledExceptionCallback(
UnhandledExceptionCallback callback);
#endif
/**
* Allows the host application to provide a callback that will be called when
* v8 has encountered a fatal failure to allocate memory and is about to
* terminate.
*/
static void SetFatalMemoryErrorCallback(OOMErrorCallback callback);
/**
* Get statistics about the shared memory usage.
*/
static void GetSharedMemoryStatistics(SharedMemoryStatistics* statistics);
private:
V8();
enum BuildConfigurationFeatures {
kPointerCompression = 1 << 0,
k31BitSmis = 1 << 1,
kSandbox = 1 << 2,
};
/**
* Checks that the embedder build configuration is compatible with
* the V8 binary and if so initializes V8.
*/
static bool Initialize(int build_config);
friend class Context;
template <class K, class V, class T>
friend class PersistentValueMapBase;
};
} // namespace v8
#endif // INCLUDE_V8_INITIALIZATION_H_


@ -6,33 +6,45 @@
#define V8_V8_INSPECTOR_H_
#include <stdint.h>
#include <cctype>
#include <memory>
#include <unordered_map>
#include "v8.h" // NOLINT(build/include_directory)
#include "v8-isolate.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Name;
class Object;
class StackTrace;
class Value;
} // namespace v8
namespace v8_inspector {
namespace internal {
class V8DebuggerId;
} // namespace internal
namespace protocol {
namespace Debugger {
namespace API {
class SearchMatch;
}
}
} // namespace Debugger
namespace Runtime {
namespace API {
class RemoteObject;
class StackTrace;
class StackTraceId;
}
}
} // namespace API
} // namespace Runtime
namespace Schema {
namespace API {
class Domain;
}
}
} // namespace Schema
} // namespace protocol
class V8_EXPORT StringView {
@ -98,6 +110,37 @@ class V8_EXPORT V8ContextInfo {
V8ContextInfo& operator=(const V8ContextInfo&) = delete;
};
// This debugger id tries to be unique by generating two random
// numbers, which should most likely avoid collisions.
// Debugger id has a 1:1 mapping to context group. It is used to
// attribute stack traces to a particular debugger, when doing any
// cross-debugger operations (e.g. async step in).
// See also Runtime.UniqueDebuggerId in the protocol.
class V8_EXPORT V8DebuggerId {
public:
V8DebuggerId() = default;
V8DebuggerId(const V8DebuggerId&) = default;
V8DebuggerId& operator=(const V8DebuggerId&) = default;
std::unique_ptr<StringBuffer> toString() const;
bool isValid() const;
std::pair<int64_t, int64_t> pair() const;
private:
friend class internal::V8DebuggerId;
explicit V8DebuggerId(std::pair<int64_t, int64_t>);
int64_t m_first = 0;
int64_t m_second = 0;
};
struct V8_EXPORT V8StackFrame {
StringView sourceURL;
StringView functionName;
int lineNumber;
int columnNumber;
};
class V8_EXPORT V8StackTrace {
public:
virtual StringView firstNonEmptySourceURL() const = 0;
@ -105,19 +148,18 @@ class V8_EXPORT V8StackTrace {
virtual StringView topSourceURL() const = 0;
virtual int topLineNumber() const = 0;
virtual int topColumnNumber() const = 0;
virtual StringView topScriptId() const = 0;
virtual int topScriptIdAsInteger() const = 0;
virtual int topScriptId() const = 0;
virtual StringView topFunctionName() const = 0;
virtual ~V8StackTrace() = default;
virtual std::unique_ptr<protocol::Runtime::API::StackTrace>
buildInspectorObject() const = 0;
virtual std::unique_ptr<protocol::Runtime::API::StackTrace>
buildInspectorObject(int maxAsyncDepth) const = 0;
virtual std::unique_ptr<StringBuffer> toString() const = 0;
// Safe to pass between threads, drops async chain.
virtual std::unique_ptr<V8StackTrace> clone() = 0;
virtual std::vector<V8StackFrame> frames() const = 0;
};
class V8_EXPORT V8InspectorSession {
@ -130,6 +172,10 @@ class V8_EXPORT V8InspectorSession {
virtual v8::Local<v8::Value> get(v8::Local<v8::Context>) = 0;
virtual ~Inspectable() = default;
};
class V8_EXPORT CommandLineAPIScope {
public:
virtual ~CommandLineAPIScope() = default;
};
virtual void addInspectedObject(std::unique_ptr<Inspectable>) = 0;
// Dispatching protocol messages.
@ -139,6 +185,9 @@ class V8_EXPORT V8InspectorSession {
virtual std::vector<std::unique_ptr<protocol::Schema::API::Domain>>
supportedDomains() = 0;
virtual std::unique_ptr<V8InspectorSession::CommandLineAPIScope>
initializeCommandLineAPIScope(int executionContextId) = 0;
// Debugger actions.
virtual void schedulePauseOnNextStatement(StringView breakReason,
StringView breakDetails) = 0;
@ -162,7 +211,19 @@ class V8_EXPORT V8InspectorSession {
v8::Local<v8::Context>*,
std::unique_ptr<StringBuffer>* objectGroup) = 0;
virtual void releaseObjectGroup(StringView) = 0;
virtual void triggerPreciseCoverageDeltaUpdate(StringView occassion) = 0;
virtual void triggerPreciseCoverageDeltaUpdate(StringView occasion) = 0;
// Prepare for shutdown (disables debugger pausing, etc.).
virtual void stop() = 0;
};
class V8_EXPORT WebDriverValue {
public:
explicit WebDriverValue(std::unique_ptr<StringBuffer> type,
v8::MaybeLocal<v8::Value> value = {})
: type(std::move(type)), value(value) {}
std::unique_ptr<StringBuffer> type;
v8::MaybeLocal<v8::Value> value;
};
class V8_EXPORT V8InspectorClient {
@ -170,6 +231,9 @@ class V8_EXPORT V8InspectorClient {
virtual ~V8InspectorClient() = default;
virtual void runMessageLoopOnPause(int contextGroupId) {}
virtual void runMessageLoopOnInstrumentationPause(int contextGroupId) {
runMessageLoopOnPause(contextGroupId);
}
virtual void quitMessageLoopOnPause() {}
virtual void runIfWaitingForDebugger(int contextGroupId) {}
@ -179,6 +243,10 @@ class V8_EXPORT V8InspectorClient {
virtual void beginUserGesture() {}
virtual void endUserGesture() {}
virtual std::unique_ptr<WebDriverValue> serializeToWebDriverValue(
v8::Local<v8::Value> v8Value, int maxDepth) {
return nullptr;
}
virtual std::unique_ptr<StringBuffer> valueSubtype(v8::Local<v8::Value>) {
return nullptr;
}
@ -186,9 +254,6 @@ class V8_EXPORT V8InspectorClient {
v8::Local<v8::Context>, v8::Local<v8::Value>) {
return nullptr;
}
virtual bool formatAccessorsAsProperties(v8::Local<v8::Value>) {
return false;
}
virtual bool isInspectableHeapObject(v8::Local<v8::Object>) { return true; }
virtual v8::Local<v8::Context> ensureDefaultContextInGroup(
@ -233,6 +298,9 @@ class V8_EXPORT V8InspectorClient {
// The caller would defer to generating a random 64 bit integer if
// this method returns 0.
virtual int64_t generateUniqueId() { return 0; }
virtual void dispatchError(v8::Local<v8::Context>, v8::Local<v8::Message>,
v8::Local<v8::Value>) {}
};
// These stack trace ids are intended to be passed between debuggers and be
@ -267,6 +335,7 @@ class V8_EXPORT V8Inspector {
virtual void contextDestroyed(v8::Local<v8::Context>) = 0;
virtual void resetContextGroup(int contextGroupId) = 0;
virtual v8::MaybeLocal<v8::Context> contextById(int contextId) = 0;
virtual V8DebuggerId uniqueDebuggerId(int contextId) = 0;
// Various instrumentation.
virtual void idleStarted() = 0;
@ -293,6 +362,10 @@ class V8_EXPORT V8Inspector {
int scriptId) = 0;
virtual void exceptionRevoked(v8::Local<v8::Context>, unsigned exceptionId,
StringView message) = 0;
virtual bool associateExceptionData(v8::Local<v8::Context>,
v8::Local<v8::Value> exception,
v8::Local<v8::Name> key,
v8::Local<v8::Value> value) = 0;
// Connection.
class V8_EXPORT Channel {
@ -303,32 +376,20 @@ class V8_EXPORT V8Inspector {
virtual void sendNotification(std::unique_ptr<StringBuffer> message) = 0;
virtual void flushProtocolNotifications() = 0;
};
virtual std::unique_ptr<V8InspectorSession> connect(int contextGroupId,
Channel*,
StringView state) = 0;
enum ClientTrustLevel { kUntrusted, kFullyTrusted };
enum SessionPauseState { kWaitingForDebugger, kNotWaitingForDebugger };
// TODO(chromium:1352175): remove default value once downstream change lands.
virtual std::unique_ptr<V8InspectorSession> connect(
int contextGroupId, Channel*, StringView state,
ClientTrustLevel client_trust_level,
SessionPauseState = kNotWaitingForDebugger) {
return nullptr;
}
// API methods.
virtual std::unique_ptr<V8StackTrace> createStackTrace(
v8::Local<v8::StackTrace>) = 0;
virtual std::unique_ptr<V8StackTrace> captureStackTrace(bool fullStack) = 0;
// Performance counters.
class V8_EXPORT Counters : public std::enable_shared_from_this<Counters> {
public:
explicit Counters(v8::Isolate* isolate);
~Counters();
const std::unordered_map<std::string, int>& getCountersMap() const {
return m_countersMap;
}
private:
static int* getCounterPtr(const char* name);
v8::Isolate* m_isolate;
std::unordered_map<std::string, int> m_countersMap;
};
virtual std::shared_ptr<Counters> enableCounters() = 0;
};
} // namespace v8_inspector


@ -8,13 +8,15 @@
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <atomic>
#include <type_traits>
#include "v8-version.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Array;
class Context;
class Data;
class Isolate;
@ -24,7 +26,14 @@ namespace internal {
class Isolate;
typedef uintptr_t Address;
static const Address kNullAddress = 0;
static constexpr Address kNullAddress = 0;
constexpr int KB = 1024;
constexpr int MB = KB * 1024;
constexpr int GB = MB * 1024;
#ifdef V8_TARGET_ARCH_X64
constexpr size_t TB = size_t{GB} * 1024;
#endif
/**
* Configuration of tagging scheme.
@ -33,12 +42,21 @@ const int kApiSystemPointerSize = sizeof(void*);
const int kApiDoubleSize = sizeof(double);
const int kApiInt32Size = sizeof(int32_t);
const int kApiInt64Size = sizeof(int64_t);
const int kApiSizetSize = sizeof(size_t);
// Tag information for HeapObject.
const int kHeapObjectTag = 1;
const int kWeakHeapObjectTag = 3;
const int kHeapObjectTagSize = 2;
const intptr_t kHeapObjectTagMask = (1 << kHeapObjectTagSize) - 1;
const intptr_t kHeapObjectReferenceTagMask = 1 << (kHeapObjectTagSize - 1);
// Tag information for forwarding pointers stored in object headers.
// 0b00 at the lowest 2 bits in the header indicates that the map word is a
// forwarding pointer.
const int kForwardingTag = 0;
const int kForwardingTagSize = 2;
const intptr_t kForwardingTagMask = (1 << kForwardingTagSize) - 1;
// Tag information for Smi.
const int kSmiTag = 0;
@ -61,7 +79,7 @@ struct SmiTagging<4> {
static_cast<intptr_t>(kUintptrAllBitsSet << (kSmiValueSize - 1));
static constexpr intptr_t kSmiMaxValue = -(kSmiMinValue + 1);
V8_INLINE static int SmiToInt(const internal::Address value) {
V8_INLINE static constexpr int SmiToInt(Address value) {
int shift_bits = kSmiTagSize + kSmiShiftSize;
// Truncate and shift down (requires >> to be sign extending).
return static_cast<int32_t>(static_cast<uint32_t>(value)) >> shift_bits;
@ -86,7 +104,7 @@ struct SmiTagging<8> {
static_cast<intptr_t>(kUintptrAllBitsSet << (kSmiValueSize - 1));
static constexpr intptr_t kSmiMaxValue = -(kSmiMinValue + 1);
V8_INLINE static int SmiToInt(const internal::Address value) {
V8_INLINE static constexpr int SmiToInt(Address value) {
int shift_bits = kSmiTagSize + kSmiShiftSize;
// Shift down and throw away top 32 bits.
return static_cast<int>(static_cast<intptr_t>(value) >> shift_bits);
@ -98,6 +116,11 @@ struct SmiTagging<8> {
};
#ifdef V8_COMPRESS_POINTERS
// See v8:7703 or src/common/ptr-compr-inl.h for details about pointer
// compression.
constexpr size_t kPtrComprCageReservationSize = size_t{1} << 32;
constexpr size_t kPtrComprCageBaseAlignment = size_t{1} << 32;
static_assert(
kApiSystemPointerSize == kApiInt64Size,
"Pointer compression can be enabled only for 64-bit architectures");
@ -110,33 +133,6 @@ constexpr bool PointerCompressionIsEnabled() {
return kApiTaggedSize != kApiSystemPointerSize;
}
constexpr bool HeapSandboxIsEnabled() {
#ifdef V8_HEAP_SANDBOX
return true;
#else
return false;
#endif
}
using ExternalPointer_t = Address;
// If the heap sandbox is enabled, these tag values will be XORed with the
// external pointers in the external pointer table to prevent use of pointers of
// the wrong type.
enum ExternalPointerTag : Address {
kExternalPointerNullTag = static_cast<Address>(0ULL),
kArrayBufferBackingStoreTag = static_cast<Address>(1ULL << 48),
kTypedArrayExternalPointerTag = static_cast<Address>(2ULL << 48),
kDataViewDataPointerTag = static_cast<Address>(3ULL << 48),
kExternalStringResourceTag = static_cast<Address>(4ULL << 48),
kExternalStringResourceDataTag = static_cast<Address>(5ULL << 48),
kForeignForeignAddressTag = static_cast<Address>(6ULL << 48),
kNativeContextMicrotaskQueueTag = static_cast<Address>(7ULL << 48),
// TODO(v8:10391, saelo): Currently has to be zero so that raw zero values are
// also nullptr
kEmbedderDataSlotPayloadTag = static_cast<Address>(0ULL << 48),
};
#ifdef V8_31BIT_SMIS_ON_64BIT_ARCH
using PlatformSmiTagging = SmiTagging<kApiInt32Size>;
#else
@ -151,16 +147,369 @@ const int kSmiMinValue = static_cast<int>(PlatformSmiTagging::kSmiMinValue);
const int kSmiMaxValue = static_cast<int>(PlatformSmiTagging::kSmiMaxValue);
constexpr bool SmiValuesAre31Bits() { return kSmiValueSize == 31; }
constexpr bool SmiValuesAre32Bits() { return kSmiValueSize == 32; }
constexpr bool Is64() { return kApiSystemPointerSize == sizeof(int64_t); }
V8_INLINE static constexpr internal::Address IntToSmi(int value) {
V8_INLINE static constexpr Address IntToSmi(int value) {
return (static_cast<Address>(value) << (kSmiTagSize + kSmiShiftSize)) |
kSmiTag;
}
// Converts encoded external pointer to address.
V8_EXPORT Address DecodeExternalPointerImpl(const Isolate* isolate,
ExternalPointer_t pointer,
ExternalPointerTag tag);
/*
* Sandbox related types, constants, and functions.
*/
constexpr bool SandboxIsEnabled() {
#ifdef V8_ENABLE_SANDBOX
return true;
#else
return false;
#endif
}
// SandboxedPointers are guaranteed to point into the sandbox. This is achieved
// for example by storing them as offset rather than as raw pointers.
using SandboxedPointer_t = Address;
#ifdef V8_ENABLE_SANDBOX
// Size of the sandbox, excluding the guard regions surrounding it.
#ifdef V8_TARGET_OS_ANDROID
// On Android, most 64-bit devices seem to be configured with only 39 bits of
// virtual address space for userspace. As such, limit the sandbox to 128GB (a
// quarter of the total available address space).
constexpr size_t kSandboxSizeLog2 = 37; // 128 GB
#else
// Everywhere else use a 1TB sandbox.
constexpr size_t kSandboxSizeLog2 = 40; // 1 TB
#endif // V8_TARGET_OS_ANDROID
constexpr size_t kSandboxSize = 1ULL << kSandboxSizeLog2;
// Required alignment of the sandbox. For simplicity, we require the
// size of the guard regions to be a multiple of this, so that this specifies
// the alignment of the sandbox including and excluding surrounding guard
// regions. The alignment requirement is due to the pointer compression cage
// being located at the start of the sandbox.
constexpr size_t kSandboxAlignment = kPtrComprCageBaseAlignment;
// Sandboxed pointers are stored inside the heap as offset from the sandbox
// base shifted to the left. This way, it is guaranteed that the offset is
// smaller than the sandbox size after shifting it to the right again. This
// constant specifies the shift amount.
constexpr uint64_t kSandboxedPointerShift = 64 - kSandboxSizeLog2;
// Size of the guard regions surrounding the sandbox. This assumes a worst-case
// scenario of a 32-bit unsigned index used to access an array of 64-bit
// values.
constexpr size_t kSandboxGuardRegionSize = 32ULL * GB;
static_assert((kSandboxGuardRegionSize % kSandboxAlignment) == 0,
"The size of the guard regions around the sandbox must be a "
"multiple of its required alignment.");
// On OSes where reserving virtual memory is too expensive to reserve the
// entire address space backing the sandbox, notably Windows pre 8.1, we create
// a partially reserved sandbox that doesn't actually reserve most of the
// memory, and so doesn't have the desired security properties as unrelated
// memory allocations could end up inside of it, but which still ensures that
// objects that should be located inside the sandbox are allocated within
// kSandboxSize bytes from the start of the sandbox. The minimum size of the
// region that is actually reserved for such a sandbox is specified by this
// constant and should be big enough to contain the pointer compression cage as
// well as the ArrayBuffer partition.
constexpr size_t kSandboxMinimumReservationSize = 8ULL * GB;
static_assert(kSandboxMinimumReservationSize > kPtrComprCageReservationSize,
"The minimum reservation size for a sandbox must be larger than "
"the pointer compression cage contained within it.");
// The maximum buffer size allowed inside the sandbox. This is mostly dependent
// on the size of the guard regions around the sandbox: an attacker must not be
// able to construct a buffer that appears larger than the guard regions and
// thereby "reach out of" the sandbox.
constexpr size_t kMaxSafeBufferSizeForSandbox = 32ULL * GB - 1;
static_assert(kMaxSafeBufferSizeForSandbox <= kSandboxGuardRegionSize,
"The maximum allowed buffer size must not be larger than the "
"sandbox's guard regions");
constexpr size_t kBoundedSizeShift = 29;
static_assert(1ULL << (64 - kBoundedSizeShift) ==
kMaxSafeBufferSizeForSandbox + 1,
"The maximum size of a BoundedSize must be synchronized with the "
"kMaxSafeBufferSizeForSandbox");
#endif // V8_ENABLE_SANDBOX
#ifdef V8_COMPRESS_POINTERS
#ifdef V8_TARGET_OS_ANDROID
// The size of the virtual memory reservation for an external pointer table.
// This determines the maximum number of entries in a table. Using a maximum
// size allows omitting bounds checks on table accesses if the indices are
// guaranteed (e.g. through shifting) to be below the maximum index. This
// value must be a power of two.
static const size_t kExternalPointerTableReservationSize = 512 * MB;
// The external pointer table indices stored in HeapObjects as external
// pointers are shifted to the left by this amount to guarantee that they are
// smaller than the maximum table size.
static const uint32_t kExternalPointerIndexShift = 6;
#else
static const size_t kExternalPointerTableReservationSize = 1024 * MB;
static const uint32_t kExternalPointerIndexShift = 5;
#endif // V8_TARGET_OS_ANDROID
// The maximum number of entries in an external pointer table.
static const int kExternalPointerTableEntrySize = 8;
static const int kExternalPointerTableEntrySizeLog2 = 3;
static const size_t kMaxExternalPointers =
kExternalPointerTableReservationSize / kExternalPointerTableEntrySize;
static_assert((1 << (32 - kExternalPointerIndexShift)) == kMaxExternalPointers,
"kExternalPointerTableReservationSize and "
"kExternalPointerIndexShift don't match");
#else // !V8_COMPRESS_POINTERS
// Needed for the V8.SandboxedExternalPointersCount histogram.
static const size_t kMaxExternalPointers = 0;
#endif // V8_COMPRESS_POINTERS
// An ExternalPointerHandle represents an (opaque) reference to an external
// pointer that can be stored inside the sandbox. An ExternalPointerHandle has
// meaning only in combination with an (active) Isolate as it references an
// external pointer stored in the currently active Isolate's
// ExternalPointerTable. Internally, an ExternalPointerHandle is simply an
// index into an ExternalPointerTable that is shifted to the left to guarantee
// that it is smaller than the size of the table.
using ExternalPointerHandle = uint32_t;
// ExternalPointers point to objects located outside the sandbox. When the V8
// sandbox is enabled, these are stored on heap as ExternalPointerHandles,
// otherwise they are simply raw pointers.
#ifdef V8_ENABLE_SANDBOX
using ExternalPointer_t = ExternalPointerHandle;
#else
using ExternalPointer_t = Address;
#endif
constexpr ExternalPointer_t kNullExternalPointer = 0;
constexpr ExternalPointerHandle kNullExternalPointerHandle = 0;
// When the sandbox is enabled, external pointers are stored in an external
// pointer table and are referenced from HeapObjects through an index (a
// "handle"). When stored in the table, the pointers are tagged with per-type
// tags to prevent type confusion attacks between different external objects.
// Besides type information bits, these tags also contain the GC marking bit
// which indicates whether the pointer table entry is currently alive. When a
// pointer is written into the table, the tag is ORed into the top bits. When
// that pointer is later loaded from the table, it is ANDed with the inverse of
// the expected tag. If the expected and actual type differ, this will leave
// some of the top bits of the pointer set, rendering the pointer inaccessible.
// The AND operation also removes the GC marking bit from the pointer.
//
// The tags are constructed such that UNTAG(TAG(0, T1), T2) != 0 for any two
// (distinct) tags T1 and T2. In practice, this is achieved by generating tags
// that all have the same number of zeroes and ones but different bit patterns.
// With N type tag bits, this allows for (N choose N/2) possible type tags.
// Besides the type tag bits, the tags also have the GC marking bit set so that
// the marking bit is automatically set when a pointer is written into the
// external pointer table (in which case it is clearly alive) and is cleared
// when the pointer is loaded. The exception to this is the free entry tag,
// which doesn't have the mark bit set, as the entry is not alive. This
// construction allows performing the type check and removing GC marking bits
// from the pointer in one efficient operation (bitwise AND). The number of
// available bits is limited in the following way: on x64, bits [47, 64) are
// generally available for tagging (userspace has 47 address bits available).
// On Arm64, userspace typically has a 40 or 48 bit address space. However, due
// to top-byte ignore (TBI) and memory tagging (MTE), the top byte is unusable
// for type checks as type-check failures would go unnoticed or collide with
// MTE bits. Some bits of the top byte can, however, still be used for the GC
// marking bit. The bits available for the type tags are therefore limited to
// [48, 56), i.e. (8 choose 4) = 70 different types.
// The following options exist to increase the number of possible types:
// - Using multiple ExternalPointerTables since tags can safely be reused
// across different tables
// - Using "extended" type checks, where additional type information is stored
// either in an adjacent pointer table entry or at the pointed-to location
// - Using a different tagging scheme, for example based on XOR which would
// allow for 2**8 different tags but require a separate operation to remove
// the marking bit
//
// The external pointer sandboxing mechanism ensures that every access to an
// external pointer field will result in a valid pointer of the expected type
// even in the presence of an attacker able to corrupt memory inside the
// sandbox. However, if any data related to the external object is stored
// inside the sandbox it may still be corrupted and so must be validated before
// use or moved into the external object. Further, an attacker will always be
// able to substitute different external pointers of the same type for each
// other. Therefore, code using external pointers must be written in a
// "substitution-safe" way, i.e. it must always be possible to substitute
// external pointers of the same type without causing memory corruption outside
// of the sandbox. Generally this is achieved by referencing any group of
// related external objects through a single external pointer.
//
// Currently we use bit 62 for the marking bit which should always be unused as
// it's part of the non-canonical address range. When Arm's top-byte ignore
// (TBI) is enabled, this bit will be part of the ignored byte, and we assume
// that the Embedder is not using this byte (really only this one bit) for any
// other purpose. This bit also does not collide with the memory tagging
// extension (MTE) which would use bits [56, 60).
//
// External pointer tables are also available even when the sandbox is off but
// pointer compression is on. In that case, the mechanism can be used to ease
// alignment requirements as it turns unaligned 64-bit raw pointers into
// aligned 32-bit indices. To "opt-in" to the external pointer table mechanism
// for this purpose, instead of using the ExternalPointer accessors one needs to
// use ExternalPointerHandles directly and use them to access the pointers in an
// ExternalPointerTable.
constexpr uint64_t kExternalPointerMarkBit = 1ULL << 62;
constexpr uint64_t kExternalPointerTagMask = 0x40ff000000000000;
constexpr uint64_t kExternalPointerTagShift = 48;
// All possible 8-bit type tags.
// These are sorted so that tags can be grouped together and it can efficiently
// be checked if a tag belongs to a given group. See for example the
// IsSharedExternalPointerType routine.
constexpr uint64_t kAllExternalPointerTypeTags[] = {
0b00001111, 0b00010111, 0b00011011, 0b00011101, 0b00011110, 0b00100111,
0b00101011, 0b00101101, 0b00101110, 0b00110011, 0b00110101, 0b00110110,
0b00111001, 0b00111010, 0b00111100, 0b01000111, 0b01001011, 0b01001101,
0b01001110, 0b01010011, 0b01010101, 0b01010110, 0b01011001, 0b01011010,
0b01011100, 0b01100011, 0b01100101, 0b01100110, 0b01101001, 0b01101010,
0b01101100, 0b01110001, 0b01110010, 0b01110100, 0b01111000, 0b10000111,
0b10001011, 0b10001101, 0b10001110, 0b10010011, 0b10010101, 0b10010110,
0b10011001, 0b10011010, 0b10011100, 0b10100011, 0b10100101, 0b10100110,
0b10101001, 0b10101010, 0b10101100, 0b10110001, 0b10110010, 0b10110100,
0b10111000, 0b11000011, 0b11000101, 0b11000110, 0b11001001, 0b11001010,
0b11001100, 0b11010001, 0b11010010, 0b11010100, 0b11011000, 0b11100001,
0b11100010, 0b11100100, 0b11101000, 0b11110000};
#define TAG(i) \
((kAllExternalPointerTypeTags[i] << kExternalPointerTagShift) | \
kExternalPointerMarkBit)
// clang-format off
// When adding new tags, please ensure that the code using these tags is
// "substitution-safe", i.e. still operate safely if external pointers of the
// same type are swapped by an attacker. See comment above for more details.
// Shared external pointers are owned by the shared Isolate and stored in the
// shared external pointer table associated with that Isolate, where they can
// be accessed from multiple threads at the same time. The objects referenced
// in this way must therefore always be thread-safe.
#define SHARED_EXTERNAL_POINTER_TAGS(V) \
V(kFirstSharedTag, TAG(0)) \
V(kWaiterQueueNodeTag, TAG(0)) \
V(kExternalStringResourceTag, TAG(1)) \
V(kExternalStringResourceDataTag, TAG(2)) \
V(kLastSharedTag, TAG(2))
// External pointers using these tags are kept in a per-Isolate external
// pointer table and can only be accessed when this Isolate is active.
#define PER_ISOLATE_EXTERNAL_POINTER_TAGS(V) \
V(kForeignForeignAddressTag, TAG(10)) \
V(kNativeContextMicrotaskQueueTag, TAG(11)) \
V(kEmbedderDataSlotPayloadTag, TAG(12)) \
/* This tag essentially stands for a `void*` pointer in the V8 API, and */ \
/* it is the Embedder's responsibility to ensure type safety (against */ \
/* substitution) and lifetime validity of these objects. */ \
V(kExternalObjectValueTag, TAG(13)) \
V(kCallHandlerInfoCallbackTag, TAG(14)) \
V(kAccessorInfoGetterTag, TAG(15)) \
V(kAccessorInfoSetterTag, TAG(16)) \
V(kWasmInternalFunctionCallTargetTag, TAG(17)) \
V(kWasmTypeInfoNativeTypeTag, TAG(18)) \
V(kWasmExportedFunctionDataSignatureTag, TAG(19)) \
V(kWasmContinuationJmpbufTag, TAG(20)) \
V(kArrayBufferExtensionTag, TAG(21))
// All external pointer tags.
#define ALL_EXTERNAL_POINTER_TAGS(V) \
SHARED_EXTERNAL_POINTER_TAGS(V) \
PER_ISOLATE_EXTERNAL_POINTER_TAGS(V)
#define EXTERNAL_POINTER_TAG_ENUM(Name, Tag) Name = Tag,
#define MAKE_TAG(HasMarkBit, TypeTag) \
((static_cast<uint64_t>(TypeTag) << kExternalPointerTagShift) | \
(HasMarkBit ? kExternalPointerMarkBit : 0))
enum ExternalPointerTag : uint64_t {
// Empty tag value. Mostly used as placeholder.
kExternalPointerNullTag = MAKE_TAG(1, 0b00000000),
// External pointer tag that will match any external pointer. Use with care!
kAnyExternalPointerTag = MAKE_TAG(1, 0b11111111),
// The free entry tag has all type bits set so every type check with a
// different type fails. It also doesn't have the mark bit set as free
// entries are (by definition) not alive.
kExternalPointerFreeEntryTag = MAKE_TAG(0, 0b11111111),
// Evacuation entries are used during external pointer table compaction.
kExternalPointerEvacuationEntryTag = MAKE_TAG(1, 0b11100111),
ALL_EXTERNAL_POINTER_TAGS(EXTERNAL_POINTER_TAG_ENUM)
};
#undef MAKE_TAG
#undef TAG
#undef EXTERNAL_POINTER_TAG_ENUM
// clang-format on
// True if the external pointer must be accessed from the shared isolate's
// external pointer table.
V8_INLINE static constexpr bool IsSharedExternalPointerType(
ExternalPointerTag tag) {
return tag >= kFirstSharedTag && tag <= kLastSharedTag;
}
// Sanity checks.
#define CHECK_SHARED_EXTERNAL_POINTER_TAGS(Tag, ...) \
static_assert(IsSharedExternalPointerType(Tag));
#define CHECK_NON_SHARED_EXTERNAL_POINTER_TAGS(Tag, ...) \
static_assert(!IsSharedExternalPointerType(Tag));
SHARED_EXTERNAL_POINTER_TAGS(CHECK_SHARED_EXTERNAL_POINTER_TAGS)
PER_ISOLATE_EXTERNAL_POINTER_TAGS(CHECK_NON_SHARED_EXTERNAL_POINTER_TAGS)
#undef CHECK_NON_SHARED_EXTERNAL_POINTER_TAGS
#undef CHECK_SHARED_EXTERNAL_POINTER_TAGS
#undef SHARED_EXTERNAL_POINTER_TAGS
#undef EXTERNAL_POINTER_TAGS
// A handle to a code pointer stored in a code pointer table.
using CodePointerHandle = uint32_t;
// CodePointers point to machine code (JIT or AOT compiled). When
// the V8 sandbox is enabled, these are stored as CodePointerHandles on the heap
// (i.e. as an index into a code pointer table). Otherwise, they are simply raw
// pointers.
#ifdef V8_CODE_POINTER_SANDBOXING
using CodePointer_t = CodePointerHandle;
#else
using CodePointer_t = Address;
#endif
constexpr CodePointerHandle kNullCodePointerHandle = 0;
// The size of the virtual memory reservation for code pointer table.
// This determines the maximum number of entries in a table. Using a maximum
// size allows omitting bounds checks on table accesses if the indices are
// guaranteed (e.g. through shifting) to be below the maximum index. This
// value must be a power of two.
static const size_t kCodePointerTableReservationSize = 512 * MB;
// The code pointer table indices stored in HeapObjects as external
// pointers are shifted to the left by this amount to guarantee that they are
// smaller than the maximum table size.
static const uint32_t kCodePointerIndexShift = 6;
// The size of each entry in the code pointer table.
static const int kCodePointerTableEntrySize = 8;
static const int kCodePointerTableEntrySizeLog2 = 3;
static const size_t kMaxCodePointers =
kCodePointerTableReservationSize / kCodePointerTableEntrySize;
static_assert(
(1 << (32 - kCodePointerIndexShift)) == kMaxCodePointers,
"kCodePointerTableReservationSize and kCodePointerIndexShift don't match");
// {obj} must be the raw tagged pointer representation of a HeapObject
// that's guaranteed to never be in ReadOnlySpace.
@ -169,14 +518,20 @@ V8_EXPORT internal::Isolate* IsolateFromNeverReadOnlySpaceObject(Address obj);
// Returns whether we need to throw when an error occurs. This infers the language
// mode based on the current context and the closure. This returns true if the
// language mode is strict.
V8_EXPORT bool ShouldThrowOnError(v8::internal::Isolate* isolate);
V8_EXPORT bool ShouldThrowOnError(internal::Isolate* isolate);
/**
* This class exports constants and functionality from within v8 that
* is necessary to implement inline functions in the v8 api. Don't
* depend on functions and constants defined here.
*/
class Internals {
#ifdef V8_MAP_PACKING
V8_INLINE static constexpr Address UnpackMapWord(Address mapword) {
// TODO(wenyuzhao): Clear header metadata.
return mapword ^ kMapWordXorMask;
}
#endif
public:
// These values match non-compiler-dependent values defined within
// the implementation of v8.
@ -190,35 +545,94 @@ class Internals {
static const int kFixedArrayHeaderSize = 2 * kApiTaggedSize;
static const int kEmbedderDataArrayHeaderSize = 2 * kApiTaggedSize;
static const int kEmbedderDataSlotSize = kApiSystemPointerSize;
#ifdef V8_HEAP_SANDBOX
static const int kEmbedderDataSlotRawPayloadOffset = kApiTaggedSize;
#ifdef V8_ENABLE_SANDBOX
static const int kEmbedderDataSlotExternalPointerOffset = kApiTaggedSize;
#else
static const int kEmbedderDataSlotExternalPointerOffset = 0;
#endif
static const int kNativeContextEmbedderDataOffset = 6 * kApiTaggedSize;
static const int kFullStringRepresentationMask = 0x0f;
static const int kStringRepresentationAndEncodingMask = 0x0f;
static const int kStringEncodingMask = 0x8;
static const int kExternalTwoByteRepresentationTag = 0x02;
static const int kExternalOneByteRepresentationTag = 0x0a;
static const uint32_t kNumIsolateDataSlots = 4;
static const int kStackGuardSize = 8 * kApiSystemPointerSize;
static const int kBuiltinTier0EntryTableSize = 7 * kApiSystemPointerSize;
static const int kBuiltinTier0TableSize = 7 * kApiSystemPointerSize;
static const int kLinearAllocationAreaSize = 3 * kApiSystemPointerSize;
static const int kThreadLocalTopSize = 25 * kApiSystemPointerSize;
static const int kHandleScopeDataSize =
2 * kApiSystemPointerSize + 2 * kApiInt32Size;
// ExternalPointerTable layout guarantees.
static const int kExternalPointerTableBufferOffset = 0;
static const int kExternalPointerTableSize = 4 * kApiSystemPointerSize;
// IsolateData layout guarantees.
static const int kIsolateEmbedderDataOffset = 0;
static const int kIsolateCageBaseOffset = 0;
static const int kIsolateStackGuardOffset =
kIsolateCageBaseOffset + kApiSystemPointerSize;
static const int kVariousBooleanFlagsOffset =
kIsolateStackGuardOffset + kStackGuardSize;
static const int kBuiltinTier0EntryTableOffset =
kVariousBooleanFlagsOffset + 8;
static const int kBuiltinTier0TableOffset =
kBuiltinTier0EntryTableOffset + kBuiltinTier0EntryTableSize;
static const int kNewAllocationInfoOffset =
kBuiltinTier0TableOffset + kBuiltinTier0TableSize;
static const int kOldAllocationInfoOffset =
kNewAllocationInfoOffset + kLinearAllocationAreaSize;
static const int kIsolateFastCCallCallerFpOffset =
kNumIsolateDataSlots * kApiSystemPointerSize;
kOldAllocationInfoOffset + kLinearAllocationAreaSize;
static const int kIsolateFastCCallCallerPcOffset =
kIsolateFastCCallCallerFpOffset + kApiSystemPointerSize;
static const int kIsolateFastApiCallTargetOffset =
kIsolateFastCCallCallerPcOffset + kApiSystemPointerSize;
static const int kIsolateStackGuardOffset =
static const int kIsolateLongTaskStatsCounterOffset =
kIsolateFastApiCallTargetOffset + kApiSystemPointerSize;
static const int kIsolateThreadLocalTopOffset =
kIsolateLongTaskStatsCounterOffset + kApiSizetSize;
static const int kIsolateHandleScopeDataOffset =
kIsolateThreadLocalTopOffset + kThreadLocalTopSize;
static const int kIsolateEmbedderDataOffset =
kIsolateHandleScopeDataOffset + kHandleScopeDataSize;
#ifdef V8_COMPRESS_POINTERS
static const int kIsolateExternalPointerTableOffset =
kIsolateEmbedderDataOffset + kNumIsolateDataSlots * kApiSystemPointerSize;
static const int kIsolateSharedExternalPointerTableAddressOffset =
kIsolateExternalPointerTableOffset + kExternalPointerTableSize;
static const int kIsolateApiCallbackThunkArgumentOffset =
kIsolateSharedExternalPointerTableAddressOffset + kApiSystemPointerSize;
#else
static const int kIsolateApiCallbackThunkArgumentOffset =
kIsolateEmbedderDataOffset + kNumIsolateDataSlots * kApiSystemPointerSize;
#endif
static const int kIsolateRootsOffset =
kIsolateStackGuardOffset + 7 * kApiSystemPointerSize;
kIsolateApiCallbackThunkArgumentOffset + kApiSystemPointerSize;
static const int kExternalPointerTableBufferOffset = 0;
static const int kExternalPointerTableLengthOffset =
kExternalPointerTableBufferOffset + kApiSystemPointerSize;
static const int kExternalPointerTableCapacityOffset =
kExternalPointerTableLengthOffset + kApiInt32Size;
#if V8_STATIC_ROOTS_BOOL
// These constants need to be initialized in api.cc.
#define EXPORTED_STATIC_ROOTS_PTR_LIST(V) \
V(UndefinedValue) \
V(NullValue) \
V(TrueValue) \
V(FalseValue) \
V(EmptyString) \
V(TheHoleValue)
using Tagged_t = uint32_t;
struct StaticReadOnlyRoot {
#define DEF_ROOT(name) V8_EXPORT static const Tagged_t k##name;
EXPORTED_STATIC_ROOTS_PTR_LIST(DEF_ROOT)
#undef DEF_ROOT
V8_EXPORT static const Tagged_t kFirstStringMap;
V8_EXPORT static const Tagged_t kLastStringMap;
};
#endif // V8_STATIC_ROOTS_BOOL
static const int kUndefinedValueRootIndex = 4;
static const int kTheHoleValueRootIndex = 5;
@ -229,16 +643,18 @@ class Internals {
static const int kNodeClassIdOffset = 1 * kApiSystemPointerSize;
static const int kNodeFlagsOffset = 1 * kApiSystemPointerSize + 3;
static const int kNodeStateMask = 0x7;
static const int kNodeStateMask = 0x3;
static const int kNodeStateIsWeakValue = 2;
static const int kNodeStateIsPendingValue = 3;
static const int kFirstNonstringType = 0x40;
static const int kOddballType = 0x43;
static const int kForeignType = 0x46;
static const int kTracedNodeClassIdOffset = kApiSystemPointerSize;
static const int kFirstNonstringType = 0x80;
static const int kOddballType = 0x83;
static const int kForeignType = 0xcc;
static const int kJSSpecialApiObjectType = 0x410;
static const int kJSApiObjectType = 0x420;
static const int kJSObjectType = 0x421;
static const int kFirstJSApiObjectType = 0x422;
static const int kLastJSApiObjectType = 0x80A;
static const int kUndefinedOddballKind = 5;
static const int kNullOddballKind = 3;
@ -253,6 +669,17 @@ class Internals {
// incremental GC once the external memory reaches this limit.
static constexpr int kExternalAllocationSoftLimit = 64 * 1024 * 1024;
#ifdef V8_MAP_PACKING
static const uintptr_t kMapWordMetadataMask = 0xffffULL << 48;
// The lowest two bits of mapwords are always `0b10`
static const uintptr_t kMapWordSignature = 0b10;
// XORing a (non-compressed) map with this mask ensures that the two
// low-order bits are 0b10. The 0 at the end makes this look like a Smi,
// although real Smis have all lower 32 bits unset. We only rely on these
// values passing as Smis in very few places.
static const int kMapWordXorMask = 0b11;
#endif
V8_EXPORT static void CheckInitializedImpl(v8::Isolate* isolate);
V8_INLINE static void CheckInitialized(v8::Isolate* isolate) {
#ifdef V8_ENABLE_CHECKS
@ -260,15 +687,15 @@ class Internals {
#endif
}
V8_INLINE static bool HasHeapObjectTag(const internal::Address value) {
V8_INLINE static constexpr bool HasHeapObjectTag(Address value) {
return (value & kHeapObjectTagMask) == static_cast<Address>(kHeapObjectTag);
}
V8_INLINE static int SmiValue(const internal::Address value) {
V8_INLINE static constexpr int SmiValue(Address value) {
return PlatformSmiTagging::SmiToInt(value);
}
V8_INLINE static constexpr internal::Address IntToSmi(int value) {
V8_INLINE static constexpr Address IntToSmi(int value) {
return internal::IntToSmi(value);
}
@ -276,70 +703,136 @@ class Internals {
return PlatformSmiTagging::IsValidSmi(value);
}
V8_INLINE static int GetInstanceType(const internal::Address obj) {
typedef internal::Address A;
A map = ReadTaggedPointerField(obj, kHeapObjectMapOffset);
#if V8_STATIC_ROOTS_BOOL
V8_INLINE static bool is_identical(Address obj, Tagged_t constant) {
return static_cast<Tagged_t>(obj) == constant;
}
V8_INLINE static bool CheckInstanceMapRange(Address obj, Tagged_t first_map,
Tagged_t last_map) {
auto map = ReadRawField<Tagged_t>(obj, kHeapObjectMapOffset);
#ifdef V8_MAP_PACKING
map = UnpackMapWord(map);
#endif
return map >= first_map && map <= last_map;
}
#endif
V8_INLINE static int GetInstanceType(Address obj) {
Address map = ReadTaggedPointerField(obj, kHeapObjectMapOffset);
#ifdef V8_MAP_PACKING
map = UnpackMapWord(map);
#endif
return ReadRawField<uint16_t>(map, kMapInstanceTypeOffset);
}
V8_INLINE static int GetOddballKind(const internal::Address obj) {
V8_INLINE static int GetOddballKind(Address obj) {
return SmiValue(ReadTaggedSignedField(obj, kOddballKindOffset));
}
V8_INLINE static bool IsExternalTwoByteString(int instance_type) {
int representation = (instance_type & kFullStringRepresentationMask);
int representation = (instance_type & kStringRepresentationAndEncodingMask);
return representation == kExternalTwoByteRepresentationTag;
}
V8_INLINE static uint8_t GetNodeFlag(internal::Address* obj, int shift) {
V8_INLINE static constexpr bool CanHaveInternalField(int instance_type) {
static_assert(kJSObjectType + 1 == kFirstJSApiObjectType);
static_assert(kJSObjectType < kLastJSApiObjectType);
static_assert(kFirstJSApiObjectType < kLastJSApiObjectType);
// Check for IsJSObject() || IsJSSpecialApiObject() || IsJSApiObject()
return instance_type == kJSSpecialApiObjectType ||
// inlined version of base::IsInRange
(static_cast<unsigned>(static_cast<unsigned>(instance_type) -
static_cast<unsigned>(kJSObjectType)) <=
static_cast<unsigned>(kLastJSApiObjectType - kJSObjectType));
}
V8_INLINE static uint8_t GetNodeFlag(Address* obj, int shift) {
uint8_t* addr = reinterpret_cast<uint8_t*>(obj) + kNodeFlagsOffset;
return *addr & static_cast<uint8_t>(1U << shift);
}
V8_INLINE static void UpdateNodeFlag(internal::Address* obj, bool value,
int shift) {
V8_INLINE static void UpdateNodeFlag(Address* obj, bool value, int shift) {
uint8_t* addr = reinterpret_cast<uint8_t*>(obj) + kNodeFlagsOffset;
uint8_t mask = static_cast<uint8_t>(1U << shift);
*addr = static_cast<uint8_t>((*addr & ~mask) | (value << shift));
}
V8_INLINE static uint8_t GetNodeState(internal::Address* obj) {
V8_INLINE static uint8_t GetNodeState(Address* obj) {
uint8_t* addr = reinterpret_cast<uint8_t*>(obj) + kNodeFlagsOffset;
return *addr & kNodeStateMask;
}
V8_INLINE static void UpdateNodeState(internal::Address* obj, uint8_t value) {
V8_INLINE static void UpdateNodeState(Address* obj, uint8_t value) {
uint8_t* addr = reinterpret_cast<uint8_t*>(obj) + kNodeFlagsOffset;
*addr = static_cast<uint8_t>((*addr & ~kNodeStateMask) | value);
}
V8_INLINE static void SetEmbedderData(v8::Isolate* isolate, uint32_t slot,
void* data) {
internal::Address addr = reinterpret_cast<internal::Address>(isolate) +
kIsolateEmbedderDataOffset +
slot * kApiSystemPointerSize;
Address addr = reinterpret_cast<Address>(isolate) +
kIsolateEmbedderDataOffset + slot * kApiSystemPointerSize;
*reinterpret_cast<void**>(addr) = data;
}
V8_INLINE static void* GetEmbedderData(const v8::Isolate* isolate,
uint32_t slot) {
internal::Address addr = reinterpret_cast<internal::Address>(isolate) +
kIsolateEmbedderDataOffset +
slot * kApiSystemPointerSize;
Address addr = reinterpret_cast<Address>(isolate) +
kIsolateEmbedderDataOffset + slot * kApiSystemPointerSize;
return *reinterpret_cast<void* const*>(addr);
}
V8_INLINE static internal::Address* GetRoot(v8::Isolate* isolate, int index) {
internal::Address addr = reinterpret_cast<internal::Address>(isolate) +
kIsolateRootsOffset +
index * kApiSystemPointerSize;
return reinterpret_cast<internal::Address*>(addr);
V8_INLINE static void IncrementLongTasksStatsCounter(v8::Isolate* isolate) {
Address addr =
reinterpret_cast<Address>(isolate) + kIsolateLongTaskStatsCounterOffset;
++(*reinterpret_cast<size_t*>(addr));
}
V8_INLINE static Address* GetRootSlot(v8::Isolate* isolate, int index) {
Address addr = reinterpret_cast<Address>(isolate) + kIsolateRootsOffset +
index * kApiSystemPointerSize;
return reinterpret_cast<Address*>(addr);
}
V8_INLINE static Address GetRoot(v8::Isolate* isolate, int index) {
#if V8_STATIC_ROOTS_BOOL
Address base = *reinterpret_cast<Address*>(
reinterpret_cast<uintptr_t>(isolate) + kIsolateCageBaseOffset);
switch (index) {
#define DECOMPRESS_ROOT(name) \
case k##name##RootIndex: \
return base + StaticReadOnlyRoot::k##name;
EXPORTED_STATIC_ROOTS_PTR_LIST(DECOMPRESS_ROOT)
#undef DECOMPRESS_ROOT
default:
break;
}
#undef EXPORTED_STATIC_ROOTS_PTR_LIST
#endif // V8_STATIC_ROOTS_BOOL
return *GetRootSlot(isolate, index);
}
#ifdef V8_ENABLE_SANDBOX
V8_INLINE static Address* GetExternalPointerTableBase(v8::Isolate* isolate) {
Address addr = reinterpret_cast<Address>(isolate) +
kIsolateExternalPointerTableOffset +
kExternalPointerTableBufferOffset;
return *reinterpret_cast<Address**>(addr);
}
V8_INLINE static Address* GetSharedExternalPointerTableBase(
v8::Isolate* isolate) {
Address addr = reinterpret_cast<Address>(isolate) +
kIsolateSharedExternalPointerTableAddressOffset;
addr = *reinterpret_cast<Address*>(addr);
addr += kExternalPointerTableBufferOffset;
return *reinterpret_cast<Address**>(addr);
}
#endif
template <typename T>
V8_INLINE static T ReadRawField(internal::Address heap_object_ptr,
int offset) {
internal::Address addr = heap_object_ptr + offset - kHeapObjectTag;
V8_INLINE static T ReadRawField(Address heap_object_ptr, int offset) {
Address addr = heap_object_ptr + offset - kHeapObjectTag;
#ifdef V8_COMPRESS_POINTERS
if (sizeof(T) > kApiTaggedSize) {
// TODO(ishell, v8:8875): When pointer compression is enabled 8-byte size
@ -354,77 +847,69 @@ class Internals {
return *reinterpret_cast<const T*>(addr);
}
V8_INLINE static internal::Address ReadTaggedPointerField(
internal::Address heap_object_ptr, int offset) {
V8_INLINE static Address ReadTaggedPointerField(Address heap_object_ptr,
int offset) {
#ifdef V8_COMPRESS_POINTERS
uint32_t value = ReadRawField<uint32_t>(heap_object_ptr, offset);
internal::Address base =
GetPtrComprCageBaseFromOnHeapAddress(heap_object_ptr);
return base + static_cast<internal::Address>(static_cast<uintptr_t>(value));
Address base = GetPtrComprCageBaseFromOnHeapAddress(heap_object_ptr);
return base + static_cast<Address>(static_cast<uintptr_t>(value));
#else
return ReadRawField<internal::Address>(heap_object_ptr, offset);
return ReadRawField<Address>(heap_object_ptr, offset);
#endif
}
V8_INLINE static internal::Address ReadTaggedSignedField(
internal::Address heap_object_ptr, int offset) {
V8_INLINE static Address ReadTaggedSignedField(Address heap_object_ptr,
int offset) {
#ifdef V8_COMPRESS_POINTERS
uint32_t value = ReadRawField<uint32_t>(heap_object_ptr, offset);
return static_cast<internal::Address>(static_cast<uintptr_t>(value));
return static_cast<Address>(static_cast<uintptr_t>(value));
#else
return ReadRawField<internal::Address>(heap_object_ptr, offset);
return ReadRawField<Address>(heap_object_ptr, offset);
#endif
}
V8_INLINE static internal::Isolate* GetIsolateForHeapSandbox(
internal::Address obj) {
#ifdef V8_HEAP_SANDBOX
return internal::IsolateFromNeverReadOnlySpaceObject(obj);
V8_INLINE static v8::Isolate* GetIsolateForSandbox(Address obj) {
#ifdef V8_ENABLE_SANDBOX
return reinterpret_cast<v8::Isolate*>(
internal::IsolateFromNeverReadOnlySpaceObject(obj));
#else
// Not used in non-sandbox mode.
return nullptr;
#endif
}
V8_INLINE static Address DecodeExternalPointer(
const Isolate* isolate, ExternalPointer_t encoded_pointer,
ExternalPointerTag tag) {
#ifdef V8_HEAP_SANDBOX
return internal::DecodeExternalPointerImpl(isolate, encoded_pointer, tag);
#else
return encoded_pointer;
#endif
}
V8_INLINE static internal::Address ReadExternalPointerField(
internal::Isolate* isolate, internal::Address heap_object_ptr, int offset,
ExternalPointerTag tag) {
#ifdef V8_HEAP_SANDBOX
internal::ExternalPointer_t encoded_value =
ReadRawField<uint32_t>(heap_object_ptr, offset);
// We currently have to treat zero as nullptr in embedder slots.
return encoded_value ? DecodeExternalPointer(isolate, encoded_value, tag)
: 0;
template <ExternalPointerTag tag>
V8_INLINE static Address ReadExternalPointerField(v8::Isolate* isolate,
Address heap_object_ptr,
int offset) {
#ifdef V8_ENABLE_SANDBOX
static_assert(tag != kExternalPointerNullTag);
// See src/sandbox/external-pointer-table-inl.h. Logic duplicated here so
// it can be inlined and doesn't require an additional call.
Address* table = IsSharedExternalPointerType(tag)
? GetSharedExternalPointerTableBase(isolate)
: GetExternalPointerTableBase(isolate);
internal::ExternalPointerHandle handle =
ReadRawField<ExternalPointerHandle>(heap_object_ptr, offset);
uint32_t index = handle >> kExternalPointerIndexShift;
std::atomic<Address>* ptr =
reinterpret_cast<std::atomic<Address>*>(&table[index]);
Address entry = std::atomic_load_explicit(ptr, std::memory_order_relaxed);
return entry & ~tag;
#else
return ReadRawField<Address>(heap_object_ptr, offset);
#endif
#endif // V8_ENABLE_SANDBOX
}
#ifdef V8_COMPRESS_POINTERS
// See v8:7703 or src/ptr-compr.* for details about pointer compression.
static constexpr size_t kPtrComprCageReservationSize = size_t{1} << 32;
static constexpr size_t kPtrComprCageBaseAlignment = size_t{1} << 32;
V8_INLINE static internal::Address GetPtrComprCageBaseFromOnHeapAddress(
internal::Address addr) {
V8_INLINE static Address GetPtrComprCageBaseFromOnHeapAddress(Address addr) {
return addr & -static_cast<intptr_t>(kPtrComprCageBaseAlignment);
}
V8_INLINE static internal::Address DecompressTaggedAnyField(
internal::Address heap_object_ptr, uint32_t value) {
internal::Address base =
GetPtrComprCageBaseFromOnHeapAddress(heap_object_ptr);
return base + static_cast<internal::Address>(static_cast<uintptr_t>(value));
V8_INLINE static Address DecompressTaggedField(Address heap_object_ptr,
uint32_t value) {
Address base = GetPtrComprCageBaseFromOnHeapAddress(heap_object_ptr);
return base + static_cast<Address>(static_cast<uintptr_t>(value));
}
#endif // V8_COMPRESS_POINTERS
@ -458,6 +943,10 @@ V8_INLINE void PerformCastCheck(T* data) {
// how static casts work with std::shared_ptr.
class BackingStoreBase {};
// The maximum value in enum GarbageCollectionReason, defined in heap.h.
// This is needed for histograms sampling garbage collection reasons.
constexpr int kGarbageCollectionReasonMaxValue = 27;
} // namespace internal
} // namespace v8

File diff suppressed because it is too large


@ -0,0 +1,47 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_JSON_H_
#define INCLUDE_V8_JSON_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Value;
class String;
/**
* A JSON Parser and Stringifier.
*/
class V8_EXPORT JSON {
public:
/**
* Tries to parse the string |json_string| and returns it as value if
* successful.
*
* \param context The context in which to parse and create the value.
* \param json_string The string to parse.
* \return The corresponding value if successfully parsed.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Value> Parse(
Local<Context> context, Local<String> json_string);
/**
* Tries to stringify the JSON-serializable object |json_object| and returns
* it as string if successful.
*
* \param json_object The JSON-serializable object to stringify.
* \return The corresponding string if successfully stringified.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<String> Stringify(
Local<Context> context, Local<Value> json_object,
Local<String> gap = Local<String>());
};
} // namespace v8
#endif // INCLUDE_V8_JSON_H_


@ -0,0 +1,527 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_LOCAL_HANDLE_H_
#define INCLUDE_V8_LOCAL_HANDLE_H_
#include <stddef.h>
#include <type_traits>
#include "v8-handle-base.h" // NOLINT(build/include_directory)
namespace v8 {
template <class T>
class LocalBase;
template <class T>
class Local;
template <class F>
class MaybeLocal;
template <class T>
class Eternal;
template <class T>
class Global;
template <class T>
class NonCopyablePersistentTraits;
template <class T>
class PersistentBase;
template <class T, class M = NonCopyablePersistentTraits<T>>
class Persistent;
class TracedReferenceBase;
template <class T>
class BasicTracedReference;
template <class F>
class TracedReference;
class Boolean;
class Context;
class EscapableHandleScope;
template <class F>
class FunctionCallbackInfo;
class Isolate;
class Object;
template <class F1, class F2, class F3>
class PersistentValueMapBase;
template <class F1, class F2>
class PersistentValueVector;
class Primitive;
class Private;
template <class F>
class PropertyCallbackInfo;
template <class F>
class ReturnValue;
class String;
template <class F>
class Traced;
class Utils;
namespace debug {
class ConsoleCallArguments;
}
namespace internal {
template <typename T>
class CustomArguments;
class SamplingHeapProfiler;
} // namespace internal
namespace api_internal {
// Called when ToLocalChecked is called on an empty Local.
V8_EXPORT void ToLocalEmpty();
} // namespace api_internal
/**
* A stack-allocated class that governs a number of local handles.
* After a handle scope has been created, all local handles will be
* allocated within that handle scope until either the handle scope is
* deleted or another handle scope is created. If there is already a
* handle scope and a new one is created, all allocations will take
* place in the new handle scope until it is deleted. After that,
* new handles will again be allocated in the original handle scope.
*
* After the handle scope of a local handle has been deleted the
* garbage collector will no longer track the object stored in the
* handle and may deallocate it. The behavior of accessing a handle
* for which the handle scope has been deleted is undefined.
*/
class V8_EXPORT V8_NODISCARD HandleScope {
public:
explicit HandleScope(Isolate* isolate);
~HandleScope();
/**
* Counts the number of allocated handles.
*/
static int NumberOfHandles(Isolate* isolate);
V8_INLINE Isolate* GetIsolate() const {
return reinterpret_cast<Isolate*>(i_isolate_);
}
HandleScope(const HandleScope&) = delete;
void operator=(const HandleScope&) = delete;
static internal::Address* CreateHandleForCurrentIsolate(
internal::Address value);
protected:
V8_INLINE HandleScope() = default;
void Initialize(Isolate* isolate);
static internal::Address* CreateHandle(internal::Isolate* i_isolate,
internal::Address value);
private:
// Declaring operator new and delete as deleted is not spec compliant.
// Therefore declare them private instead to disable dynamic alloc
void* operator new(size_t size);
void* operator new[](size_t size);
void operator delete(void*, size_t);
void operator delete[](void*, size_t);
internal::Isolate* i_isolate_;
internal::Address* prev_next_;
internal::Address* prev_limit_;
// LocalBase<T>::New uses CreateHandle with an Isolate* parameter.
template <typename T>
friend class LocalBase;
// Object::GetInternalField and Context::GetEmbedderData use CreateHandle with
// a HeapObject in their shortcuts.
friend class Object;
friend class Context;
};
/**
* A base class for local handles.
* Its implementation depends on whether direct local support is enabled.
* When it is, a local handle contains a direct pointer to the referenced
* object, otherwise it contains an indirect pointer.
*/
#ifdef V8_ENABLE_DIRECT_LOCAL
template <typename T>
class LocalBase : public DirectHandleBase {
protected:
template <class F>
friend class Local;
V8_INLINE LocalBase() = default;
V8_INLINE explicit LocalBase(internal::Address ptr) : DirectHandleBase(ptr) {}
template <typename S>
V8_INLINE LocalBase(const LocalBase<S>& other) : DirectHandleBase(other) {}
V8_INLINE static LocalBase<T> New(Isolate* isolate, internal::Address value) {
return LocalBase<T>(value);
}
V8_INLINE static LocalBase<T> New(Isolate* isolate, T* that) {
return LocalBase<T>::New(isolate,
internal::ValueHelper::ValueAsAddress(that));
}
V8_INLINE static LocalBase<T> FromSlot(internal::Address* slot) {
return LocalBase<T>(*slot);
}
};
#else // !V8_ENABLE_DIRECT_LOCAL
template <typename T>
class LocalBase : public IndirectHandleBase {
protected:
template <class F>
friend class Local;
V8_INLINE LocalBase() = default;
V8_INLINE explicit LocalBase(internal::Address* location)
: IndirectHandleBase(location) {}
template <typename S>
V8_INLINE LocalBase(const LocalBase<S>& other) : IndirectHandleBase(other) {}
V8_INLINE static LocalBase<T> New(Isolate* isolate, internal::Address value) {
return LocalBase(HandleScope::CreateHandle(
reinterpret_cast<internal::Isolate*>(isolate), value));
}
V8_INLINE static LocalBase<T> New(Isolate* isolate, T* that) {
if (internal::ValueHelper::IsEmpty(that)) return LocalBase<T>();
return LocalBase<T>::New(isolate,
internal::ValueHelper::ValueAsAddress(that));
}
V8_INLINE static LocalBase<T> FromSlot(internal::Address* slot) {
return LocalBase<T>(slot);
}
};
#endif // V8_ENABLE_DIRECT_LOCAL
/**
* An object reference managed by the v8 garbage collector.
*
* All objects returned from v8 have to be tracked by the garbage collector so
* that it knows that the objects are still alive. Also, because the garbage
* collector may move objects, it is unsafe to point directly to an object.
* Instead, all objects are stored in handles which are known by the garbage
* collector and updated whenever an object moves. Handles should always be
* passed by value (except in cases like out-parameters) and they should never
* be allocated on the heap.
*
* There are two types of handles: local and persistent handles.
*
* Local handles are light-weight and transient and typically used in local
* operations. They are managed by HandleScopes. That means that a HandleScope
* must exist on the stack when they are created and that they are only valid
* inside of the HandleScope active during their creation. For passing a local
* handle to an outer HandleScope, an EscapableHandleScope and its Escape()
* method must be used.
*
* Persistent handles can be used when storing objects across several
* independent operations and have to be explicitly deallocated when they're no
* longer used.
*
* It is safe to extract the object stored in the handle by dereferencing the
* handle (for instance, to extract the Object* from a Local<Object>); the value
* will still be governed by a handle behind the scenes and the same rules apply
* to these values as to their handles.
*/
template <class T>
class Local : public LocalBase<T> {
public:
V8_INLINE Local() = default;
template <class S>
V8_INLINE Local(Local<S> that) : LocalBase<T>(that) {
/**
* This check fails when trying to convert between incompatible
* handles. For example, converting from a Local<String> to a
* Local<Number>.
*/
static_assert(std::is_base_of<T, S>::value, "type check");
}
V8_INLINE T* operator->() const { return this->template value<T>(); }
V8_INLINE T* operator*() const { return this->operator->(); }
/**
* Checks whether two handles are equal or different.
* They are equal iff they are both empty or they are both non-empty and the
* objects to which they refer are physically equal.
*
* If both handles refer to JS objects, this is the same as strict
* equality. For primitives, such as numbers or strings, a `false` return
* value does not indicate that the values aren't equal in the JavaScript
* sense. Use `Value::StrictEquals()` to check primitives for equality.
*/
template <class S>
V8_INLINE bool operator==(const Local<S>& that) const {
return internal::HandleHelper::EqualHandles(*this, that);
}
template <class S>
V8_INLINE bool operator==(const PersistentBase<S>& that) const {
return internal::HandleHelper::EqualHandles(*this, that);
}
template <class S>
V8_INLINE bool operator!=(const Local<S>& that) const {
return !operator==(that);
}
template <class S>
V8_INLINE bool operator!=(const Persistent<S>& that) const {
return !operator==(that);
}
/**
* Cast a handle to a subclass, e.g. Local<Value> to Local<Object>.
* This is only valid if the handle actually refers to a value of the
* target type.
*/
template <class S>
V8_INLINE static Local<T> Cast(Local<S> that) {
#ifdef V8_ENABLE_CHECKS
// If we're going to perform the type check then we have to check
// that the handle isn't empty before doing the checked cast.
if (that.IsEmpty()) return Local<T>();
T::Cast(that.template value<S>());
#endif
return Local<T>(LocalBase<T>(that));
}
/**
* Calling this is equivalent to Local<S>::Cast().
* In particular, this is only valid if the handle actually refers to a value
* of the target type.
*/
template <class S>
V8_INLINE Local<S> As() const {
return Local<S>::Cast(*this);
}
/**
* Create a local handle for the content of another handle.
* The referee is kept alive by the local handle even when
* the original handle is destroyed/disposed.
*/
V8_INLINE static Local<T> New(Isolate* isolate, Local<T> that) {
return New(isolate, that.template value<T>());
}
V8_INLINE static Local<T> New(Isolate* isolate,
const PersistentBase<T>& that) {
return New(isolate, that.template value<T>());
}
V8_INLINE static Local<T> New(Isolate* isolate,
const BasicTracedReference<T>& that) {
return New(isolate, that.template value<T>());
}
private:
friend class TracedReferenceBase;
friend class Utils;
template <class F>
friend class Eternal;
template <class F>
friend class Global;
template <class F>
friend class Local;
template <class F>
friend class MaybeLocal;
template <class F, class M>
friend class Persistent;
template <class F>
friend class FunctionCallbackInfo;
template <class F>
friend class PropertyCallbackInfo;
friend class String;
friend class Object;
friend class Context;
friend class Isolate;
friend class Private;
template <class F>
friend class internal::CustomArguments;
friend Local<Primitive> Undefined(Isolate* isolate);
friend Local<Primitive> Null(Isolate* isolate);
friend Local<Boolean> True(Isolate* isolate);
friend Local<Boolean> False(Isolate* isolate);
friend class HandleScope;
friend class EscapableHandleScope;
template <class F1, class F2, class F3>
friend class PersistentValueMapBase;
template <class F1, class F2>
friend class PersistentValueVector;
template <class F>
friend class ReturnValue;
template <class F>
friend class Traced;
friend class internal::SamplingHeapProfiler;
friend class internal::HandleHelper;
friend class debug::ConsoleCallArguments;
V8_INLINE explicit Local<T>(const LocalBase<T>& other)
: LocalBase<T>(other) {}
V8_INLINE static Local<T> FromSlot(internal::Address* slot) {
return Local<T>(LocalBase<T>::FromSlot(slot));
}
V8_INLINE static Local<T> New(Isolate* isolate, internal::Address value) {
return Local<T>(LocalBase<T>::New(isolate, value));
}
V8_INLINE static Local<T> New(Isolate* isolate, T* that) {
return Local<T>(LocalBase<T>::New(isolate, that));
}
// Unsafe cast, should be avoided.
template <class S>
V8_INLINE Local<S> UnsafeAs() const {
return Local<S>(LocalBase<S>(*this));
}
};
#if !defined(V8_IMMINENT_DEPRECATION_WARNINGS)
// Handle is an alias for Local for historical reasons.
template <class T>
using Handle = Local<T>;
#endif
/**
* A MaybeLocal<> is a wrapper around Local<> that enforces a check whether
* the Local<> is empty before it can be used.
*
* If an API method returns a MaybeLocal<>, the API method can potentially fail
* either because an exception is thrown, or because an exception is pending,
* e.g. because a previous API call threw an exception that hasn't been caught
* yet, or because a TerminateExecution exception was thrown. In that case, an
* empty MaybeLocal is returned.
*/
template <class T>
class MaybeLocal {
public:
V8_INLINE MaybeLocal() : local_() {}
template <class S>
V8_INLINE MaybeLocal(Local<S> that) : local_(that) {}
V8_INLINE bool IsEmpty() const { return local_.IsEmpty(); }
/**
* Converts this MaybeLocal<> to a Local<>. If this MaybeLocal<> is empty,
* |false| is returned and |out| is assigned with nullptr.
*/
template <class S>
V8_WARN_UNUSED_RESULT V8_INLINE bool ToLocal(Local<S>* out) const {
*out = local_;
return !IsEmpty();
}
/**
* Converts this MaybeLocal<> to a Local<>. If this MaybeLocal<> is empty,
* V8 will crash the process.
*/
V8_INLINE Local<T> ToLocalChecked() {
if (V8_UNLIKELY(IsEmpty())) api_internal::ToLocalEmpty();
return local_;
}
/**
* Converts this MaybeLocal<> to a Local<>, using a default value if this
* MaybeLocal<> is empty.
*/
template <class S>
V8_INLINE Local<S> FromMaybe(Local<S> default_value) const {
return IsEmpty() ? default_value : Local<S>(local_);
}
private:
Local<T> local_;
};
/**
* A HandleScope which first allocates a handle in the current scope
* which will be later filled with the escape value.
*/
class V8_EXPORT V8_NODISCARD EscapableHandleScope : public HandleScope {
public:
explicit EscapableHandleScope(Isolate* isolate);
V8_INLINE ~EscapableHandleScope() = default;
/**
* Pushes the value into the previous scope and returns a handle to it.
* Cannot be called twice.
*/
template <class T>
V8_INLINE Local<T> Escape(Local<T> value) {
#ifdef V8_ENABLE_DIRECT_LOCAL
return value;
#else
return Local<T>::FromSlot(Escape(value.slot()));
#endif
}
template <class T>
V8_INLINE MaybeLocal<T> EscapeMaybe(MaybeLocal<T> value) {
return Escape(value.FromMaybe(Local<T>()));
}
EscapableHandleScope(const EscapableHandleScope&) = delete;
void operator=(const EscapableHandleScope&) = delete;
private:
// Declaring operator new and delete as deleted is not spec compliant.
// Therefore declare them private instead to disable dynamic allocation.
void* operator new(size_t size);
void* operator new[](size_t size);
void operator delete(void*, size_t);
void operator delete[](void*, size_t);
internal::Address* Escape(internal::Address* escape_value);
internal::Address* escape_slot_;
};
/**
* A SealHandleScope acts like a handle scope in which no handle allocations
* are allowed. It can be useful for debugging handle leaks.
* Handles can be allocated within inner normal HandleScopes.
*/
class V8_EXPORT V8_NODISCARD SealHandleScope {
public:
explicit SealHandleScope(Isolate* isolate);
~SealHandleScope();
SealHandleScope(const SealHandleScope&) = delete;
void operator=(const SealHandleScope&) = delete;
private:
// Declaring operator new and delete as deleted is not spec compliant.
// Therefore declare them private instead to disable dynamic allocation.
void* operator new(size_t size);
void* operator new[](size_t size);
void operator delete(void*, size_t);
void operator delete[](void*, size_t);
internal::Isolate* const i_isolate_;
internal::Address* prev_limit_;
int prev_sealed_level_;
};
} // namespace v8
#endif // INCLUDE_V8_LOCAL_HANDLE_H_
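The indirect-handle model above (a `Local` that refers to a slot managed by the active `HandleScope`, with the scope restoring a watermark on exit) can be illustrated with a minimal standalone sketch. The names `MiniScope` and `Deref` are hypothetical illustrations, not part of the V8 API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy model of indirect handles: each "local" is an index into a slot
// array owned by the innermost scope. A moving GC (not modeled here) could
// rewrite slot contents without invalidating the handles themselves.
struct MiniScope {
  static std::vector<std::uintptr_t> slots;  // shared handle storage
  std::size_t prev_size;                     // watermark restored on exit
  MiniScope() : prev_size(slots.size()) {}
  ~MiniScope() { slots.resize(prev_size); }  // drop handles made in scope
  static std::size_t CreateHandle(std::uintptr_t value) {
    slots.push_back(value);
    return slots.size() - 1;  // index plays the role of Local's slot
  }
};
std::vector<std::uintptr_t> MiniScope::slots;

inline std::uintptr_t Deref(std::size_t handle) {
  return MiniScope::slots[handle];
}
```

Closing a scope truncates the slot array back to the watermark, which is why a real `Local` must never outlive its `HandleScope` (and why `EscapableHandleScope::Escape` exists: it copies a value into a slot reserved in the outer scope).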


@ -0,0 +1,138 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_LOCKER_H_
#define INCLUDE_V8_LOCKER_H_
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
namespace internal {
class Isolate;
} // namespace internal
class Isolate;
/**
* Multiple threads in V8 are allowed, but only one thread at a time is allowed
* to use any given V8 isolate, see the comments in the Isolate class. The
* definition of 'using a V8 isolate' includes accessing handles or holding onto
* object pointers obtained from V8 handles while in the particular V8 isolate.
* It is up to the user of V8 to ensure, perhaps with locking, that this
* constraint is not violated. In addition to any other synchronization
* mechanism that may be used, the v8::Locker and v8::Unlocker classes must be
* used to signal thread switches to V8.
*
* v8::Locker is a scoped lock object. While it's active, i.e. between its
* construction and destruction, the current thread is allowed to use the locked
* isolate. V8 guarantees that an isolate can be locked by at most one thread at
* any time. In other words, the scope of a v8::Locker is a critical section.
*
* Sample usage:
* \code
* ...
* {
* v8::Locker locker(isolate);
* v8::Isolate::Scope isolate_scope(isolate);
* ...
* // Code using V8 and isolate goes here.
* ...
* } // Destructor called here
* \endcode
*
* If you wish to stop using V8 in a thread A you can do this either by
* destroying the v8::Locker object as above or by constructing a v8::Unlocker
* object:
*
* \code
* {
* isolate->Exit();
* v8::Unlocker unlocker(isolate);
* ...
* // Code not using V8 goes here while V8 can run in another thread.
* ...
* } // Destructor called here.
* isolate->Enter();
* \endcode
*
* The Unlocker object is intended for use in a long-running callback from V8,
* where you want to release the V8 lock for other threads to use.
*
* The v8::Locker is a recursive lock, i.e. you can lock more than once in a
* given thread. This can be useful if you have code that can be called either
* from code that holds the lock or from code that does not. The Unlocker is
* not recursive so you cannot have several Unlockers on the stack at once, and
* you cannot use an Unlocker in a thread that is not inside a Locker's scope.
*
* An unlocker will unlock several lockers if it has to and reinstate the
* correct depth of locking on its destruction, e.g.:
*
* \code
* // V8 not locked.
* {
* v8::Locker locker(isolate);
* Isolate::Scope isolate_scope(isolate);
* // V8 locked.
* {
* v8::Locker another_locker(isolate);
* // V8 still locked (2 levels).
* {
* isolate->Exit();
* v8::Unlocker unlocker(isolate);
* // V8 not locked.
* }
* isolate->Enter();
* // V8 locked again (2 levels).
* }
* // V8 still locked (1 level).
* }
* // V8 now no longer locked.
* \endcode
*/
class V8_EXPORT Unlocker {
public:
/**
* Initialize Unlocker for a given Isolate.
*/
V8_INLINE explicit Unlocker(Isolate* isolate) { Initialize(isolate); }
~Unlocker();
private:
void Initialize(Isolate* isolate);
internal::Isolate* isolate_;
};
class V8_EXPORT Locker {
public:
/**
* Initialize Locker for a given Isolate.
*/
V8_INLINE explicit Locker(Isolate* isolate) { Initialize(isolate); }
~Locker();
/**
* Returns whether or not the locker for a given isolate is locked by the
* current thread.
*/
static bool IsLocked(Isolate* isolate);
// Disallow copying and assigning.
Locker(const Locker&) = delete;
void operator=(const Locker&) = delete;
private:
void Initialize(Isolate* isolate);
bool has_lock_;
bool top_level_;
internal::Isolate* isolate_;
};
} // namespace v8
#endif // INCLUDE_V8_LOCKER_H_
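The depth-restoring behavior described above (a recursive Locker, and an Unlocker that releases every level at once and reinstates the same nesting depth on destruction) can be sketched standalone. `ToyIsolate`, `ToyLocker`, and `ToyUnlocker` are hypothetical names and this is not V8's actual implementation:

```cpp
#include <cassert>

// Toy model of a recursive lock plus an "unlocker" that fully releases it
// and reinstates the saved nesting depth when it goes out of scope.
struct ToyIsolate {
  int lock_depth = 0;  // 0 means unlocked
};

struct ToyLocker {
  ToyIsolate* iso;
  explicit ToyLocker(ToyIsolate* i) : iso(i) { ++iso->lock_depth; }
  ~ToyLocker() { --iso->lock_depth; }
};

struct ToyUnlocker {
  ToyIsolate* iso;
  int saved_depth;
  explicit ToyUnlocker(ToyIsolate* i) : iso(i), saved_depth(i->lock_depth) {
    iso->lock_depth = 0;  // release all levels at once
  }
  ~ToyUnlocker() { iso->lock_depth = saved_depth; }  // restore full depth
};
```

This mirrors the nested example in the comment block above: two stacked lockers give depth 2, an inner unlocker drops it to 0, and its destructor restores depth 2.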


@ -0,0 +1,160 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_MAYBE_H_
#define INCLUDE_V8_MAYBE_H_
#include <type_traits>
#include <utility>
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
namespace api_internal {
// Called when ToChecked is called on an empty Maybe.
V8_EXPORT void FromJustIsNothing();
} // namespace api_internal
/**
* A simple Maybe type, representing an object which may or may not have a
* value, see https://hackage.haskell.org/package/base/docs/Data-Maybe.html.
*
* If an API method returns a Maybe<>, the API method can potentially fail
* either because an exception is thrown, or because an exception is pending,
* e.g. because a previous API call threw an exception that hasn't been caught
* yet, or because a TerminateExecution exception was thrown. In that case, a
* "Nothing" value is returned.
*/
template <class T>
class Maybe {
public:
V8_INLINE bool IsNothing() const { return !has_value_; }
V8_INLINE bool IsJust() const { return has_value_; }
/**
* An alias for |FromJust|. Will crash if the Maybe<> is nothing.
*/
V8_INLINE T ToChecked() const { return FromJust(); }
/**
* Short-hand for ToChecked(), which doesn't return a value. To be used
* where the actual value of the Maybe is not needed, as in Object::Set.
*/
V8_INLINE void Check() const {
if (V8_UNLIKELY(!IsJust())) api_internal::FromJustIsNothing();
}
/**
* Converts this Maybe<> to a value of type T. If this Maybe<> is
* nothing (empty), |false| is returned and |out| is left untouched.
*/
V8_WARN_UNUSED_RESULT V8_INLINE bool To(T* out) const {
if (V8_LIKELY(IsJust())) *out = value_;
return IsJust();
}
/**
* Converts this Maybe<> to a value of type T. If this Maybe<> is
* nothing (empty), V8 will crash the process.
*/
V8_INLINE T FromJust() const& {
if (V8_UNLIKELY(!IsJust())) api_internal::FromJustIsNothing();
return value_;
}
/**
* Converts this Maybe<> to a value of type T. If this Maybe<> is
* nothing (empty), V8 will crash the process.
*/
V8_INLINE T FromJust() && {
if (V8_UNLIKELY(!IsJust())) api_internal::FromJustIsNothing();
return std::move(value_);
}
/**
* Converts this Maybe<> to a value of type T, using a default value if this
* Maybe<> is nothing (empty).
*/
V8_INLINE T FromMaybe(const T& default_value) const {
return has_value_ ? value_ : default_value;
}
V8_INLINE bool operator==(const Maybe& other) const {
return (IsJust() == other.IsJust()) &&
(!IsJust() || FromJust() == other.FromJust());
}
V8_INLINE bool operator!=(const Maybe& other) const {
return !operator==(other);
}
private:
Maybe() : has_value_(false) {}
explicit Maybe(const T& t) : has_value_(true), value_(t) {}
explicit Maybe(T&& t) : has_value_(true), value_(std::move(t)) {}
bool has_value_;
T value_;
template <class U>
friend Maybe<U> Nothing();
template <class U>
friend Maybe<U> Just(const U& u);
template <class U, std::enable_if_t<!std::is_lvalue_reference_v<U>>*>
friend Maybe<U> Just(U&& u);
};
template <class T>
inline Maybe<T> Nothing() {
return Maybe<T>();
}
template <class T>
inline Maybe<T> Just(const T& t) {
return Maybe<T>(t);
}
// Don't use forwarding references here but instead use two overloads.
// Forwarding references only work when type deduction takes place, which is not
// the case for callsites such as Just<Type>(t).
template <class T, std::enable_if_t<!std::is_lvalue_reference_v<T>>* = nullptr>
inline Maybe<T> Just(T&& t) {
return Maybe<T>(std::move(t));
}
// A template specialization of Maybe<T> for the case of T = void.
template <>
class Maybe<void> {
public:
V8_INLINE bool IsNothing() const { return !is_valid_; }
V8_INLINE bool IsJust() const { return is_valid_; }
V8_INLINE bool operator==(const Maybe& other) const {
return IsJust() == other.IsJust();
}
V8_INLINE bool operator!=(const Maybe& other) const {
return !operator==(other);
}
private:
struct JustTag {};
Maybe() : is_valid_(false) {}
explicit Maybe(JustTag) : is_valid_(true) {}
bool is_valid_;
template <class U>
friend Maybe<U> Nothing();
friend Maybe<void> JustVoid();
};
inline Maybe<void> JustVoid() { return Maybe<void>(Maybe<void>::JustTag()); }
} // namespace v8
#endif // INCLUDE_V8_MAYBE_H_
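The Maybe contract above (check `IsJust()` before extracting, use `To()` for fallible extraction, or `FromMaybe()` for a fallback) can be exercised with a minimal standalone analogue. `ToyMaybe` is a hypothetical sketch, not the real `v8::Maybe`:

```cpp
#include <cassert>

// Minimal analogue of v8::Maybe<T>: either Nothing or Just a value.
template <class T>
class ToyMaybe {
 public:
  static ToyMaybe Nothing() { return ToyMaybe(); }
  static ToyMaybe Just(const T& t) { return ToyMaybe(t); }
  bool IsJust() const { return has_value_; }
  // Mirrors Maybe<T>::To: writes only when a value is present.
  bool To(T* out) const {
    if (has_value_) *out = value_;
    return has_value_;
  }
  // Mirrors Maybe<T>::FromMaybe: fallback instead of crashing.
  T FromMaybe(const T& fallback) const {
    return has_value_ ? value_ : fallback;
  }
 private:
  ToyMaybe() : has_value_(false), value_() {}
  explicit ToyMaybe(const T& t) : has_value_(true), value_(t) {}
  bool has_value_;
  T value_;
};
```

The real `FromJust()`/`ToChecked()` differ in that they crash the process on Nothing, which is why API callers are expected to prefer `To()` or `FromMaybe()` in code that must tolerate pending exceptions.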


@ -0,0 +1,43 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_MEMORY_SPAN_H_
#define INCLUDE_V8_MEMORY_SPAN_H_
#include <stddef.h>
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
/**
* Points to an unowned contiguous buffer holding a known number of elements.
*
* This is similar to std::span (standardized in C++20), but does not
* require advanced C++ support. In the (far) future, this may be replaced with
* or aliased to std::span.
*
* To facilitate future migration, this class exposes a subset of the interface
* implemented by std::span.
*/
template <typename T>
class V8_EXPORT MemorySpan {
public:
/** The default constructor creates an empty span. */
constexpr MemorySpan() = default;
constexpr MemorySpan(T* data, size_t size) : data_(data), size_(size) {}
/** Returns a pointer to the beginning of the buffer. */
constexpr T* data() const { return data_; }
/** Returns the number of elements that the buffer holds. */
constexpr size_t size() const { return size_; }
private:
T* data_ = nullptr;
size_t size_ = 0;
};
} // namespace v8
#endif // INCLUDE_V8_MEMORY_SPAN_H_
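A MemorySpan-style view is just a pointer/length pair over memory it does not own. The sketch below is a standalone copy of that shape with one consumer function; `ToySpan` and `Sum` are hypothetical names for illustration:

```cpp
#include <cassert>
#include <cstddef>

// Standalone copy of the MemorySpan shape: an unowned (pointer, size) view.
template <typename T>
class ToySpan {
 public:
  constexpr ToySpan() = default;
  constexpr ToySpan(T* data, std::size_t size) : data_(data), size_(size) {}
  constexpr T* data() const { return data_; }
  constexpr std::size_t size() const { return size_; }
 private:
  T* data_ = nullptr;
  std::size_t size_ = 0;
};

// Consumes a span without copying or owning the underlying buffer.
inline int Sum(ToySpan<const int> s) {
  int total = 0;
  for (std::size_t i = 0; i < s.size(); ++i) total += s.data()[i];
  return total;
}
```

Because the view is unowned, the caller must keep the backing buffer alive for as long as the span is used, exactly as with `std::span`.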


@ -0,0 +1,214 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_MESSAGE_H_
#define INCLUDE_V8_MESSAGE_H_
#include <stdio.h>
#include <iosfwd>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-maybe.h" // NOLINT(build/include_directory)
#include "v8-primitive.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Integer;
class PrimitiveArray;
class StackTrace;
class String;
class Value;
/**
* The optional attributes of ScriptOrigin.
*/
class ScriptOriginOptions {
public:
V8_INLINE ScriptOriginOptions(bool is_shared_cross_origin = false,
bool is_opaque = false, bool is_wasm = false,
bool is_module = false)
: flags_((is_shared_cross_origin ? kIsSharedCrossOrigin : 0) |
(is_wasm ? kIsWasm : 0) | (is_opaque ? kIsOpaque : 0) |
(is_module ? kIsModule : 0)) {}
V8_INLINE ScriptOriginOptions(int flags)
: flags_(flags &
(kIsSharedCrossOrigin | kIsOpaque | kIsWasm | kIsModule)) {}
bool IsSharedCrossOrigin() const {
return (flags_ & kIsSharedCrossOrigin) != 0;
}
bool IsOpaque() const { return (flags_ & kIsOpaque) != 0; }
bool IsWasm() const { return (flags_ & kIsWasm) != 0; }
bool IsModule() const { return (flags_ & kIsModule) != 0; }
int Flags() const { return flags_; }
private:
enum {
kIsSharedCrossOrigin = 1,
kIsOpaque = 1 << 1,
kIsWasm = 1 << 2,
kIsModule = 1 << 3
};
const int flags_;
};
/**
* The origin, within a file, of a script.
*/
class V8_EXPORT ScriptOrigin {
public:
V8_INLINE ScriptOrigin(Isolate* isolate, Local<Value> resource_name,
int resource_line_offset = 0,
int resource_column_offset = 0,
bool resource_is_shared_cross_origin = false,
int script_id = -1,
Local<Value> source_map_url = Local<Value>(),
bool resource_is_opaque = false, bool is_wasm = false,
bool is_module = false,
Local<Data> host_defined_options = Local<Data>())
: v8_isolate_(isolate),
resource_name_(resource_name),
resource_line_offset_(resource_line_offset),
resource_column_offset_(resource_column_offset),
options_(resource_is_shared_cross_origin, resource_is_opaque, is_wasm,
is_module),
script_id_(script_id),
source_map_url_(source_map_url),
host_defined_options_(host_defined_options) {
VerifyHostDefinedOptions();
}
V8_INLINE Local<Value> ResourceName() const;
V8_INLINE int LineOffset() const;
V8_INLINE int ColumnOffset() const;
V8_INLINE int ScriptId() const;
V8_INLINE Local<Value> SourceMapUrl() const;
V8_INLINE Local<Data> GetHostDefinedOptions() const;
V8_INLINE ScriptOriginOptions Options() const { return options_; }
private:
void VerifyHostDefinedOptions() const;
Isolate* v8_isolate_;
Local<Value> resource_name_;
int resource_line_offset_;
int resource_column_offset_;
ScriptOriginOptions options_;
int script_id_;
Local<Value> source_map_url_;
Local<Data> host_defined_options_;
};
/**
* An error message.
*/
class V8_EXPORT Message {
public:
Local<String> Get() const;
/**
* Return the isolate to which the Message belongs.
*/
Isolate* GetIsolate() const;
V8_WARN_UNUSED_RESULT MaybeLocal<String> GetSource(
Local<Context> context) const;
V8_WARN_UNUSED_RESULT MaybeLocal<String> GetSourceLine(
Local<Context> context) const;
/**
* Returns the origin for the script from where the function causing the
* error originates.
*/
ScriptOrigin GetScriptOrigin() const;
/**
* Returns the resource name for the script from where the function causing
* the error originates.
*/
Local<Value> GetScriptResourceName() const;
/**
* Exception stack trace. By default stack traces are not captured for
* uncaught exceptions. SetCaptureStackTraceForUncaughtExceptions allows
* to change this option.
*/
Local<StackTrace> GetStackTrace() const;
/**
* Returns the number, 1-based, of the line where the error occurred.
*/
V8_WARN_UNUSED_RESULT Maybe<int> GetLineNumber(Local<Context> context) const;
/**
* Returns the index within the script of the first character where
* the error occurred.
*/
int GetStartPosition() const;
/**
* Returns the index within the script of the last character where
* the error occurred.
*/
int GetEndPosition() const;
/**
* Returns the Wasm function index where the error occurred. Returns -1 if
* message is not from a Wasm script.
*/
int GetWasmFunctionIndex() const;
/**
* Returns the error level of the message.
*/
int ErrorLevel() const;
/**
* Returns the index within the line of the first character where
* the error occurred.
*/
int GetStartColumn() const;
V8_WARN_UNUSED_RESULT Maybe<int> GetStartColumn(Local<Context> context) const;
/**
* Returns the index within the line of the last character where
* the error occurred.
*/
int GetEndColumn() const;
V8_WARN_UNUSED_RESULT Maybe<int> GetEndColumn(Local<Context> context) const;
/**
* Passes on the value set by the embedder when it fed the script from which
* this Message was generated to V8.
*/
bool IsSharedCrossOrigin() const;
bool IsOpaque() const;
static void PrintCurrentStackTrace(Isolate* isolate, std::ostream& out);
static const int kNoLineNumberInfo = 0;
static const int kNoColumnInfo = 0;
static const int kNoScriptIdInfo = 0;
static const int kNoWasmFunctionIndexInfo = -1;
};
Local<Value> ScriptOrigin::ResourceName() const { return resource_name_; }
Local<Data> ScriptOrigin::GetHostDefinedOptions() const {
return host_defined_options_;
}
int ScriptOrigin::LineOffset() const { return resource_line_offset_; }
int ScriptOrigin::ColumnOffset() const { return resource_column_offset_; }
int ScriptOrigin::ScriptId() const { return script_id_; }
Local<Value> ScriptOrigin::SourceMapUrl() const { return source_map_url_; }
} // namespace v8
#endif // INCLUDE_V8_MESSAGE_H_
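ScriptOriginOptions above packs its four booleans into one `int`, one bit per flag, and its `int` constructor masks out any unknown bits. The same packing can be sketched standalone (hypothetical names, mirroring the enum in the class):

```cpp
#include <cassert>

// Bit-flag packing in the style of ScriptOriginOptions.
enum ToyOriginFlags {
  kSharedCrossOrigin = 1 << 0,
  kOpaque            = 1 << 1,
  kWasm              = 1 << 2,
  kModule            = 1 << 3,
};

inline int PackOrigin(bool shared, bool opaque, bool wasm, bool module) {
  return (shared ? kSharedCrossOrigin : 0) | (opaque ? kOpaque : 0) |
         (wasm ? kWasm : 0) | (module ? kModule : 0);
}

// Masking on unpack discards bits outside the known flag set, matching the
// ScriptOriginOptions(int flags) constructor.
inline int MaskOrigin(int flags) {
  return flags & (kSharedCrossOrigin | kOpaque | kWasm | kModule);
}

inline bool IsModule(int flags) { return (flags & kModule) != 0; }
```

Testing a flag is a single AND, and adding a new option only consumes the next free bit, which is why the class stores one `const int` rather than four booleans.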


@ -5,12 +5,24 @@
#ifndef V8_METRICS_H_
#define V8_METRICS_H_
#include "v8.h" // NOLINT(build/include_directory)
#include <stddef.h>
#include <stdint.h>
#include <vector>
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Isolate;
namespace metrics {
struct GarbageCollectionPhases {
int64_t total_wall_clock_duration_in_us = -1;
int64_t compact_wall_clock_duration_in_us = -1;
int64_t mark_wall_clock_duration_in_us = -1;
int64_t sweep_wall_clock_duration_in_us = -1;
@ -24,6 +36,7 @@ struct GarbageCollectionSizes {
};
struct GarbageCollectionFullCycle {
int reason = -1;
GarbageCollectionPhases total;
GarbageCollectionPhases total_cpp;
GarbageCollectionPhases main_thread;
@ -36,12 +49,12 @@ struct GarbageCollectionFullCycle {
GarbageCollectionSizes objects_cpp;
GarbageCollectionSizes memory;
GarbageCollectionSizes memory_cpp;
double collection_rate_in_percent;
double collection_rate_cpp_in_percent;
double efficiency_in_bytes_per_us;
double efficiency_cpp_in_bytes_per_us;
double main_thread_efficiency_in_bytes_per_us;
double main_thread_efficiency_cpp_in_bytes_per_us;
double collection_rate_in_percent = -1.0;
double collection_rate_cpp_in_percent = -1.0;
double efficiency_in_bytes_per_us = -1.0;
double efficiency_cpp_in_bytes_per_us = -1.0;
double main_thread_efficiency_in_bytes_per_us = -1.0;
double main_thread_efficiency_cpp_in_bytes_per_us = -1.0;
};
struct GarbageCollectionFullMainThreadIncrementalMark {
@ -54,15 +67,47 @@ struct GarbageCollectionFullMainThreadIncrementalSweep {
int64_t cpp_wall_clock_duration_in_us = -1;
};
template <typename EventType>
struct GarbageCollectionBatchedEvents {
std::vector<EventType> events;
};
using GarbageCollectionFullMainThreadBatchedIncrementalMark =
GarbageCollectionBatchedEvents<
GarbageCollectionFullMainThreadIncrementalMark>;
using GarbageCollectionFullMainThreadBatchedIncrementalSweep =
GarbageCollectionBatchedEvents<
GarbageCollectionFullMainThreadIncrementalSweep>;
struct GarbageCollectionYoungCycle {
int reason = -1;
int64_t total_wall_clock_duration_in_us = -1;
int64_t main_thread_wall_clock_duration_in_us = -1;
double collection_rate_in_percent;
double efficiency_in_bytes_per_us;
double main_thread_efficiency_in_bytes_per_us;
double collection_rate_in_percent = -1.0;
double efficiency_in_bytes_per_us = -1.0;
double main_thread_efficiency_in_bytes_per_us = -1.0;
#if defined(CPPGC_YOUNG_GENERATION)
GarbageCollectionPhases total_cpp;
GarbageCollectionSizes objects_cpp;
GarbageCollectionSizes memory_cpp;
double collection_rate_cpp_in_percent = -1.0;
double efficiency_cpp_in_bytes_per_us = -1.0;
double main_thread_efficiency_cpp_in_bytes_per_us = -1.0;
#endif // defined(CPPGC_YOUNG_GENERATION)
};
struct WasmModuleDecoded {
WasmModuleDecoded() = default;
WasmModuleDecoded(bool async, bool streamed, bool success,
size_t module_size_in_bytes, size_t function_count,
int64_t wall_clock_duration_in_us)
: async(async),
streamed(streamed),
success(success),
module_size_in_bytes(module_size_in_bytes),
function_count(function_count),
wall_clock_duration_in_us(wall_clock_duration_in_us) {}
bool async = false;
bool streamed = false;
bool success = false;
@ -72,6 +117,22 @@ struct WasmModuleDecoded {
};
struct WasmModuleCompiled {
WasmModuleCompiled() = default;
WasmModuleCompiled(bool async, bool streamed, bool cached, bool deserialized,
bool lazy, bool success, size_t code_size_in_bytes,
size_t liftoff_bailout_count,
int64_t wall_clock_duration_in_us)
: async(async),
streamed(streamed),
cached(cached),
deserialized(deserialized),
lazy(lazy),
success(success),
code_size_in_bytes(code_size_in_bytes),
liftoff_bailout_count(liftoff_bailout_count),
wall_clock_duration_in_us(wall_clock_duration_in_us) {}
bool async = false;
bool streamed = false;
bool cached = false;
@ -90,28 +151,10 @@ struct WasmModuleInstantiated {
int64_t wall_clock_duration_in_us = -1;
};
struct WasmModuleTieredUp {
bool lazy = false;
size_t code_size_in_bytes = 0;
int64_t wall_clock_duration_in_us = -1;
};
struct WasmModulesPerIsolate {
size_t count = 0;
};
#define V8_MAIN_THREAD_METRICS_EVENTS(V) \
V(GarbageCollectionFullCycle) \
V(GarbageCollectionFullMainThreadIncrementalMark) \
V(GarbageCollectionFullMainThreadIncrementalSweep) \
V(GarbageCollectionYoungCycle) \
V(WasmModuleDecoded) \
V(WasmModuleCompiled) \
V(WasmModuleInstantiated) \
V(WasmModuleTieredUp)
#define V8_THREAD_SAFE_METRICS_EVENTS(V) V(WasmModulesPerIsolate)
/**
* This class serves as a base class for recording event-based metrics in V8.
* There are two kinds of metrics, those which are expected to be thread-safe and
@ -121,19 +164,6 @@ struct WasmModulesPerIsolate {
* background thread, it will be delayed and executed by the foreground task
* runner.
*
* The thread-safe events are listed in the V8_THREAD_SAFE_METRICS_EVENTS
* macro above while the main thread event are listed in
* V8_MAIN_THREAD_METRICS_EVENTS above. For the former, a virtual method
* AddMainThreadEvent(const E& event, v8::Context::Token token) will be
* generated and for the latter AddThreadSafeEvent(const E& event).
*
* Thread-safe events are not allowed to access the context and therefore do
* not carry a context ID with them. These IDs can be generated using
* Recorder::GetContextId() and the ID will be valid throughout the lifetime
* of the isolate. It is not guaranteed that the ID will still resolve to
* a valid context using Recorder::GetContext() at the time the metric is
* recorded. In this case, an empty handle will be returned.
*
* The embedder is expected to call v8::Isolate::SetMetricsRecorder()
* providing its implementation and have the virtual methods overridden
* for the events it cares about.
@ -164,14 +194,30 @@ class V8_EXPORT Recorder {
virtual ~Recorder() = default;
// Main thread events. Those are only triggered on the main thread, and hence
// can access the context.
#define ADD_MAIN_THREAD_EVENT(E) \
virtual void AddMainThreadEvent(const E& event, ContextId context_id) {}
V8_MAIN_THREAD_METRICS_EVENTS(ADD_MAIN_THREAD_EVENT)
virtual void AddMainThreadEvent(const E&, ContextId) {}
ADD_MAIN_THREAD_EVENT(GarbageCollectionFullCycle)
ADD_MAIN_THREAD_EVENT(GarbageCollectionFullMainThreadIncrementalMark)
ADD_MAIN_THREAD_EVENT(GarbageCollectionFullMainThreadBatchedIncrementalMark)
ADD_MAIN_THREAD_EVENT(GarbageCollectionFullMainThreadIncrementalSweep)
ADD_MAIN_THREAD_EVENT(GarbageCollectionFullMainThreadBatchedIncrementalSweep)
ADD_MAIN_THREAD_EVENT(GarbageCollectionYoungCycle)
ADD_MAIN_THREAD_EVENT(WasmModuleDecoded)
ADD_MAIN_THREAD_EVENT(WasmModuleCompiled)
ADD_MAIN_THREAD_EVENT(WasmModuleInstantiated)
#undef ADD_MAIN_THREAD_EVENT
// Thread-safe events are not allowed to access the context and therefore do
// not carry a context ID with them. These IDs can be generated using
// Recorder::GetContextId() and the ID will be valid throughout the lifetime
// of the isolate. It is not guaranteed that the ID will still resolve to
// a valid context using Recorder::GetContext() at the time the metric is
// recorded. In this case, an empty handle will be returned.
#define ADD_THREAD_SAFE_EVENT(E) \
  virtual void AddThreadSafeEvent(const E&) {}
ADD_THREAD_SAFE_EVENT(WasmModulesPerIsolate)
#undef ADD_THREAD_SAFE_EVENT
virtual void NotifyIsolateDisposal() {}
@ -183,6 +229,34 @@ class V8_EXPORT Recorder {
static ContextId GetContextId(Local<Context> context);
};
/**
* Experimental API intended for the LongTasks UKM (crbug.com/1173527).
* The Reset() method should be called at the start of a potential
* long task. The Get() method returns durations of V8 work that
* happened during the task.
*
* This API is experimental and may be removed/changed in the future.
*/
struct V8_EXPORT LongTaskStats {
/**
* Resets durations of V8 work for the new task.
*/
V8_INLINE static void Reset(Isolate* isolate) {
v8::internal::Internals::IncrementLongTasksStatsCounter(isolate);
}
/**
* Returns durations of V8 work that happened since the last Reset().
*/
static LongTaskStats Get(Isolate* isolate);
int64_t gc_full_atomic_wall_clock_duration_us = 0;
int64_t gc_full_incremental_wall_clock_duration_us = 0;
int64_t gc_young_wall_clock_duration_us = 0;
// Only collected with --slow-histograms
int64_t v8_execute_us = 0;
};
} // namespace metrics
} // namespace v8


@ -0,0 +1,157 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_MICROTASKS_QUEUE_H_
#define INCLUDE_V8_MICROTASKS_QUEUE_H_
#include <stddef.h>
#include <memory>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-microtask.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Function;
namespace internal {
class Isolate;
class MicrotaskQueue;
} // namespace internal
/**
* Represents the microtask queue, where microtasks are stored and processed.
* https://html.spec.whatwg.org/multipage/webappapis.html#microtask-queue
* https://html.spec.whatwg.org/multipage/webappapis.html#enqueuejob(queuename,-job,-arguments)
* https://html.spec.whatwg.org/multipage/webappapis.html#perform-a-microtask-checkpoint
*
* A MicrotaskQueue instance may be associated to multiple Contexts by passing
* it to Context::New(), and they can be detached by Context::DetachGlobal().
* The embedder must keep the MicrotaskQueue instance alive until all associated
* Contexts are gone or detached.
*
* Use the same instance of MicrotaskQueue for all Contexts that may access each
* other synchronously. E.g. for Web embedding, use the same instance for all
* origins that share the same URL scheme and eTLD+1.
*/
class V8_EXPORT MicrotaskQueue {
public:
/**
* Creates an empty MicrotaskQueue instance.
*/
static std::unique_ptr<MicrotaskQueue> New(
Isolate* isolate, MicrotasksPolicy policy = MicrotasksPolicy::kAuto);
virtual ~MicrotaskQueue() = default;
/**
* Enqueues the callback to the queue.
*/
virtual void EnqueueMicrotask(Isolate* isolate,
Local<Function> microtask) = 0;
/**
* Enqueues the callback to the queue.
*/
virtual void EnqueueMicrotask(v8::Isolate* isolate,
MicrotaskCallback callback,
void* data = nullptr) = 0;
/**
* Adds a callback to notify the embedder after microtasks were run. The
* callback is triggered by explicit RunMicrotasks call or automatic
* microtasks execution (see Isolate::SetMicrotasksPolicy).
*
   * The callback will trigger even if an attempt was made to run microtasks
   * but the queue was empty and no microtask was actually executed.
*
* Executing scripts inside the callback will not re-trigger microtasks and
* the callback.
*/
virtual void AddMicrotasksCompletedCallback(
MicrotasksCompletedCallbackWithData callback, void* data = nullptr) = 0;
/**
* Removes callback that was installed by AddMicrotasksCompletedCallback.
*/
virtual void RemoveMicrotasksCompletedCallback(
MicrotasksCompletedCallbackWithData callback, void* data = nullptr) = 0;
/**
* Runs microtasks if no microtask is running on this MicrotaskQueue instance.
*/
virtual void PerformCheckpoint(Isolate* isolate) = 0;
/**
* Returns true if a microtask is running on this MicrotaskQueue instance.
*/
virtual bool IsRunningMicrotasks() const = 0;
/**
* Returns the current depth of nested MicrotasksScope that has
* kRunMicrotasks.
*/
virtual int GetMicrotasksScopeDepth() const = 0;
MicrotaskQueue(const MicrotaskQueue&) = delete;
MicrotaskQueue& operator=(const MicrotaskQueue&) = delete;
private:
friend class internal::MicrotaskQueue;
MicrotaskQueue() = default;
};
/**
* This scope is used to control microtasks when MicrotasksPolicy::kScoped
* is used on Isolate. In this mode every non-primitive call to V8 should be
* done inside some MicrotasksScope.
* Microtasks are executed when topmost MicrotasksScope marked as kRunMicrotasks
* exits.
* kDoNotRunMicrotasks should be used to annotate calls not intended to trigger
* microtasks.
*/
class V8_EXPORT V8_NODISCARD MicrotasksScope {
public:
enum Type { kRunMicrotasks, kDoNotRunMicrotasks };
V8_DEPRECATE_SOON(
"May be incorrect if context was created with non-default microtask "
"queue")
MicrotasksScope(Isolate* isolate, Type type);
MicrotasksScope(Local<Context> context, Type type);
MicrotasksScope(Isolate* isolate, MicrotaskQueue* microtask_queue, Type type);
~MicrotasksScope();
/**
* Runs microtasks if no kRunMicrotasks scope is currently active.
*/
static void PerformCheckpoint(Isolate* isolate);
/**
* Returns current depth of nested kRunMicrotasks scopes.
*/
static int GetCurrentDepth(Isolate* isolate);
/**
* Returns true while microtasks are being executed.
*/
static bool IsRunningMicrotasks(Isolate* isolate);
// Prevent copying.
MicrotasksScope(const MicrotasksScope&) = delete;
MicrotasksScope& operator=(const MicrotasksScope&) = delete;
private:
internal::Isolate* const i_isolate_;
internal::MicrotaskQueue* const microtask_queue_;
bool run_;
};
} // namespace v8
#endif // INCLUDE_V8_MICROTASKS_QUEUE_H_


@ -0,0 +1,28 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_MICROTASK_H_
#define INCLUDE_V8_MICROTASK_H_
namespace v8 {
class Isolate;
// --- Microtasks Callbacks ---
using MicrotasksCompletedCallbackWithData = void (*)(Isolate*, void*);
using MicrotaskCallback = void (*)(void* data);
/**
* Policy for running microtasks:
* - explicit: microtasks are invoked with the
* Isolate::PerformMicrotaskCheckpoint() method;
* - scoped: microtasks invocation is controlled by MicrotasksScope objects;
* - auto: microtasks are invoked when the script call depth decrements
* to zero.
*/
enum class MicrotasksPolicy { kExplicit, kScoped, kAuto };
} // namespace v8
#endif // INCLUDE_V8_MICROTASK_H_


@ -0,0 +1,797 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_OBJECT_H_
#define INCLUDE_V8_OBJECT_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-maybe.h" // NOLINT(build/include_directory)
#include "v8-persistent-handle.h" // NOLINT(build/include_directory)
#include "v8-primitive.h" // NOLINT(build/include_directory)
#include "v8-traced-handle.h" // NOLINT(build/include_directory)
#include "v8-value.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Array;
class Function;
class FunctionTemplate;
template <typename T>
class PropertyCallbackInfo;
/**
* A private symbol
*
* This is an experimental feature. Use at your own risk.
*/
class V8_EXPORT Private : public Data {
public:
/**
* Returns the print name string of the private symbol, or undefined if none.
*/
Local<Value> Name() const;
/**
* Create a private symbol. If name is not empty, it will be the description.
*/
static Local<Private> New(Isolate* isolate,
Local<String> name = Local<String>());
/**
* Retrieve a global private symbol. If a symbol with this name has not
* been retrieved in the same isolate before, it is created.
* Note that private symbols created this way are never collected, so
* they should only be used for statically fixed properties.
* Also, there is only one global name space for the names used as keys.
* To minimize the potential for clashes, use qualified names as keys,
* e.g., "Class#property".
*/
static Local<Private> ForApi(Isolate* isolate, Local<String> name);
V8_INLINE static Private* Cast(Data* data);
private:
Private();
static void CheckCast(Data* that);
};
/**
* An instance of a Property Descriptor, see Ecma-262 6.2.4.
*
* Properties in a descriptor are present or absent. If you do not set
* `enumerable`, `configurable`, and `writable`, they are absent. If `value`,
 * `get`, or `set` should be absent but must be passed to a constructor, use
 * empty handles.
*
* Accessors `get` and `set` must be callable or undefined if they are present.
*
* \note Only query properties if they are present, i.e., call `x()` only if
* `has_x()` returns true.
*
* \code
* // var desc = {writable: false}
 * v8::PropertyDescriptor d(Local<Value>(), false);
* d.value(); // error, value not set
* if (d.has_writable()) {
* d.writable(); // false
* }
*
* // var desc = {value: undefined}
* v8::PropertyDescriptor d(v8::Undefined(isolate));
*
* // var desc = {get: undefined}
 * v8::PropertyDescriptor d(v8::Undefined(isolate), Local<Value>());
* \endcode
*/
class V8_EXPORT PropertyDescriptor {
public:
// GenericDescriptor
PropertyDescriptor();
// DataDescriptor
explicit PropertyDescriptor(Local<Value> value);
// DataDescriptor with writable property
PropertyDescriptor(Local<Value> value, bool writable);
// AccessorDescriptor
PropertyDescriptor(Local<Value> get, Local<Value> set);
~PropertyDescriptor();
Local<Value> value() const;
bool has_value() const;
Local<Value> get() const;
bool has_get() const;
Local<Value> set() const;
bool has_set() const;
void set_enumerable(bool enumerable);
bool enumerable() const;
bool has_enumerable() const;
void set_configurable(bool configurable);
bool configurable() const;
bool has_configurable() const;
bool writable() const;
bool has_writable() const;
struct PrivateData;
PrivateData* get_private() const { return private_; }
PropertyDescriptor(const PropertyDescriptor&) = delete;
void operator=(const PropertyDescriptor&) = delete;
private:
PrivateData* private_;
};
/**
* PropertyAttribute.
*/
enum PropertyAttribute {
/** None. **/
None = 0,
/** ReadOnly, i.e., not writable. **/
ReadOnly = 1 << 0,
/** DontEnum, i.e., not enumerable. **/
DontEnum = 1 << 1,
/** DontDelete, i.e., not configurable. **/
DontDelete = 1 << 2
};
/**
* Accessor[Getter|Setter] are used as callback functions when
* setting|getting a particular property. See Object and ObjectTemplate's
* method SetAccessor.
*/
using AccessorGetterCallback =
void (*)(Local<String> property, const PropertyCallbackInfo<Value>& info);
using AccessorNameGetterCallback =
void (*)(Local<Name> property, const PropertyCallbackInfo<Value>& info);
using AccessorSetterCallback = void (*)(Local<String> property,
Local<Value> value,
const PropertyCallbackInfo<void>& info);
using AccessorNameSetterCallback =
void (*)(Local<Name> property, Local<Value> value,
const PropertyCallbackInfo<void>& info);
/**
* Access control specifications.
*
* Some accessors should be accessible across contexts. These
* accessors have an explicit access control parameter which specifies
* the kind of cross-context access that should be allowed.
*
* TODO(dcarney): Remove PROHIBITS_OVERWRITING as it is now unused.
*/
enum AccessControl {
DEFAULT = 0,
ALL_CAN_READ = 1,
ALL_CAN_WRITE = 1 << 1,
PROHIBITS_OVERWRITING = 1 << 2
};
/**
* Property filter bits. They can be or'ed to build a composite filter.
*/
enum PropertyFilter {
ALL_PROPERTIES = 0,
ONLY_WRITABLE = 1,
ONLY_ENUMERABLE = 2,
ONLY_CONFIGURABLE = 4,
SKIP_STRINGS = 8,
SKIP_SYMBOLS = 16
};
/**
* Options for marking whether callbacks may trigger JS-observable side effects.
* Side-effect-free callbacks are allowlisted during debug evaluation with
* throwOnSideEffect. It applies when calling a Function, FunctionTemplate,
* or an Accessor callback. For Interceptors, please see
* PropertyHandlerFlags's kHasNoSideEffect.
* Callbacks that only cause side effects to the receiver are allowlisted if
* invoked on receiver objects that are created within the same debug-evaluate
* call, as these objects are temporary and the side effect does not escape.
*/
enum class SideEffectType {
kHasSideEffect,
kHasNoSideEffect,
kHasSideEffectToReceiver
};
/**
* Keys/Properties filter enums:
*
* KeyCollectionMode limits the range of collected properties. kOwnOnly limits
* the collected properties to the given Object only. kIncludesPrototypes will
* include all keys of the objects's prototype chain as well.
*/
enum class KeyCollectionMode { kOwnOnly, kIncludePrototypes };
/**
* kIncludesIndices allows for integer indices to be collected, while
* kSkipIndices will exclude integer indices from being collected.
*/
enum class IndexFilter { kIncludeIndices, kSkipIndices };
/**
* kConvertToString will convert integer indices to strings.
* kKeepNumbers will return numbers for integer indices.
*/
enum class KeyConversionMode { kConvertToString, kKeepNumbers, kNoNumbers };
/**
* Integrity level for objects.
*/
enum class IntegrityLevel { kFrozen, kSealed };
/**
* A JavaScript object (ECMA-262, 4.3.3)
*/
class V8_EXPORT Object : public Value {
public:
/**
   * Set only returns Just(true) or an empty Maybe, so if the call should
   * never fail, use result.Check().
*/
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context,
Local<Value> key, Local<Value> value);
V8_WARN_UNUSED_RESULT Maybe<bool> Set(Local<Context> context, uint32_t index,
Local<Value> value);
/**
* Implements CreateDataProperty(O, P, V), see
* https://tc39.es/ecma262/#sec-createdataproperty.
*
* Defines a configurable, writable, enumerable property with the given value
* on the object unless the property already exists and is not configurable
* or the object is not extensible.
*
* Returns true on success.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> CreateDataProperty(Local<Context> context,
Local<Name> key,
Local<Value> value);
V8_WARN_UNUSED_RESULT Maybe<bool> CreateDataProperty(Local<Context> context,
uint32_t index,
Local<Value> value);
/**
* Implements [[DefineOwnProperty]] for data property case, see
* https://tc39.es/ecma262/#table-essential-internal-methods.
*
* In general, CreateDataProperty will be faster, however, does not allow
* for specifying attributes.
*
* Returns true on success.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> DefineOwnProperty(
Local<Context> context, Local<Name> key, Local<Value> value,
PropertyAttribute attributes = None);
/**
* Implements Object.defineProperty(O, P, Attributes), see
* https://tc39.es/ecma262/#sec-object.defineproperty.
*
* The defineProperty function is used to add an own property or
* update the attributes of an existing own property of an object.
*
* Both data and accessor descriptors can be used.
*
* In general, CreateDataProperty is faster, however, does not allow
* for specifying attributes or an accessor descriptor.
*
* The PropertyDescriptor can change when redefining a property.
*
* Returns true on success.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> DefineProperty(
Local<Context> context, Local<Name> key, PropertyDescriptor& descriptor);
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Get(Local<Context> context,
uint32_t index);
/**
* Gets the property attributes of a property which can be None or
* any combination of ReadOnly, DontEnum and DontDelete. Returns
* None when the property doesn't exist.
*/
V8_WARN_UNUSED_RESULT Maybe<PropertyAttribute> GetPropertyAttributes(
Local<Context> context, Local<Value> key);
/**
* Implements Object.getOwnPropertyDescriptor(O, P), see
* https://tc39.es/ecma262/#sec-object.getownpropertydescriptor.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> GetOwnPropertyDescriptor(
Local<Context> context, Local<Name> key);
/**
* Object::Has() calls the abstract operation HasProperty(O, P), see
* https://tc39.es/ecma262/#sec-hasproperty. Has() returns
* true, if the object has the property, either own or on the prototype chain.
* Interceptors, i.e., PropertyQueryCallbacks, are called if present.
*
* Has() has the same side effects as JavaScript's `variable in object`.
* For example, calling Has() on a revoked proxy will throw an exception.
*
* \note Has() converts the key to a name, which possibly calls back into
* JavaScript.
*
* See also v8::Object::HasOwnProperty() and
* v8::Object::HasRealNamedProperty().
*/
V8_WARN_UNUSED_RESULT Maybe<bool> Has(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT Maybe<bool> Delete(Local<Context> context,
Local<Value> key);
V8_WARN_UNUSED_RESULT Maybe<bool> Has(Local<Context> context, uint32_t index);
V8_WARN_UNUSED_RESULT Maybe<bool> Delete(Local<Context> context,
uint32_t index);
/**
* Note: SideEffectType affects the getter only, not the setter.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> SetAccessor(
Local<Context> context, Local<Name> name,
AccessorNameGetterCallback getter,
AccessorNameSetterCallback setter = nullptr,
MaybeLocal<Value> data = MaybeLocal<Value>(),
AccessControl settings = DEFAULT, PropertyAttribute attribute = None,
SideEffectType getter_side_effect_type = SideEffectType::kHasSideEffect,
SideEffectType setter_side_effect_type = SideEffectType::kHasSideEffect);
void SetAccessorProperty(Local<Name> name, Local<Function> getter,
Local<Function> setter = Local<Function>(),
PropertyAttribute attributes = None,
AccessControl settings = DEFAULT);
/**
* Sets a native data property like Template::SetNativeDataProperty, but
* this method sets on this object directly.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> SetNativeDataProperty(
Local<Context> context, Local<Name> name,
AccessorNameGetterCallback getter,
AccessorNameSetterCallback setter = nullptr,
Local<Value> data = Local<Value>(), PropertyAttribute attributes = None,
SideEffectType getter_side_effect_type = SideEffectType::kHasSideEffect,
SideEffectType setter_side_effect_type = SideEffectType::kHasSideEffect);
/**
* Attempts to create a property with the given name which behaves like a data
* property, except that the provided getter is invoked (and provided with the
* data value) to supply its value the first time it is read. After the
* property is accessed once, it is replaced with an ordinary data property.
*
* Analogous to Template::SetLazyDataProperty.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> SetLazyDataProperty(
Local<Context> context, Local<Name> name,
AccessorNameGetterCallback getter, Local<Value> data = Local<Value>(),
PropertyAttribute attributes = None,
SideEffectType getter_side_effect_type = SideEffectType::kHasSideEffect,
SideEffectType setter_side_effect_type = SideEffectType::kHasSideEffect);
/**
* Functionality for private properties.
* This is an experimental feature, use at your own risk.
* Note: Private properties are not inherited. Do not rely on this, since it
* may change.
*/
Maybe<bool> HasPrivate(Local<Context> context, Local<Private> key);
Maybe<bool> SetPrivate(Local<Context> context, Local<Private> key,
Local<Value> value);
Maybe<bool> DeletePrivate(Local<Context> context, Local<Private> key);
MaybeLocal<Value> GetPrivate(Local<Context> context, Local<Private> key);
/**
* Returns an array containing the names of the enumerable properties
* of this object, including properties from prototype objects. The
* array returned by this method contains the same values as would
* be enumerated by a for-in statement over this object.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Array> GetPropertyNames(
Local<Context> context);
V8_WARN_UNUSED_RESULT MaybeLocal<Array> GetPropertyNames(
Local<Context> context, KeyCollectionMode mode,
PropertyFilter property_filter, IndexFilter index_filter,
KeyConversionMode key_conversion = KeyConversionMode::kKeepNumbers);
/**
* This function has the same functionality as GetPropertyNames but
* the returned array doesn't contain the names of properties from
* prototype objects.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Array> GetOwnPropertyNames(
Local<Context> context);
/**
* Returns an array containing the names of the filtered properties
* of this object, including properties from prototype objects. The
* array returned by this method contains the same values as would
* be enumerated by a for-in statement over this object.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Array> GetOwnPropertyNames(
Local<Context> context, PropertyFilter filter,
KeyConversionMode key_conversion = KeyConversionMode::kKeepNumbers);
/**
* Get the prototype object. This does not skip objects marked to
* be skipped by __proto__ and it does not consult the security
* handler.
*/
Local<Value> GetPrototype();
/**
* Set the prototype object. This does not skip objects marked to
* be skipped by __proto__ and it does not consult the security
* handler.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> SetPrototype(Local<Context> context,
Local<Value> prototype);
/**
* Finds an instance of the given function template in the prototype
* chain.
*/
Local<Object> FindInstanceInPrototypeChain(Local<FunctionTemplate> tmpl);
/**
* Call builtin Object.prototype.toString on this object.
* This is different from Value::ToString() that may call
* user-defined toString function. This one does not.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<String> ObjectProtoToString(
Local<Context> context);
/**
* Returns the name of the function invoked as a constructor for this object.
*/
Local<String> GetConstructorName();
/**
* Sets the integrity level of the object.
*/
Maybe<bool> SetIntegrityLevel(Local<Context> context, IntegrityLevel level);
/** Gets the number of internal fields for this Object. */
int InternalFieldCount() const;
/** Same as above, but works for PersistentBase. */
V8_INLINE static int InternalFieldCount(
const PersistentBase<Object>& object) {
return object.template value<Object>()->InternalFieldCount();
}
/** Same as above, but works for BasicTracedReference. */
V8_INLINE static int InternalFieldCount(
const BasicTracedReference<Object>& object) {
return object.template value<Object>()->InternalFieldCount();
}
/** Gets the value from an internal field. */
V8_INLINE Local<Value> GetInternalField(int index);
/** Sets the value in an internal field. */
void SetInternalField(int index, Local<Value> value);
/**
* Gets a 2-byte-aligned native pointer from an internal field. This field
* must have been set by SetAlignedPointerInInternalField, everything else
* leads to undefined behavior.
*/
V8_INLINE void* GetAlignedPointerFromInternalField(int index);
/** Same as above, but works for PersistentBase. */
V8_INLINE static void* GetAlignedPointerFromInternalField(
const PersistentBase<Object>& object, int index) {
return object.template value<Object>()->GetAlignedPointerFromInternalField(
index);
}
/** Same as above, but works for TracedReference. */
V8_INLINE static void* GetAlignedPointerFromInternalField(
const BasicTracedReference<Object>& object, int index) {
return object.template value<Object>()->GetAlignedPointerFromInternalField(
index);
}
/**
* Sets a 2-byte-aligned native pointer in an internal field. To retrieve such
* a field, GetAlignedPointerFromInternalField must be used, everything else
* leads to undefined behavior.
*/
void SetAlignedPointerInInternalField(int index, void* value);
void SetAlignedPointerInInternalFields(int argc, int indices[],
void* values[]);
/**
* HasOwnProperty() is like JavaScript's Object.prototype.hasOwnProperty().
*
* See also v8::Object::Has() and v8::Object::HasRealNamedProperty().
*/
V8_WARN_UNUSED_RESULT Maybe<bool> HasOwnProperty(Local<Context> context,
Local<Name> key);
V8_WARN_UNUSED_RESULT Maybe<bool> HasOwnProperty(Local<Context> context,
uint32_t index);
/**
* Use HasRealNamedProperty() if you want to check if an object has an own
* property without causing side effects, i.e., without calling interceptors.
*
* This function is similar to v8::Object::HasOwnProperty(), but it does not
* call interceptors.
*
* \note Consider using non-masking interceptors, i.e., the interceptors are
* not called if the receiver has the real named property. See
* `v8::PropertyHandlerFlags::kNonMasking`.
*
* See also v8::Object::Has().
*/
V8_WARN_UNUSED_RESULT Maybe<bool> HasRealNamedProperty(Local<Context> context,
Local<Name> key);
V8_WARN_UNUSED_RESULT Maybe<bool> HasRealIndexedProperty(
Local<Context> context, uint32_t index);
V8_WARN_UNUSED_RESULT Maybe<bool> HasRealNamedCallbackProperty(
Local<Context> context, Local<Name> key);
/**
* If result.IsEmpty() no real property was located in the prototype chain.
* This means interceptors in the prototype chain are not called.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> GetRealNamedPropertyInPrototypeChain(
Local<Context> context, Local<Name> key);
/**
* Gets the property attributes of a real property in the prototype chain,
* which can be None or any combination of ReadOnly, DontEnum and DontDelete.
* Interceptors in the prototype chain are not called.
*/
V8_WARN_UNUSED_RESULT Maybe<PropertyAttribute>
GetRealNamedPropertyAttributesInPrototypeChain(Local<Context> context,
Local<Name> key);
/**
* If result.IsEmpty() no real property was located on the object or
* in the prototype chain.
* This means interceptors in the prototype chain are not called.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> GetRealNamedProperty(
Local<Context> context, Local<Name> key);
/**
* Gets the property attributes of a real property which can be
* None or any combination of ReadOnly, DontEnum and DontDelete.
* Interceptors in the prototype chain are not called.
*/
V8_WARN_UNUSED_RESULT Maybe<PropertyAttribute> GetRealNamedPropertyAttributes(
Local<Context> context, Local<Name> key);
/** Tests for a named lookup interceptor.*/
bool HasNamedLookupInterceptor() const;
/** Tests for an index lookup interceptor.*/
bool HasIndexedLookupInterceptor() const;
/**
* Returns the identity hash for this object. The current implementation
* uses a hidden property on the object to store the identity hash.
*
* The return value will never be 0. Also, it is not guaranteed to be
* unique.
*/
int GetIdentityHash();
/**
* Clone this object with a fast but shallow copy. Values will point
* to the same values as the original object.
*/
// TODO(dcarney): take an isolate and optionally bail out?
Local<Object> Clone();
/**
* Returns the context in which the object was created.
*/
MaybeLocal<Context> GetCreationContext();
/**
* Shortcut for GetCreationContext().ToLocalChecked().
**/
Local<Context> GetCreationContextChecked();
/** Same as above, but works for Persistents */
V8_INLINE static MaybeLocal<Context> GetCreationContext(
const PersistentBase<Object>& object) {
return object.template value<Object>()->GetCreationContext();
}
/**
* Gets the context in which the object was created (see GetCreationContext())
* and if it's available reads respective embedder field value.
* If the context can't be obtained nullptr is returned.
* Basically it's a shortcut for
* obj->GetCreationContext().GetAlignedPointerFromEmbedderData(index)
* which doesn't create a handle for Context object on the way and doesn't
* try to expand the embedder data attached to the context.
* In case the Local<Context> is already available because of other reasons,
* it's fine to keep using Context::GetAlignedPointerFromEmbedderData().
*/
void* GetAlignedPointerFromEmbedderDataInCreationContext(int index);
/**
* Checks whether a callback is set by the
* ObjectTemplate::SetCallAsFunctionHandler method.
* When an Object is callable this method returns true.
*/
bool IsCallable() const;
/**
* True if this object is a constructor.
*/
bool IsConstructor() const;
/**
* True if this object can carry information relevant to the embedder in its
* embedder fields, false otherwise. This is generally true for objects
* constructed through function templates but also holds for other types where
* V8 automatically adds internal fields at compile time, such as e.g.
* v8::ArrayBuffer.
*/
bool IsApiWrapper() const;
/**
* True if this object was created from an object template which was marked
* as undetectable. See v8::ObjectTemplate::MarkAsUndetectable for more
* information.
*/
bool IsUndetectable() const;
/**
* Call an Object as a function if a callback is set by the
* ObjectTemplate::SetCallAsFunctionHandler method.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> CallAsFunction(Local<Context> context,
Local<Value> recv,
int argc,
Local<Value> argv[]);
/**
* Call an Object as a constructor if a callback is set by the
* ObjectTemplate::SetCallAsFunctionHandler method.
* Note: This method behaves like the Function::NewInstance method.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> CallAsConstructor(
Local<Context> context, int argc, Local<Value> argv[]);
/**
* Return the isolate to which the Object belongs to.
*/
Isolate* GetIsolate();
V8_INLINE static Isolate* GetIsolate(const TracedReference<Object>& handle) {
return handle.template value<Object>()->GetIsolate();
}
/**
* If this object is a Set, Map, WeakSet or WeakMap, this returns a
* representation of the elements of this object as an array.
* If this object is a SetIterator or MapIterator, this returns all
* elements of the underlying collection, starting at the iterator's current
* position.
* For other types, this will return an empty MaybeLocal<Array> (without
* scheduling an exception).
*/
MaybeLocal<Array> PreviewEntries(bool* is_key_value);
static Local<Object> New(Isolate* isolate);
/**
* Creates a JavaScript object with the given properties, and
   * the given prototype_or_null (which can be any JavaScript
* value, and if it's null, the newly created object won't have
* a prototype at all). This is similar to Object.create().
* All properties will be created as enumerable, configurable
* and writable properties.
*/
static Local<Object> New(Isolate* isolate, Local<Value> prototype_or_null,
Local<Name>* names, Local<Value>* values,
size_t length);
V8_INLINE static Object* Cast(Value* obj);
/**
* Support for TC39 "dynamic code brand checks" proposal.
*
   * This API allows querying whether an object was constructed from a
* "code like" ObjectTemplate.
*
* See also: v8::ObjectTemplate::SetCodeLike
*/
bool IsCodeLike(Isolate* isolate) const;
private:
Object();
static void CheckCast(Value* obj);
Local<Value> SlowGetInternalField(int index);
void* SlowGetAlignedPointerFromInternalField(int index);
};
// --- Implementation ---
Local<Value> Object::GetInternalField(int index) {
#ifndef V8_ENABLE_CHECKS
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
// Fast path: If the object is a plain JSObject, which is the common case, we
// know where to find the internal fields and can return the value directly.
int instance_type = I::GetInstanceType(obj);
if (I::CanHaveInternalField(instance_type)) {
int offset = I::kJSObjectHeaderSize + (I::kEmbedderDataSlotSize * index);
A value = I::ReadRawField<A>(obj, offset);
#ifdef V8_COMPRESS_POINTERS
// We read the full pointer value and then decompress it in order to avoid
    // dealing with potential endianness issues.
value = I::DecompressTaggedField(obj, static_cast<uint32_t>(value));
#endif
auto isolate = reinterpret_cast<v8::Isolate*>(
internal::IsolateFromNeverReadOnlySpaceObject(obj));
return Local<Value>::New(isolate, value);
}
#endif
return SlowGetInternalField(index);
}
void* Object::GetAlignedPointerFromInternalField(int index) {
#if !defined(V8_ENABLE_CHECKS)
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
// Fast path: If the object is a plain JSObject, which is the common case, we
// know where to find the internal fields and can return the value directly.
auto instance_type = I::GetInstanceType(obj);
if (I::CanHaveInternalField(instance_type)) {
int offset = I::kJSObjectHeaderSize + (I::kEmbedderDataSlotSize * index) +
I::kEmbedderDataSlotExternalPointerOffset;
Isolate* isolate = I::GetIsolateForSandbox(obj);
A value =
I::ReadExternalPointerField<internal::kEmbedderDataSlotPayloadTag>(
isolate, obj, offset);
return reinterpret_cast<void*>(value);
}
#endif
return SlowGetAlignedPointerFromInternalField(index);
}
Private* Private::Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return reinterpret_cast<Private*>(data);
}
Object* Object::Cast(v8::Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Object*>(value);
}
} // namespace v8
#endif // INCLUDE_V8_OBJECT_H_
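The five-argument Object::New declared above mirrors JavaScript's Object.create(). A minimal sketch of calling it, assuming the embedder already has an `isolate` with a HandleScope and Context entered (those names are assumptions, not shown here):

```cpp
// Sketch only: `isolate` is assumed to be set up by the embedder.
v8::Local<v8::Name> names[] = {
    v8::String::NewFromUtf8Literal(isolate, "answer")};
v8::Local<v8::Value> values[] = {v8::Number::New(isolate, 42.0)};
// Passing Null as prototype_or_null yields an object with no prototype,
// like Object.create(null, ...) in JavaScript.
v8::Local<v8::Object> obj =
    v8::Object::New(isolate, v8::Null(isolate), names, values, 1);
```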


@@ -0,0 +1,573 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_PERSISTENT_HANDLE_H_
#define INCLUDE_V8_PERSISTENT_HANDLE_H_
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-weak-callback-info.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Isolate;
template <class K, class V, class T>
class PersistentValueMapBase;
template <class V, class T>
class PersistentValueVector;
template <class T>
class Global;
template <class T>
class PersistentBase;
template <class K, class V, class T>
class PersistentValueMap;
class Value;
namespace api_internal {
V8_EXPORT internal::Address* Eternalize(v8::Isolate* isolate, Value* handle);
V8_EXPORT internal::Address* CopyGlobalReference(internal::Address* from);
V8_EXPORT void DisposeGlobal(internal::Address* global_handle);
V8_EXPORT void MakeWeak(internal::Address** location_addr);
V8_EXPORT void* ClearWeak(internal::Address* location);
V8_EXPORT void AnnotateStrongRetainer(internal::Address* location,
const char* label);
V8_EXPORT internal::Address* GlobalizeReference(internal::Isolate* isolate,
internal::Address value);
V8_EXPORT void MoveGlobalReference(internal::Address** from,
internal::Address** to);
} // namespace api_internal
/**
* Eternal handles are set-once handles that live for the lifetime of the
* isolate.
*/
template <class T>
class Eternal : public IndirectHandleBase {
public:
V8_INLINE Eternal() = default;
template <class S>
V8_INLINE Eternal(Isolate* isolate, Local<S> handle) {
Set(isolate, handle);
}
// Can only be safely called if already set.
V8_INLINE Local<T> Get(Isolate* isolate) const {
// The eternal handle will never go away, so as with the roots, we don't
// even need to open a handle.
return Local<T>::FromSlot(slot());
}
template <class S>
void Set(Isolate* isolate, Local<S> handle) {
static_assert(std::is_base_of<T, S>::value, "type check");
slot() =
api_internal::Eternalize(isolate, *handle.template UnsafeAs<Value>());
}
};
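As the class comment above notes, Eternal handles are set once and live for the isolate's lifetime. A typical (hypothetical) use is caching a property name; `isolate` is assumed to be a live `v8::Isolate*`:

```cpp
// Sketch only: a process-lifetime cache slot for a frequently used string.
v8::Eternal<v8::String> cached_key;
cached_key.Set(isolate,
               v8::String::NewFromUtf8Literal(isolate, "cachedKey"));
// Later, from any scope on the same isolate; the eternal slot never
// goes away, so Get() is always safe once Set() has been called.
v8::Local<v8::String> key = cached_key.Get(isolate);
```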
namespace api_internal {
V8_EXPORT void MakeWeak(internal::Address* location, void* data,
WeakCallbackInfo<void>::Callback weak_callback,
WeakCallbackType type);
} // namespace api_internal
/**
* An object reference that is independent of any handle scope. Where
* a Local handle only lives as long as the HandleScope in which it was
* allocated, a PersistentBase handle remains valid until it is explicitly
* disposed using Reset().
*
* A persistent handle contains a reference to a storage cell within
* the V8 engine which holds an object value and which is updated by
* the garbage collector whenever the object is moved. A new storage
* cell can be created using the constructor or PersistentBase::Reset and
* existing handles can be disposed using PersistentBase::Reset.
*
*/
template <class T>
class PersistentBase : public IndirectHandleBase {
public:
/**
* If non-empty, destroy the underlying storage cell.
* IsEmpty() will return true after this call.
*/
V8_INLINE void Reset();
/**
* If non-empty, destroy the underlying storage cell
* and create a new one with the contents of other if other is non-empty.
*/
template <class S>
V8_INLINE void Reset(Isolate* isolate, const Local<S>& other);
/**
* If non-empty, destroy the underlying storage cell
* and create a new one with the contents of other if other is non-empty.
*/
template <class S>
V8_INLINE void Reset(Isolate* isolate, const PersistentBase<S>& other);
V8_INLINE Local<T> Get(Isolate* isolate) const {
return Local<T>::New(isolate, *this);
}
template <class S>
V8_INLINE bool operator==(const PersistentBase<S>& that) const {
return internal::HandleHelper::EqualHandles(*this, that);
}
template <class S>
V8_INLINE bool operator==(const Local<S>& that) const {
return internal::HandleHelper::EqualHandles(*this, that);
}
template <class S>
V8_INLINE bool operator!=(const PersistentBase<S>& that) const {
return !operator==(that);
}
template <class S>
V8_INLINE bool operator!=(const Local<S>& that) const {
return !operator==(that);
}
/**
* Install a finalization callback on this object.
* NOTE: There is no guarantee as to *when* or even *if* the callback is
* invoked. The invocation is performed solely on a best effort basis.
* As always, GC-based finalization should *not* be relied upon for any
* critical form of resource management!
*
* The callback is supposed to reset the handle. No further V8 API may be
* called in this callback. In case additional work involving V8 needs to be
* done, a second callback can be scheduled using
* WeakCallbackInfo<void>::SetSecondPassCallback.
*/
template <typename P>
V8_INLINE void SetWeak(P* parameter,
typename WeakCallbackInfo<P>::Callback callback,
WeakCallbackType type);
/**
* Turns this handle into a weak phantom handle without finalization callback.
* The handle will be reset automatically when the garbage collector detects
* that the object is no longer reachable.
*/
V8_INLINE void SetWeak();
template <typename P>
V8_INLINE P* ClearWeak();
// TODO(dcarney): remove this.
V8_INLINE void ClearWeak() { ClearWeak<void>(); }
/**
* Annotates the strong handle with the given label, which is then used by the
* heap snapshot generator as a name of the edge from the root to the handle.
* The function does not take ownership of the label and assumes that the
* label is valid as long as the handle is valid.
*/
V8_INLINE void AnnotateStrongRetainer(const char* label);
/** Returns true if the handle's reference is weak. */
V8_INLINE bool IsWeak() const;
/**
* Assigns a wrapper class ID to the handle.
*/
V8_INLINE void SetWrapperClassId(uint16_t class_id);
/**
* Returns the class ID previously assigned to this handle or 0 if no class ID
* was previously assigned.
*/
V8_INLINE uint16_t WrapperClassId() const;
PersistentBase(const PersistentBase& other) = delete;
void operator=(const PersistentBase&) = delete;
private:
friend class Isolate;
friend class Utils;
template <class F>
friend class Local;
template <class F1, class F2>
friend class Persistent;
template <class F>
friend class Global;
template <class F>
friend class PersistentBase;
template <class F>
friend class ReturnValue;
template <class F1, class F2, class F3>
friend class PersistentValueMapBase;
template <class F1, class F2>
friend class PersistentValueVector;
friend class Object;
friend class internal::ValueHelper;
V8_INLINE PersistentBase() = default;
V8_INLINE explicit PersistentBase(internal::Address* location)
: IndirectHandleBase(location) {}
V8_INLINE static internal::Address* New(Isolate* isolate, T* that);
};
/**
* Default traits for Persistent. This class does not allow
* use of the copy constructor or assignment operator.
* At present kResetInDestructor is not set, but that will change in a future
* version.
*/
template <class T>
class NonCopyablePersistentTraits {
public:
using NonCopyablePersistent = Persistent<T, NonCopyablePersistentTraits<T>>;
static const bool kResetInDestructor = false;
template <class S, class M>
V8_INLINE static void Copy(const Persistent<S, M>& source,
NonCopyablePersistent* dest) {
static_assert(sizeof(S) < 0,
"NonCopyablePersistentTraits::Copy is not instantiable");
}
};
/**
* Helper class traits to allow copying and assignment of Persistent.
* This will clone the contents of storage cell, but not any of the flags, etc.
*/
template <class T>
struct V8_DEPRECATED("Use v8::Global instead") CopyablePersistentTraits {
using CopyablePersistent = Persistent<T, CopyablePersistentTraits<T>>;
static const bool kResetInDestructor = true;
template <class S, class M>
static V8_INLINE void Copy(const Persistent<S, M>& source,
CopyablePersistent* dest) {
// do nothing, just allow copy
}
};
/**
* A PersistentBase which allows copy and assignment.
*
* Copy, assignment and destructor behavior is controlled by the traits
* class M.
*
* Note: Persistent class hierarchy is subject to future changes.
*/
template <class T, class M>
class Persistent : public PersistentBase<T> {
public:
/**
* A Persistent with no storage cell.
*/
V8_INLINE Persistent() = default;
/**
* Construct a Persistent from a Local.
* When the Local is non-empty, a new storage cell is created
* pointing to the same object, and no flags are set.
*/
template <class S>
V8_INLINE Persistent(Isolate* isolate, Local<S> that)
: PersistentBase<T>(
PersistentBase<T>::New(isolate, that.template value<S>())) {
static_assert(std::is_base_of<T, S>::value, "type check");
}
/**
* Construct a Persistent from a Persistent.
* When the Persistent is non-empty, a new storage cell is created
* pointing to the same object, and no flags are set.
*/
template <class S, class M2>
V8_INLINE Persistent(Isolate* isolate, const Persistent<S, M2>& that)
: PersistentBase<T>(
PersistentBase<T>::New(isolate, that.template value<S>())) {
static_assert(std::is_base_of<T, S>::value, "type check");
}
/**
* The copy constructors and assignment operator create a Persistent
* exactly as the Persistent constructor, but the Copy function from the
* traits class is called, allowing the setting of flags based on the
* copied Persistent.
*/
V8_INLINE Persistent(const Persistent& that) : PersistentBase<T>() {
Copy(that);
}
template <class S, class M2>
V8_INLINE Persistent(const Persistent<S, M2>& that) : PersistentBase<T>() {
Copy(that);
}
V8_INLINE Persistent& operator=(const Persistent& that) {
Copy(that);
return *this;
}
template <class S, class M2>
V8_INLINE Persistent& operator=(const Persistent<S, M2>& that) {
Copy(that);
return *this;
}
/**
* The destructor will dispose the Persistent based on the
* kResetInDestructor flags in the traits class. Since not calling dispose
* can result in a memory leak, it is recommended to always set this flag.
*/
V8_INLINE ~Persistent() {
if (M::kResetInDestructor) this->Reset();
}
// TODO(dcarney): this is pretty useless, fix or remove
template <class S, class M2>
V8_INLINE static Persistent<T, M>& Cast(const Persistent<S, M2>& that) {
#ifdef V8_ENABLE_CHECKS
// If we're going to perform the type check then we have to check
// that the handle isn't empty before doing the checked cast.
if (!that.IsEmpty()) T::Cast(that.template value<S>());
#endif
return reinterpret_cast<Persistent<T, M>&>(
const_cast<Persistent<S, M2>&>(that));
}
// TODO(dcarney): this is pretty useless, fix or remove
template <class S, class M2>
V8_INLINE Persistent<S, M2>& As() const {
return Persistent<S, M2>::Cast(*this);
}
private:
friend class Isolate;
friend class Utils;
template <class F>
friend class Local;
template <class F1, class F2>
friend class Persistent;
template <class F>
friend class ReturnValue;
template <class S, class M2>
V8_INLINE void Copy(const Persistent<S, M2>& that);
};
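The SetWeak() contract documented above (the callback must reset the handle and may call no other V8 API) is easiest to see in the common native-wrapper pattern. The names here (`Wrapper`, `OnWeak`, `wrapper`) are illustrative, not part of the API:

```cpp
// Sketch only: a native wrapper that owns a weak handle to its JS object.
struct Wrapper {
  v8::Global<v8::Object> handle;
  // ... embedder state ...
};

void OnWeak(const v8::WeakCallbackInfo<Wrapper>& info) {
  Wrapper* w = info.GetParameter();
  w->handle.Reset();  // the callback is expected to reset the handle
  delete w;           // no further V8 API calls are allowed here
}

// At wrap time, with wrapper->handle already pointing at the object:
wrapper->handle.SetWeak(wrapper, OnWeak, v8::WeakCallbackType::kParameter);
```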
/**
* A PersistentBase which has move semantics.
*
* Note: Persistent class hierarchy is subject to future changes.
*/
template <class T>
class Global : public PersistentBase<T> {
public:
/**
* A Global with no storage cell.
*/
V8_INLINE Global() = default;
/**
* Construct a Global from a Local.
* When the Local is non-empty, a new storage cell is created
* pointing to the same object, and no flags are set.
*/
template <class S>
V8_INLINE Global(Isolate* isolate, Local<S> that)
: PersistentBase<T>(
PersistentBase<T>::New(isolate, that.template value<S>())) {
static_assert(std::is_base_of<T, S>::value, "type check");
}
/**
* Construct a Global from a PersistentBase.
* When the Persistent is non-empty, a new storage cell is created
* pointing to the same object, and no flags are set.
*/
template <class S>
V8_INLINE Global(Isolate* isolate, const PersistentBase<S>& that)
: PersistentBase<T>(
PersistentBase<T>::New(isolate, that.template value<S>())) {
static_assert(std::is_base_of<T, S>::value, "type check");
}
/**
* Move constructor.
*/
V8_INLINE Global(Global&& other);
V8_INLINE ~Global() { this->Reset(); }
/**
* Move via assignment.
*/
template <class S>
V8_INLINE Global& operator=(Global<S>&& rhs);
/**
* Pass allows returning uniques from functions, etc.
*/
Global Pass() { return static_cast<Global&&>(*this); }
/*
* For compatibility with Chromium's base::Bind (base::Passed).
*/
using MoveOnlyTypeForCPP03 = void;
Global(const Global&) = delete;
void operator=(const Global&) = delete;
private:
template <class F>
friend class ReturnValue;
};
// UniquePersistent is an alias for Global for historical reasons.
template <class T>
using UniquePersistent = Global<T>;
/**
* Interface for iterating through all the persistent handles in the heap.
*/
class V8_EXPORT PersistentHandleVisitor {
public:
virtual ~PersistentHandleVisitor() = default;
virtual void VisitPersistentHandle(Persistent<Value>* value,
uint16_t class_id) {}
};
template <class T>
internal::Address* PersistentBase<T>::New(Isolate* isolate, T* that) {
if (internal::ValueHelper::IsEmpty(that)) return nullptr;
return api_internal::GlobalizeReference(
reinterpret_cast<internal::Isolate*>(isolate),
internal::ValueHelper::ValueAsAddress(that));
}
template <class T, class M>
template <class S, class M2>
void Persistent<T, M>::Copy(const Persistent<S, M2>& that) {
static_assert(std::is_base_of<T, S>::value, "type check");
this->Reset();
if (that.IsEmpty()) return;
this->slot() = api_internal::CopyGlobalReference(that.slot());
M::Copy(that, this);
}
template <class T>
bool PersistentBase<T>::IsWeak() const {
using I = internal::Internals;
if (this->IsEmpty()) return false;
return I::GetNodeState(this->slot()) == I::kNodeStateIsWeakValue;
}
template <class T>
void PersistentBase<T>::Reset() {
if (this->IsEmpty()) return;
api_internal::DisposeGlobal(this->slot());
this->Clear();
}
/**
* If non-empty, destroy the underlying storage cell
* and create a new one with the contents of other if other is non-empty.
*/
template <class T>
template <class S>
void PersistentBase<T>::Reset(Isolate* isolate, const Local<S>& other) {
static_assert(std::is_base_of<T, S>::value, "type check");
Reset();
if (other.IsEmpty()) return;
this->slot() = New(isolate, *other);
}
/**
* If non-empty, destroy the underlying storage cell
* and create a new one with the contents of other if other is non-empty.
*/
template <class T>
template <class S>
void PersistentBase<T>::Reset(Isolate* isolate,
const PersistentBase<S>& other) {
static_assert(std::is_base_of<T, S>::value, "type check");
Reset();
if (other.IsEmpty()) return;
this->slot() = New(isolate, other.template value<S>());
}
template <class T>
template <typename P>
V8_INLINE void PersistentBase<T>::SetWeak(
P* parameter, typename WeakCallbackInfo<P>::Callback callback,
WeakCallbackType type) {
using Callback = WeakCallbackInfo<void>::Callback;
#if (__GNUC__ >= 8) && !defined(__clang__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-function-type"
#endif
api_internal::MakeWeak(this->slot(), parameter,
reinterpret_cast<Callback>(callback), type);
#if (__GNUC__ >= 8) && !defined(__clang__)
#pragma GCC diagnostic pop
#endif
}
template <class T>
void PersistentBase<T>::SetWeak() {
api_internal::MakeWeak(&this->slot());
}
template <class T>
template <typename P>
P* PersistentBase<T>::ClearWeak() {
return reinterpret_cast<P*>(api_internal::ClearWeak(this->slot()));
}
template <class T>
void PersistentBase<T>::AnnotateStrongRetainer(const char* label) {
api_internal::AnnotateStrongRetainer(this->slot(), label);
}
template <class T>
void PersistentBase<T>::SetWrapperClassId(uint16_t class_id) {
using I = internal::Internals;
if (this->IsEmpty()) return;
uint8_t* addr = reinterpret_cast<uint8_t*>(slot()) + I::kNodeClassIdOffset;
*reinterpret_cast<uint16_t*>(addr) = class_id;
}
template <class T>
uint16_t PersistentBase<T>::WrapperClassId() const {
using I = internal::Internals;
if (this->IsEmpty()) return 0;
uint8_t* addr = reinterpret_cast<uint8_t*>(slot()) + I::kNodeClassIdOffset;
return *reinterpret_cast<uint16_t*>(addr);
}
template <class T>
Global<T>::Global(Global&& other) : PersistentBase<T>(other.slot()) {
if (!other.IsEmpty()) {
api_internal::MoveGlobalReference(&other.slot(), &this->slot());
other.Clear();
}
}
template <class T>
template <class S>
Global<T>& Global<T>::operator=(Global<S>&& rhs) {
static_assert(std::is_base_of<T, S>::value, "type check");
if (this != &rhs) {
this->Reset();
if (!rhs.IsEmpty()) {
this->slot() = rhs.slot();
api_internal::MoveGlobalReference(&rhs.slot(), &this->slot());
rhs.Clear();
}
}
return *this;
}
} // namespace v8
#endif // INCLUDE_V8_PERSISTENT_HANDLE_H_
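Because Global is move-only, handles that must outlive a HandleScope are typically parked in standard containers via std::move. A sketch, assuming `isolate` and a `Local<Object>` named `obj` are already in scope:

```cpp
// Sketch only: retaining objects across handle scopes with move semantics.
std::vector<v8::Global<v8::Object>> retained;
v8::Global<v8::Object> g(isolate, obj);  // creates a new storage cell
retained.push_back(std::move(g));        // moves the cell; `g` is now empty
// Copying would not compile: Global's copy constructor is deleted above.
```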


@@ -5,12 +5,15 @@
#ifndef V8_V8_PLATFORM_H_
#define V8_V8_PLATFORM_H_
#include <math.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h> // For abort.
#include <memory>
#include <string>
#include "v8-source-location.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
@@ -158,9 +161,10 @@ class TaskRunner {
class JobDelegate {
public:
/**
* Returns true if this thread should return from the worker task on the
* Returns true if this thread *must* return from the worker task on the
* current thread ASAP. Workers should periodically invoke ShouldYield (or
* YieldIfNeeded()) as often as is reasonable.
* After this method has returned true, ShouldYield must not be called again.
*/
virtual bool ShouldYield() = 0;
@@ -258,12 +262,48 @@ class JobTask {
* Controls the maximum number of threads calling Run() concurrently, given
* the number of threads currently assigned to this job and executing Run().
* Run() is only invoked if the number of threads previously running Run() was
* less than the value returned. Since GetMaxConcurrency() is a leaf function,
* it must not call back any JobHandle methods.
* less than the value returned. In general, this should return the latest
* number of incomplete work items (smallest unit of work) left to process,
* including items that are currently in progress. |worker_count| is the
* number of threads currently assigned to this job, which some callers may
* use when determining their return value. Since GetMaxConcurrency() is a leaf
* function, it must not call back any JobHandle methods.
*/
virtual size_t GetMaxConcurrency(size_t worker_count) const = 0;
};
/**
* A "blocking call" refers to any call that causes the calling thread to wait
* off-CPU. It includes but is not limited to calls that wait on synchronous
* file I/O operations: read or write a file from disk, interact with a pipe or
* a socket, rename or delete a file, enumerate files in a directory, etc.
* Acquiring a low contention lock is not considered a blocking call.
*/
/**
* BlockingType indicates the likelihood that a blocking call will actually
* block.
*/
enum class BlockingType {
// The call might block (e.g. file I/O that might hit in memory cache).
kMayBlock,
// The call will definitely block (e.g. cache already checked and now pinging
// server synchronously).
kWillBlock
};
/**
* This class is instantiated with CreateBlockingScope() in every scope where a
* blocking call is made and serves as a precise annotation of the scope that
* may/will block. May be implemented by an embedder to adjust the thread count.
* CPU usage should be minimal within that scope. ScopedBlockingCalls can be
* nested.
*/
class ScopedBlockingCall {
public:
virtual ~ScopedBlockingCall() = default;
};
/**
* The interface represents complex arguments to trace events.
*/
@@ -284,6 +324,8 @@ class ConvertableToTraceFormat {
* V8 Tracing controller.
*
* Can be implemented by an embedder to record trace events from V8.
*
* Will become obsolete in Perfetto SDK build (v8_use_perfetto = true).
*/
class TracingController {
public:
@@ -347,10 +389,16 @@ class TracingController {
virtual void OnTraceDisabled() = 0;
};
/** Adds tracing state change observer. */
/**
* Adds tracing state change observer.
* Does nothing in Perfetto SDK build (v8_use_perfetto = true).
*/
virtual void AddTraceStateObserver(TraceStateObserver*) {}
/** Removes tracing state change observer. */
/**
* Removes tracing state change observer.
* Does nothing in Perfetto SDK build (v8_use_perfetto = true).
*/
virtual void RemoveTraceStateObserver(TraceStateObserver*) {}
};
@@ -401,6 +449,8 @@ class PageAllocator {
// this is used to set the MAP_JIT flag on Apple Silicon.
// TODO(jkummerow): Remove this when Wasm has a platform-independent
// w^x implementation.
// TODO(saelo): Remove this once all JIT pages are allocated through the
// VirtualAddressSpace API.
kNoAccessWillJitLater
};
@@ -427,13 +477,36 @@ class PageAllocator {
virtual bool SetPermissions(void* address, size_t length,
Permission permissions) = 0;
/**
* Recommits discarded pages in the given range with given permissions.
* Discarded pages must be recommitted with their original permissions
* before they are used again.
*/
virtual bool RecommitPages(void* address, size_t length,
Permission permissions) {
// TODO(v8:12797): make it pure once it's implemented on Chromium side.
return false;
}
/**
* Frees memory in the given [address, address + size) range. address and size
* should be operating system page-aligned. The next write to this
* memory area brings the memory transparently back.
* memory area brings the memory transparently back. This should be treated as
* a hint to the OS that the pages are no longer needed. It does not guarantee
* that the pages will be discarded immediately or at all.
*/
virtual bool DiscardSystemPages(void* address, size_t size) { return true; }
/**
* Decommits any wired memory pages in the given range, allowing the OS to
* reclaim them, and marks the region as inaccessible (kNoAccess). The address
* range stays reserved and can be accessed again later by changing its
* permissions. However, in that case the memory content is guaranteed to be
* zero-initialized again. The memory must have been previously allocated by a
* call to AllocatePages. Returns true on success, false otherwise.
*/
virtual bool DecommitPages(void* address, size_t size) = 0;
/**
* INTERNAL ONLY: This interface has not been stabilised and may change
* without notice from one release to another without being deprecated first.
@@ -498,6 +571,421 @@ class PageAllocator {
virtual bool CanAllocateSharedPages() { return false; }
};
/**
* An allocator that uses per-thread permissions to protect the memory.
*
* The implementation is platform/hardware specific, e.g. using pkeys on x64.
*
* INTERNAL ONLY: This interface has not been stabilised and may change
* without notice from one release to another without being deprecated first.
*/
class ThreadIsolatedAllocator {
public:
virtual ~ThreadIsolatedAllocator() = default;
virtual void* Allocate(size_t size) = 0;
virtual void Free(void* object) = 0;
enum class Type {
kPkey,
};
virtual Type Type() const = 0;
/**
* Return the pkey used to implement the thread isolation if Type == kPkey.
*/
virtual int Pkey() const { return -1; }
};
// Opaque type representing a handle to a shared memory region.
using PlatformSharedMemoryHandle = intptr_t;
static constexpr PlatformSharedMemoryHandle kInvalidSharedMemoryHandle = -1;
// Conversion routines from the platform-dependent shared memory identifiers
// into the opaque PlatformSharedMemoryHandle type. These use the underlying
// types (e.g. unsigned int) instead of the typedef'd ones (e.g. mach_port_t)
// to avoid pulling in large OS header files into this header file. Instead,
// the users of these routines are expected to include the respective OS
// headers in addition to this one.
#if V8_OS_DARWIN
// Convert between a shared memory handle and a mach_port_t referencing a memory
// entry object.
inline PlatformSharedMemoryHandle SharedMemoryHandleFromMachMemoryEntry(
unsigned int port) {
return static_cast<PlatformSharedMemoryHandle>(port);
}
inline unsigned int MachMemoryEntryFromSharedMemoryHandle(
PlatformSharedMemoryHandle handle) {
return static_cast<unsigned int>(handle);
}
#elif V8_OS_FUCHSIA
// Convert between a shared memory handle and a zx_handle_t to a VMO.
inline PlatformSharedMemoryHandle SharedMemoryHandleFromVMO(uint32_t handle) {
return static_cast<PlatformSharedMemoryHandle>(handle);
}
inline uint32_t VMOFromSharedMemoryHandle(PlatformSharedMemoryHandle handle) {
return static_cast<uint32_t>(handle);
}
#elif V8_OS_WIN
// Convert between a shared memory handle and a Windows HANDLE to a file mapping
// object.
inline PlatformSharedMemoryHandle SharedMemoryHandleFromFileMapping(
void* handle) {
return reinterpret_cast<PlatformSharedMemoryHandle>(handle);
}
inline void* FileMappingFromSharedMemoryHandle(
PlatformSharedMemoryHandle handle) {
return reinterpret_cast<void*>(handle);
}
#else
// Convert between a shared memory handle and a file descriptor.
inline PlatformSharedMemoryHandle SharedMemoryHandleFromFileDescriptor(int fd) {
return static_cast<PlatformSharedMemoryHandle>(fd);
}
inline int FileDescriptorFromSharedMemoryHandle(
PlatformSharedMemoryHandle handle) {
return static_cast<int>(handle);
}
#endif
/**
* Possible permissions for memory pages.
*/
enum class PagePermissions {
kNoAccess,
kRead,
kReadWrite,
kReadWriteExecute,
kReadExecute,
};
/**
* Class to manage a virtual memory address space.
*
* This class represents a contiguous region of virtual address space in which
* sub-spaces and (private or shared) memory pages can be allocated, freed, and
* modified. This interface is meant to eventually replace the PageAllocator
* interface, and can be used as an alternative in the meantime.
*
* This API is not yet stable and may change without notice!
*/
class VirtualAddressSpace {
public:
using Address = uintptr_t;
VirtualAddressSpace(size_t page_size, size_t allocation_granularity,
Address base, size_t size,
PagePermissions max_page_permissions)
: page_size_(page_size),
allocation_granularity_(allocation_granularity),
base_(base),
size_(size),
max_page_permissions_(max_page_permissions) {}
virtual ~VirtualAddressSpace() = default;
/**
* The page size used inside this space. Guaranteed to be a power of two.
* Used as granularity for all page-related operations except for allocation,
* which uses the allocation_granularity(), see below.
*
* \returns the page size in bytes.
*/
size_t page_size() const { return page_size_; }
/**
* The granularity of page allocations and, by extension, of subspace
* allocations. This is guaranteed to be a power of two and a multiple of the
* page_size(). In practice, this is equal to the page size on most OSes, but
* on Windows it is usually 64KB, while the page size is 4KB.
*
* \returns the allocation granularity in bytes.
*/
size_t allocation_granularity() const { return allocation_granularity_; }
/**
* The base address of the address space managed by this instance.
*
* \returns the base address of this address space.
*/
Address base() const { return base_; }
/**
* The size of the address space managed by this instance.
*
* \returns the size of this address space in bytes.
*/
size_t size() const { return size_; }
/**
* The maximum page permissions that pages allocated inside this space can
* obtain.
*
* \returns the maximum page permissions.
*/
PagePermissions max_page_permissions() const { return max_page_permissions_; }
/**
* Sets the random seed so that GetRandomPageAddress() will generate
* repeatable sequences of random addresses.
*
* \param seed The seed for the PRNG.
*/
virtual void SetRandomSeed(int64_t seed) = 0;
/**
* Returns a random address inside this address space, suitable for page
* allocation hints.
*
* \returns a random address aligned to allocation_granularity().
*/
virtual Address RandomPageAddress() = 0;
/**
* Allocates private memory pages with the given alignment and permissions.
*
* \param hint If nonzero, the allocation is attempted to be placed at the
* given address first. If that fails, the allocation is attempted to be
* placed elsewhere, possibly nearby, but that is not guaranteed. Specifying
* zero for the hint always causes this function to choose a random address.
* The hint, if specified, must be aligned to the specified alignment.
*
* \param size The size of the allocation in bytes. Must be a multiple of the
* allocation_granularity().
*
* \param alignment The alignment of the allocation in bytes. Must be a
* multiple of the allocation_granularity() and should be a power of two.
*
* \param permissions The page permissions of the newly allocated pages.
*
* \returns the start address of the allocated pages on success, zero on
* failure.
*/
static constexpr Address kNoHint = 0;
virtual V8_WARN_UNUSED_RESULT Address
AllocatePages(Address hint, size_t size, size_t alignment,
PagePermissions permissions) = 0;
/**
* Frees previously allocated pages.
*
* This function will terminate the process on failure as this implies a bug
* in the client. As such, there is no return value.
*
* \param address The start address of the pages to free. This address must
* have been obtained through a call to AllocatePages.
*
* \param size The size in bytes of the region to free. This must match the
* size passed to AllocatePages when the pages were allocated.
*/
virtual void FreePages(Address address, size_t size) = 0;
/**
* Sets permissions of all allocated pages in the given range.
*
* This operation can fail due to OOM, in which case false is returned. If
* the operation fails for a reason other than OOM, this function will
* terminate the process as this implies a bug in the client.
*
* \param address The start address of the range. Must be aligned to
* page_size().
*
* \param size The size in bytes of the range. Must be a multiple
* of page_size().
*
* \param permissions The new permissions for the range.
*
* \returns true on success, false on OOM.
*/
virtual V8_WARN_UNUSED_RESULT bool SetPagePermissions(
Address address, size_t size, PagePermissions permissions) = 0;
/**
* Creates a guard region at the specified address.
*
* Guard regions are guaranteed to cause a fault when accessed and generally
* do not count towards any memory consumption limits. Further, allocating
* guard regions usually cannot fail in subspaces if the region does not
* overlap with another region, subspace, or page allocation.
*
* \param address The start address of the guard region. Must be aligned to
* the allocation_granularity().
*
* \param size The size of the guard region in bytes. Must be a multiple of
* the allocation_granularity().
*
* \returns true on success, false otherwise.
*/
virtual V8_WARN_UNUSED_RESULT bool AllocateGuardRegion(Address address,
size_t size) = 0;
/**
* Frees an existing guard region.
*
* This function will terminate the process on failure as this implies a bug
* in the client. As such, there is no return value.
*
* \param address The start address of the guard region to free. This address
* must have previously been used as address parameter in a successful
* invocation of AllocateGuardRegion.
*
* \param size The size in bytes of the guard region to free. This must match
* the size passed to AllocateGuardRegion when the region was created.
*/
virtual void FreeGuardRegion(Address address, size_t size) = 0;
/**
* Allocates shared memory pages with the given permissions.
*
* \param hint Placement hint. See AllocatePages.
*
* \param size The size of the allocation in bytes. Must be a multiple of the
* allocation_granularity().
*
* \param permissions The page permissions of the newly allocated pages.
*
* \param handle A platform-specific handle to a shared memory object. See
* the SharedMemoryHandleFromX routines above for ways to obtain these.
*
* \param offset The offset in the shared memory object at which the mapping
* should start. Must be a multiple of the allocation_granularity().
*
* \returns the start address of the allocated pages on success, zero on
* failure.
*/
virtual V8_WARN_UNUSED_RESULT Address
AllocateSharedPages(Address hint, size_t size, PagePermissions permissions,
PlatformSharedMemoryHandle handle, uint64_t offset) = 0;
/**
* Frees previously allocated shared pages.
*
* This function will terminate the process on failure as this implies a bug
* in the client. As such, there is no return value.
*
* \param address The start address of the pages to free. This address must
* have been obtained through a call to AllocateSharedPages.
*
* \param size The size in bytes of the region to free. This must match the
* size passed to AllocateSharedPages when the pages were allocated.
*/
virtual void FreeSharedPages(Address address, size_t size) = 0;
/**
* Whether this instance can allocate subspaces or not.
*
* \returns true if subspaces can be allocated, false if not.
*/
virtual bool CanAllocateSubspaces() = 0;
/**
* Allocates a subspace.
*
* The address space of a subspace stays reserved in the parent space for the
* lifetime of the subspace. As such, it is guaranteed that page allocations
* on the parent space cannot end up inside a subspace.
*
* \param hint Hints where the subspace should be allocated. See
* AllocatePages() for more details.
*
* \param size The size in bytes of the subspace. Must be a multiple of the
* allocation_granularity().
*
* \param alignment The alignment of the subspace in bytes. Must be a multiple
* of the allocation_granularity() and should be a power of two.
*
* \param max_page_permissions The maximum permissions that pages allocated in
* the subspace can obtain.
*
* \returns a new subspace or nullptr on failure.
*/
virtual std::unique_ptr<VirtualAddressSpace> AllocateSubspace(
Address hint, size_t size, size_t alignment,
PagePermissions max_page_permissions) = 0;
//
// TODO(v8) maybe refactor the methods below before stabilizing the API. For
// example by combining them into some form of page operation method that
// takes a command enum as parameter.
//
/**
* Recommits discarded pages in the given range with given permissions.
* Discarded pages must be recommitted with their original permissions
* before they are used again.
*
* \param address The start address of the range. Must be aligned to
* page_size().
*
* \param size The size in bytes of the range. Must be a multiple
* of page_size().
*
* \param permissions The permissions for the range that the pages must have.
*
* \returns true on success, false otherwise.
*/
virtual V8_WARN_UNUSED_RESULT bool RecommitPages(
Address address, size_t size, PagePermissions permissions) = 0;
/**
* Frees memory in the given [address, address + size) range. address and
* size should be aligned to the page_size(). The next write to this memory
* area brings the memory transparently back. This should be treated as a
* hint to the OS that the pages are no longer needed. It does not guarantee
* that the pages will be discarded immediately or at all.
*
* \returns true on success, false otherwise. Since this method is only a
* hint, a successful invocation does not imply that pages have been removed.
*/
virtual V8_WARN_UNUSED_RESULT bool DiscardSystemPages(Address address,
size_t size) {
return true;
}
/**
* Decommits any wired memory pages in the given range, allowing the OS to
* reclaim them, and marks the region as inaccessible (kNoAccess). The address
* range stays reserved and can be accessed again later by changing its
* permissions. However, in that case the memory content is guaranteed to be
* zero-initialized again. The memory must have been previously allocated by a
* call to AllocatePages.
*
* \returns true on success, false otherwise.
*/
virtual V8_WARN_UNUSED_RESULT bool DecommitPages(Address address,
size_t size) = 0;
private:
const size_t page_size_;
const size_t allocation_granularity_;
const Address base_;
const size_t size_;
const PagePermissions max_page_permissions_;
};
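The guard-region and page-permission semantics documented above can be sketched with POSIX primitives. This is a conceptual stand-in only (mmap/mprotect on Linux), not V8's actual VirtualAddressSpace backend: a guard region behaves like a reserved PROT_NONE mapping, and SetPagePermissions maps onto mprotect.

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstring>

// Conceptual sketch (POSIX, not V8's implementation): a "guard region" is
// reserved, inaccessible address space, and changing page permissions maps
// onto mprotect(). Any access to the PROT_NONE mapping would fault, matching
// the AllocateGuardRegion() contract above.
bool GuardThenCommitDemo() {
  const size_t page = static_cast<size_t>(sysconf(_SC_PAGESIZE));
  // Reserve one inaccessible page: the moral equivalent of a guard region.
  void* p = mmap(nullptr, page, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED) return false;
  // "SetPagePermissions": make the page read-write so it can be used.
  if (mprotect(p, page, PROT_READ | PROT_WRITE) != 0) {
    munmap(p, page);
    return false;
  }
  std::memset(p, 0xAB, page);
  bool ok = static_cast<unsigned char*>(p)[0] == 0xAB;
  // "FreePages": return the region to the OS.
  munmap(p, page);
  return ok;
}
```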
/**
* V8 Allocator used for allocating zone backings.
*/
class ZoneBackingAllocator {
public:
using MallocFn = void* (*)(size_t);
using FreeFn = void (*)(void*);
virtual MallocFn GetMallocFn() const { return ::malloc; }
virtual FreeFn GetFreeFn() const { return ::free; }
};
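Because ZoneBackingAllocator hands out plain function pointers rather than virtual allocation methods, a custom zone allocator has to route through stateless functions. A minimal sketch, re-declaring the tiny interface locally so it compiles standalone (the counting globals are illustrative, not part of V8):

```cpp
#include <cstdlib>
#include <cstddef>

// Local re-declaration of the interface above so the sketch is standalone.
class ZoneBackingAllocator {
 public:
  using MallocFn = void* (*)(size_t);
  using FreeFn = void (*)(void*);
  virtual MallocFn GetMallocFn() const { return ::malloc; }
  virtual FreeFn GetFreeFn() const { return ::free; }
};

// The hooks are plain function pointers, so state must live outside them
// (here: a global live-allocation counter, for illustration only).
static size_t g_live_allocations = 0;

static void* CountingMalloc(size_t size) {
  ++g_live_allocations;
  return ::malloc(size);
}

static void CountingFree(void* ptr) {
  if (ptr != nullptr) --g_live_allocations;
  ::free(ptr);
}

class CountingZoneBackingAllocator : public ZoneBackingAllocator {
 public:
  MallocFn GetMallocFn() const override { return CountingMalloc; }
  FreeFn GetFreeFn() const override { return CountingFree; }
};
```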
/**
* Observer used by V8 to notify the embedder about entering/leaving sections
* with high throughput of malloc/free operations.
*/
class HighAllocationThroughputObserver {
public:
virtual void EnterSection() {}
virtual void LeaveSection() {}
};
/**
* V8 Platform abstraction layer.
*
@ -510,12 +998,28 @@ class Platform {
/**
* Allows the embedder to manage memory page allocations.
* Returning nullptr will cause V8 to use the default page allocator.
*/
virtual PageAllocator* GetPageAllocator() = 0;
/**
* Allows the embedder to provide an allocator that uses per-thread memory
* permissions to protect allocations.
* Returning nullptr will cause V8 to disable protections that rely on this
* feature.
*/
virtual ThreadIsolatedAllocator* GetThreadIsolatedAllocator() {
return nullptr;
}
/**
* Allows the embedder to specify a custom allocator used for zones.
*/
virtual ZoneBackingAllocator* GetZoneBackingAllocator() {
static ZoneBackingAllocator default_allocator;
return &default_allocator;
}
/**
* Enables the embedder to respond in cases where V8 can't allocate large
* blocks of memory. V8 retries the failed allocation once after calling this
@ -523,28 +1027,15 @@ class Platform {
* error.
* Embedder overrides of this function must NOT call back into V8.
*/
virtual void OnCriticalMemoryPressure() {}
/**
* Gets the max number of worker threads that may be used to execute
* concurrent work scheduled for any single TaskPriority by
* Call(BlockingTask)OnWorkerThread() or PostJob(). This can be used to
* estimate the number of tasks a work package should be split into. A return
* value of 0 means that there are no worker threads available. Note that a
* value of 0 won't prohibit V8 from posting tasks using |CallOnWorkerThread|.
*/
virtual int NumberOfWorkerThreads() = 0;
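The doc comment above suggests using the worker-thread count to decide how many chunks a work package should be split into. A standalone sketch of that sizing logic (plain C++, with the worker count passed in; in an embedder it would come from NumberOfWorkerThreads()):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Splits |total_items| into roughly equal chunks, one per worker. A worker
// count of 0 (no worker threads available) degrades to a single chunk,
// mirroring the note above that 0 does not prohibit posting tasks.
std::vector<size_t> SplitWork(size_t total_items, int worker_count) {
  const size_t chunks = static_cast<size_t>(std::max(worker_count, 1));
  std::vector<size_t> sizes(chunks, total_items / chunks);
  // Distribute the remainder over the first chunks.
  for (size_t i = 0; i < total_items % chunks; ++i) ++sizes[i];
  return sizes;
}
```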
@ -558,12 +1049,23 @@ class Platform {
/**
* Schedules a task to be invoked on a worker thread.
* Embedders should override PostTaskOnWorkerThreadImpl() instead of
* CallOnWorkerThread().
* TODO(chromium:1424158): Make non-virtual once embedders are migrated to
* PostTaskOnWorkerThreadImpl().
*/
virtual void CallOnWorkerThread(std::unique_ptr<Task> task) {
PostTaskOnWorkerThreadImpl(TaskPriority::kUserVisible, std::move(task),
SourceLocation::Current());
}
/**
* Schedules a task that blocks the main thread to be invoked with
* high-priority on a worker thread.
* Embedders should override PostTaskOnWorkerThreadImpl() instead of
* CallBlockingTaskOnWorkerThread().
* TODO(chromium:1424158): Make non-virtual once embedders are migrated to
* PostTaskOnWorkerThreadImpl().
*/
virtual void CallBlockingTaskOnWorkerThread(std::unique_ptr<Task> task) {
// Embedders may optionally override this to process these tasks in a high
@ -573,6 +1075,10 @@ class Platform {
/**
* Schedules a task to be invoked with low-priority on a worker thread.
* Embedders should override PostTaskOnWorkerThreadImpl() instead of
* CallLowPriorityTaskOnWorkerThread().
* TODO(chromium:1424158): Make non-virtual once embedders are migrated to
* PostTaskOnWorkerThreadImpl().
*/
virtual void CallLowPriorityTaskOnWorkerThread(std::unique_ptr<Task> task) {
// Embedders may optionally override this to process these tasks in a low
@ -583,9 +1089,17 @@ class Platform {
/**
* Schedules a task to be invoked on a worker thread after |delay_in_seconds|
* expires.
* Embedders should override PostDelayedTaskOnWorkerThreadImpl() instead of
* CallDelayedOnWorkerThread().
* TODO(chromium:1424158): Make non-virtual once embedders are migrated to
* PostDelayedTaskOnWorkerThreadImpl().
*/
virtual void CallDelayedOnWorkerThread(std::unique_ptr<Task> task,
double delay_in_seconds) {
PostDelayedTaskOnWorkerThreadImpl(TaskPriority::kUserVisible,
std::move(task), delay_in_seconds,
SourceLocation::Current());
}
/**
* Returns true if idle tasks are enabled for the given |isolate|.
@ -635,17 +1149,47 @@ class Platform {
* thread (A=>B/B=>A deadlock) and [2] JobTask::Run or
* JobTask::GetMaxConcurrency may be invoked synchronously from JobHandle
* (B=>JobHandle::foo=>B deadlock).
* Embedders should override CreateJobImpl() instead of PostJob().
* TODO(chromium:1424158): Make non-virtual once embedders are migrated to
* CreateJobImpl().
*/
virtual std::unique_ptr<JobHandle> PostJob(
TaskPriority priority, std::unique_ptr<JobTask> job_task) {
auto handle = CreateJob(priority, std::move(job_task));
handle->NotifyConcurrencyIncrease();
return handle;
}
/**
* Creates and returns a JobHandle associated with a Job. Unlike PostJob(),
* this doesn't immediately schedules |worker_task| to run; the Job is then
* scheduled by calling either NotifyConcurrencyIncrease() or Join().
*
* A sufficient CreateJob() implementation that uses the default Job provided
* in libplatform looks like:
* std::unique_ptr<JobHandle> CreateJob(
* TaskPriority priority, std::unique_ptr<JobTask> job_task) override {
* return v8::platform::NewDefaultJobHandle(
* this, priority, std::move(job_task), NumberOfWorkerThreads());
* }
*
* Embedders should override CreateJobImpl() instead of CreateJob().
* TODO(chromium:1424158): Make non-virtual once embedders are migrated to
* CreateJobImpl().
*/
virtual std::unique_ptr<JobHandle> CreateJob(
TaskPriority priority, std::unique_ptr<JobTask> job_task) {
return CreateJobImpl(priority, std::move(job_task),
SourceLocation::Current());
}
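The PostJob/CreateJob arrangement above, where a public virtual method gains a default body forwarding to a protected *Impl hook during a migration (see the TODO(chromium:1424158) notes), is a common pattern. A simplified standalone sketch with illustrative names, not V8's real types:

```cpp
#include <string>

// Migration pattern sketch: new embedders override the protected hook
// (CreateJobImpl), while legacy embedders that still override the public
// virtual method keep working until the public method is made non-virtual.
// All names here are illustrative, not V8's.
class PlatformSketch {
 public:
  virtual ~PlatformSketch() = default;
  // Legacy entry point; the default body forwards to the new hook.
  virtual std::string CreateJob() { return CreateJobImpl(); }

 protected:
  virtual std::string CreateJobImpl() { return "no impl"; }
};

class NewEmbedder : public PlatformSketch {
 protected:
  std::string CreateJobImpl() override { return "impl job"; }
};

class LegacyEmbedder : public PlatformSketch {
 public:
  std::string CreateJob() override { return "legacy job"; }
};
```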
/**
* Instantiates a ScopedBlockingCall to annotate a scope that may/will block.
*/
virtual std::unique_ptr<ScopedBlockingCall> CreateBlockingScope(
BlockingType blocking_type) {
return nullptr;
}
/**
* Monotonically increasing time in seconds from an arbitrary fixed point in
@ -657,11 +1201,28 @@ class Platform {
virtual double MonotonicallyIncreasingTime() = 0;
/**
* Current wall-clock time in milliseconds since epoch. Use
* CurrentClockTimeMillisecondsHighResolution() when higher precision is
* required.
*/
virtual int64_t CurrentClockTimeMilliseconds() {
return floor(CurrentClockTimeMillis());
}
/**
* This function is deprecated and will be deleted. Use either
* CurrentClockTimeMilliseconds() or
* CurrentClockTimeMillisecondsHighResolution().
*/
virtual double CurrentClockTimeMillis() = 0;
/**
* Same as CurrentClockTimeMilliseconds(), but with more precision.
*/
virtual double CurrentClockTimeMillisecondsHighResolution() {
return CurrentClockTimeMillis();
}
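The clock methods above distinguish a monotonic clock (for measuring durations) from the wall clock (time since epoch), and the default CurrentClockTimeMilliseconds() body floors the double value to an integer. A standalone std::chrono sketch of those conventions (not V8's implementation):

```cpp
#include <chrono>
#include <cmath>
#include <cstdint>

// Wall-clock milliseconds since epoch as a double, analogous to
// CurrentClockTimeMillis() above.
double WallClockMillis() {
  using namespace std::chrono;
  return duration<double, std::milli>(system_clock::now().time_since_epoch())
      .count();
}

// The floor-to-integer conversion used by the default
// CurrentClockTimeMilliseconds() body above.
int64_t WallClockMillisFloored(double millis) {
  return static_cast<int64_t>(std::floor(millis));
}

// Monotonic time in seconds (cf. MonotonicallyIncreasingTime()): never jumps
// backwards, unlike the wall clock, so it is suitable for durations.
double MonotonicSeconds() {
  using namespace std::chrono;
  return duration<double>(steady_clock::now().time_since_epoch()).count();
}
```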
typedef void (*StackTracePrinter)();
/**
@ -681,6 +1242,16 @@ class Platform {
*/
virtual void DumpWithoutCrashing() {}
/**
* Allows the embedder to observe sections with high throughput allocation
* operations.
*/
virtual HighAllocationThroughputObserver*
GetHighAllocationThroughputObserver() {
static HighAllocationThroughputObserver default_observer;
return &default_observer;
}
protected:
/**
* Default implementation of current wall-clock time in milliseconds
@ -688,6 +1259,33 @@ class Platform {
* nothing special needed.
*/
V8_EXPORT static double SystemClockTimeMillis();
/**
* Creates and returns a JobHandle associated with a Job.
* TODO(chromium:1424158): Make pure virtual once embedders implement it.
*/
virtual std::unique_ptr<JobHandle> CreateJobImpl(
TaskPriority priority, std::unique_ptr<JobTask> job_task,
const SourceLocation& location) {
return nullptr;
}
/**
* Schedules a task with |priority| to be invoked on a worker thread.
* TODO(chromium:1424158): Make pure virtual once embedders implement it.
*/
virtual void PostTaskOnWorkerThreadImpl(TaskPriority priority,
std::unique_ptr<Task> task,
const SourceLocation& location) {}
/**
* Schedules a task with |priority| to be invoked on a worker thread after
* |delay_in_seconds| expires.
* TODO(chromium:1424158): Make pure virtual once embedders implement it.
*/
virtual void PostDelayedTaskOnWorkerThreadImpl(
TaskPriority priority, std::unique_ptr<Task> task,
double delay_in_seconds, const SourceLocation& location) {}
};
} // namespace v8

View File

@ -0,0 +1,118 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_PRIMITIVE_OBJECT_H_
#define INCLUDE_V8_PRIMITIVE_OBJECT_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Isolate;
/**
* A Number object (ECMA-262, 4.3.21).
*/
class V8_EXPORT NumberObject : public Object {
public:
static Local<Value> New(Isolate* isolate, double value);
double ValueOf() const;
V8_INLINE static NumberObject* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<NumberObject*>(value);
}
private:
static void CheckCast(Value* obj);
};
/**
* A BigInt object (https://tc39.github.io/proposal-bigint)
*/
class V8_EXPORT BigIntObject : public Object {
public:
static Local<Value> New(Isolate* isolate, int64_t value);
Local<BigInt> ValueOf() const;
V8_INLINE static BigIntObject* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<BigIntObject*>(value);
}
private:
static void CheckCast(Value* obj);
};
/**
* A Boolean object (ECMA-262, 4.3.15).
*/
class V8_EXPORT BooleanObject : public Object {
public:
static Local<Value> New(Isolate* isolate, bool value);
bool ValueOf() const;
V8_INLINE static BooleanObject* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<BooleanObject*>(value);
}
private:
static void CheckCast(Value* obj);
};
/**
* A String object (ECMA-262, 4.3.18).
*/
class V8_EXPORT StringObject : public Object {
public:
static Local<Value> New(Isolate* isolate, Local<String> value);
Local<String> ValueOf() const;
V8_INLINE static StringObject* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<StringObject*>(value);
}
private:
static void CheckCast(Value* obj);
};
/**
* A Symbol object (ECMA-262 edition 6).
*/
class V8_EXPORT SymbolObject : public Object {
public:
static Local<Value> New(Isolate* isolate, Local<Symbol> value);
Local<Symbol> ValueOf() const;
V8_INLINE static SymbolObject* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<SymbolObject*>(value);
}
private:
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_PRIMITIVE_OBJECT_H_

View File

@ -0,0 +1,867 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_PRIMITIVE_H_
#define INCLUDE_V8_PRIMITIVE_H_
#include "v8-data.h" // NOLINT(build/include_directory)
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-value.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Isolate;
class String;
namespace internal {
class ExternalString;
class ScopedExternalStringLock;
class StringForwardingTable;
} // namespace internal
/**
* The superclass of primitive values. See ECMA-262 4.3.2.
*/
class V8_EXPORT Primitive : public Value {};
/**
* A primitive boolean value (ECMA-262, 4.3.14). Either the true
* or false value.
*/
class V8_EXPORT Boolean : public Primitive {
public:
bool Value() const;
V8_INLINE static Boolean* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Boolean*>(data);
}
V8_INLINE static Local<Boolean> New(Isolate* isolate, bool value);
private:
static void CheckCast(v8::Data* that);
};
/**
* An array to hold Primitive values. This is used by the embedder to
* pass host defined options to the ScriptOptions during compilation.
*
* This is passed back to the embedder as part of
* HostImportModuleDynamicallyCallback for module loading.
*/
class V8_EXPORT PrimitiveArray : public Data {
public:
static Local<PrimitiveArray> New(Isolate* isolate, int length);
int Length() const;
void Set(Isolate* isolate, int index, Local<Primitive> item);
Local<Primitive> Get(Isolate* isolate, int index);
V8_INLINE static PrimitiveArray* Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return reinterpret_cast<PrimitiveArray*>(data);
}
private:
static void CheckCast(Data* obj);
};
/**
* A superclass for symbols and strings.
*/
class V8_EXPORT Name : public Primitive {
public:
/**
* Returns the identity hash for this object. The current implementation
* uses an inline property on the object to store the identity hash.
*
* The return value will never be 0. Also, it is not guaranteed to be
* unique.
*/
int GetIdentityHash();
V8_INLINE static Name* Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Name*>(data);
}
private:
static void CheckCast(Data* that);
};
/**
* A flag describing different modes of string creation.
*
* Aside from performance implications there are no differences between the two
* creation modes.
*/
enum class NewStringType {
/**
* Create a new string, always allocating new storage memory.
*/
kNormal,
/**
* Acts as a hint that the string should be created in the
* old generation heap space and be deduplicated if an identical string
* already exists.
*/
kInternalized
};
/**
* A JavaScript string value (ECMA-262, 4.3.17).
*/
class V8_EXPORT String : public Name {
public:
static constexpr int kMaxLength =
internal::kApiSystemPointerSize == 4 ? (1 << 28) - 16 : (1 << 29) - 24;
enum Encoding {
UNKNOWN_ENCODING = 0x1,
TWO_BYTE_ENCODING = 0x0,
ONE_BYTE_ENCODING = 0x8
};
/**
* Returns the number of characters (UTF-16 code units) in this string.
*/
int Length() const;
/**
* Returns the number of bytes in the UTF-8 encoded
* representation of this string.
*/
int Utf8Length(Isolate* isolate) const;
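Length() counts UTF-16 code units while Utf8Length() counts bytes, and the two generally differ. A standalone sketch of why (well-formed input assumed; this is not V8's implementation): one UTF-16 unit encodes to 1-3 UTF-8 bytes, and a surrogate pair (2 units) encodes to 4.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Counts the bytes a UTF-16 sequence needs when encoded as UTF-8.
size_t Utf8LengthOf(const std::vector<uint16_t>& units) {
  size_t bytes = 0;
  for (size_t i = 0; i < units.size(); ++i) {
    uint16_t u = units[i];
    if (u < 0x80) {
      bytes += 1;  // ASCII.
    } else if (u < 0x800) {
      bytes += 2;  // E.g. Latin-1 beyond ASCII.
    } else if (u >= 0xD800 && u <= 0xDBFF && i + 1 < units.size()) {
      bytes += 4;  // High surrogate starting a pair -> one 4-byte code point.
      ++i;         // Skip the low surrogate.
    } else {
      bytes += 3;  // Rest of the Basic Multilingual Plane.
    }
  }
  return bytes;
}
```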
/**
* Returns whether this string is known to contain only one-byte data,
* i.e. ISO-8859-1 code points.
* Does not read the string.
* False negatives are possible.
*/
bool IsOneByte() const;
/**
* Returns whether this string contains only one-byte data,
* i.e. ISO-8859-1 code points.
* Will read the entire string in some cases.
*/
bool ContainsOnlyOneByte() const;
/**
* Write the contents of the string to an external buffer.
* If no arguments are given, expects the buffer to be large
* enough to hold the entire string and NULL terminator. Copies
* the contents of the string and the NULL terminator into the
* buffer.
*
* WriteUtf8 will not write partial UTF-8 sequences, preferring to stop
* before the end of the buffer.
*
* Copies up to length characters into the output buffer.
* Only null-terminates if there is enough space in the buffer.
*
* \param buffer The buffer into which the string will be copied.
* \param start The starting position within the string at which
* copying begins.
* \param length The number of characters to copy from the string. For
* WriteUtf8 the number of bytes in the buffer.
* \param nchars_ref The number of characters written, can be NULL.
* \param options Various options that might affect performance of this or
* subsequent operations.
* \return The number of characters copied to the buffer excluding the null
* terminator. For WriteUtf8: The number of bytes copied to the buffer
* including the null terminator (if written).
*/
enum WriteOptions {
NO_OPTIONS = 0,
HINT_MANY_WRITES_EXPECTED = 1,
NO_NULL_TERMINATION = 2,
PRESERVE_ONE_BYTE_NULL = 4,
// Used by WriteUtf8 to replace orphan surrogate code units with the
// unicode replacement character. Needs to be set to guarantee valid UTF-8
// output.
REPLACE_INVALID_UTF8 = 8
};
// 16-bit character codes.
int Write(Isolate* isolate, uint16_t* buffer, int start = 0, int length = -1,
int options = NO_OPTIONS) const;
// One byte characters.
int WriteOneByte(Isolate* isolate, uint8_t* buffer, int start = 0,
int length = -1, int options = NO_OPTIONS) const;
// UTF-8 encoded characters.
int WriteUtf8(Isolate* isolate, char* buffer, int length = -1,
int* nchars_ref = nullptr, int options = NO_OPTIONS) const;
/**
* A zero length string.
*/
V8_INLINE static Local<String> Empty(Isolate* isolate);
/**
* Returns true if the string is external.
*/
bool IsExternal() const;
/**
* Returns true if the string is both external and two-byte.
*/
bool IsExternalTwoByte() const;
/**
* Returns true if the string is both external and one-byte.
*/
bool IsExternalOneByte() const;
class V8_EXPORT ExternalStringResourceBase {
public:
virtual ~ExternalStringResourceBase() = default;
/**
* If a string is cacheable, the value returned by
* ExternalStringResource::data() may be cached, otherwise it is not
* expected to be stable beyond the current top-level task.
*/
virtual bool IsCacheable() const { return true; }
// Disallow copying and assigning.
ExternalStringResourceBase(const ExternalStringResourceBase&) = delete;
void operator=(const ExternalStringResourceBase&) = delete;
protected:
ExternalStringResourceBase() = default;
/**
* Internally V8 will call this Dispose method when the external string
* resource is no longer needed. The default implementation will use the
* delete operator. This method can be overridden in subclasses to
* control how allocated external string resources are disposed.
*/
virtual void Dispose() { delete this; }
/**
* For a non-cacheable string, the value returned by
* |ExternalStringResource::data()| has to be stable between |Lock()| and
* |Unlock()|, that is, the string must behave as if |IsCacheable()| returned
* true.
*
* These two functions must be thread-safe, and can be called from anywhere.
* They also must handle lock depth, in the sense that each can be called
* several times, from different threads, and unlocking should only happen
* when the balance of Lock() and Unlock() calls is 0.
*/
virtual void Lock() const {}
/**
* Unlocks the string.
*/
virtual void Unlock() const {}
private:
friend class internal::ExternalString;
friend class v8::String;
friend class internal::StringForwardingTable;
friend class internal::ScopedExternalStringLock;
};
/**
* An ExternalStringResource is a wrapper around a two-byte string
* buffer that resides outside V8's heap. Implement an
* ExternalStringResource to manage the life cycle of the underlying
* buffer. Note that the string data must be immutable.
*/
class V8_EXPORT ExternalStringResource : public ExternalStringResourceBase {
public:
/**
* Override the destructor to manage the life cycle of the underlying
* buffer.
*/
~ExternalStringResource() override = default;
/**
* The string data from the underlying buffer. If the resource is cacheable
* then data() must return the same value for all invocations.
*/
virtual const uint16_t* data() const = 0;
/**
* The length of the string. That is, the number of two-byte characters.
*/
virtual size_t length() const = 0;
/**
* Returns the cached data from the underlying buffer. This method can be
* called only for cacheable resources (i.e. IsCacheable() == true) and only
* after UpdateDataCache() was called.
*/
const uint16_t* cached_data() const {
CheckCachedDataInvariants();
return cached_data_;
}
/**
* Update {cached_data_} with the data from the underlying buffer. This can
* be called only for cacheable resources.
*/
void UpdateDataCache();
protected:
ExternalStringResource() = default;
private:
void CheckCachedDataInvariants() const;
const uint16_t* cached_data_ = nullptr;
};
/**
* An ExternalOneByteStringResource is a wrapper around an one-byte
* string buffer that resides outside V8's heap. Implement an
* ExternalOneByteStringResource to manage the life cycle of the
* underlying buffer. Note that the string data must be immutable
* and that the data must be Latin-1 and not UTF-8, which would require
* special treatment internally in the engine and would not allow efficient
* indexing. Use String::New or convert to 16-bit data for non-Latin-1.
*/
class V8_EXPORT ExternalOneByteStringResource
: public ExternalStringResourceBase {
public:
/**
* Override the destructor to manage the life cycle of the underlying
* buffer.
*/
~ExternalOneByteStringResource() override = default;
/**
* The string data from the underlying buffer. If the resource is cacheable
* then data() must return the same value for all invocations.
*/
virtual const char* data() const = 0;
/** The number of Latin-1 characters in the string.*/
virtual size_t length() const = 0;
/**
* Returns the cached data from the underlying buffer. If the resource is
* uncacheable, or if UpdateDataCache() was not called before, the behaviour
* is undefined.
*/
const char* cached_data() const {
CheckCachedDataInvariants();
return cached_data_;
}
/**
* Update {cached_data_} with the data from the underlying buffer. This can
* be called only for cacheable resources.
*/
void UpdateDataCache();
protected:
ExternalOneByteStringResource() = default;
private:
void CheckCachedDataInvariants() const;
const char* cached_data_ = nullptr;
};
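The resource life cycle described above — the engine calls Dispose() when the external string dies, and the default Dispose() deletes the resource, whose destructor then frees the buffer — can be sketched standalone. The classes below are simplified local stand-ins, not V8's (the `freed_flag` is purely for observing the destruction in the sketch):

```cpp
#include <cstddef>
#include <cstring>
#include <string>

// Simplified stand-in for ExternalOneByteStringResource's contract.
class OneByteResourceSketch {
 public:
  virtual ~OneByteResourceSketch() = default;
  virtual const char* data() const = 0;
  virtual size_t length() const = 0;
  // The engine would call this when the external string is collected; the
  // default implementation uses the delete operator, as documented above.
  virtual void Dispose() { delete this; }
};

// A resource owning a heap buffer, freed via the Dispose() -> destructor path.
class HeapBufferResource : public OneByteResourceSketch {
 public:
  HeapBufferResource(const char* src, size_t len, bool* freed_flag)
      : data_(new char[len]), length_(len), freed_flag_(freed_flag) {
    std::memcpy(data_, src, len);
  }
  ~HeapBufferResource() override {
    delete[] data_;
    *freed_flag_ = true;  // Observable for the sketch only.
  }
  const char* data() const override { return data_; }
  size_t length() const override { return length_; }

 private:
  char* data_;
  size_t length_;
  bool* freed_flag_;
};
```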
/**
* If the string is an external string, return the ExternalStringResourceBase
* regardless of the encoding, otherwise return NULL. The encoding of the
* string is returned in encoding_out.
*/
V8_INLINE ExternalStringResourceBase* GetExternalStringResourceBase(
Encoding* encoding_out) const;
/**
* Get the ExternalStringResource for an external string. Returns
* NULL if IsExternal() doesn't return true.
*/
V8_INLINE ExternalStringResource* GetExternalStringResource() const;
/**
* Get the ExternalOneByteStringResource for an external one-byte string.
* Returns NULL if IsExternalOneByte() doesn't return true.
*/
const ExternalOneByteStringResource* GetExternalOneByteStringResource() const;
V8_INLINE static String* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<String*>(data);
}
/**
* Allocates a new string from a UTF-8 literal. This is equivalent to calling
* String::NewFromUtf8(isolate, "...").ToLocalChecked(), but without the check
* overhead.
*
* When called on a string literal containing '\0', the inferred length is the
* length of the input array minus 1 (for the final '\0') and not the value
* returned by strlen.
**/
template <int N>
static V8_WARN_UNUSED_RESULT Local<String> NewFromUtf8Literal(
Isolate* isolate, const char (&literal)[N],
NewStringType type = NewStringType::kNormal) {
static_assert(N <= kMaxLength, "String is too long");
return NewFromUtf8Literal(isolate, literal, type, N - 1);
}
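The length inference in NewFromUtf8Literal above can be demonstrated standalone: for a string literal of array size N, the inferred length is N - 1 (dropping only the final '\0'), so embedded '\0' characters are counted, unlike strlen(). A minimal sketch:

```cpp
#include <cstddef>

// Mirrors the compile-time length inference above: template argument
// deduction sees the literal's full array size N, including embedded '\0'
// characters, and the logical length is N - 1.
template <size_t N>
constexpr size_t LiteralLength(const char (&)[N]) {
  return N - 1;
}
```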
/** Allocates a new string from UTF-8 data. Only returns an empty value when
* length > kMaxLength. **/
static V8_WARN_UNUSED_RESULT MaybeLocal<String> NewFromUtf8(
Isolate* isolate, const char* data,
NewStringType type = NewStringType::kNormal, int length = -1);
/** Allocates a new string from Latin-1 data. Only returns an empty value
* when length > kMaxLength. **/
static V8_WARN_UNUSED_RESULT MaybeLocal<String> NewFromOneByte(
Isolate* isolate, const uint8_t* data,
NewStringType type = NewStringType::kNormal, int length = -1);
/** Allocates a new string from UTF-16 data. Only returns an empty value when
* length > kMaxLength. **/
static V8_WARN_UNUSED_RESULT MaybeLocal<String> NewFromTwoByte(
Isolate* isolate, const uint16_t* data,
NewStringType type = NewStringType::kNormal, int length = -1);
/**
* Creates a new string by concatenating the left and the right strings
* passed in as parameters.
*/
static Local<String> Concat(Isolate* isolate, Local<String> left,
Local<String> right);
/**
* Creates a new external string using the data defined in the given
* resource. When the external string is no longer live on V8's heap the
* resource will be disposed by calling its Dispose method. The caller of
* this function should not otherwise delete or modify the resource. Neither
* should the underlying buffer be deallocated or modified except through the
* destructor of the external string resource.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<String> NewExternalTwoByte(
Isolate* isolate, ExternalStringResource* resource);
/**
* Associate an external string resource with this string by transforming it
* in place so that existing references to this string in the JavaScript heap
* will use the external string resource. The external string resource's
* character contents need to be equivalent to this string.
* Returns true if the string has been changed to be an external string.
* The string is not modified if the operation fails. See NewExternal for
* information on the lifetime of the resource.
*/
bool MakeExternal(ExternalStringResource* resource);
/**
* Creates a new external string using the one-byte data defined in the given
* resource. When the external string is no longer live on V8's heap the
* resource will be disposed by calling its Dispose method. The caller of
* this function should not otherwise delete or modify the resource. Neither
* should the underlying buffer be deallocated or modified except through the
* destructor of the external string resource.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<String> NewExternalOneByte(
Isolate* isolate, ExternalOneByteStringResource* resource);
/**
* Associate an external string resource with this string by transforming it
* in place so that existing references to this string in the JavaScript heap
* will use the external string resource. The external string resource's
* character contents need to be equivalent to this string.
* Returns true if the string has been changed to be an external string.
* The string is not modified if the operation fails. See NewExternal for
* information on the lifetime of the resource.
*/
bool MakeExternal(ExternalOneByteStringResource* resource);
/**
* Returns true if this string can be made external, given the encoding for
* the external string resource.
*/
bool CanMakeExternal(Encoding encoding) const;
/**
* Returns true if the strings' values are equal. Same as JS ==/===.
*/
bool StringEquals(Local<String> str) const;
/**
* Converts an object to a UTF-8-encoded character array. Useful if
* you want to print the object. If conversion to a string fails
* (e.g. due to an exception in the toString() method of the object)
* then the length() method returns 0 and the * operator returns
* NULL.
*/
class V8_EXPORT Utf8Value {
public:
Utf8Value(Isolate* isolate, Local<v8::Value> obj);
~Utf8Value();
char* operator*() { return str_; }
const char* operator*() const { return str_; }
int length() const { return length_; }
// Disallow copying and assigning.
Utf8Value(const Utf8Value&) = delete;
void operator=(const Utf8Value&) = delete;
private:
char* str_;
int length_;
};
/**
* Converts an object to a two-byte (UTF-16-encoded) string.
* If conversion to a string fails (e.g. due to an exception in the toString()
* method of the object) then the length() method returns 0 and the * operator
* returns NULL.
*/
class V8_EXPORT Value {
public:
Value(Isolate* isolate, Local<v8::Value> obj);
~Value();
uint16_t* operator*() { return str_; }
const uint16_t* operator*() const { return str_; }
int length() const { return length_; }
// Disallow copying and assigning.
Value(const Value&) = delete;
void operator=(const Value&) = delete;
private:
uint16_t* str_;
int length_;
};
private:
void VerifyExternalStringResourceBase(ExternalStringResourceBase* v,
Encoding encoding) const;
void VerifyExternalStringResource(ExternalStringResource* val) const;
ExternalStringResource* GetExternalStringResourceSlow() const;
ExternalStringResourceBase* GetExternalStringResourceBaseSlow(
String::Encoding* encoding_out) const;
static Local<v8::String> NewFromUtf8Literal(Isolate* isolate,
const char* literal,
NewStringType type, int length);
static void CheckCast(v8::Data* that);
};
// Zero-length string specialization (templated string size includes
// terminator).
template <>
inline V8_WARN_UNUSED_RESULT Local<String> String::NewFromUtf8Literal(
Isolate* isolate, const char (&literal)[1], NewStringType type) {
return String::Empty(isolate);
}
/**
* Interface for iterating through all external resources in the heap.
*/
class V8_EXPORT ExternalResourceVisitor {
public:
virtual ~ExternalResourceVisitor() = default;
virtual void VisitExternalString(Local<String> string) {}
};
/**
* A JavaScript symbol (ECMA-262 edition 6)
*/
class V8_EXPORT Symbol : public Name {
public:
/**
* Returns the description string of the symbol, or undefined if none.
*/
Local<Value> Description(Isolate* isolate) const;
/**
* Create a symbol. If description is not empty, it will be used as the
* description.
*/
static Local<Symbol> New(Isolate* isolate,
Local<String> description = Local<String>());
/**
* Access global symbol registry.
* Note that symbols created this way are never collected, so
* they should only be used for statically fixed properties.
* Also, there is only one global name space for the descriptions used as
* keys.
* To minimize the potential for clashes, use qualified names as keys.
*/
static Local<Symbol> For(Isolate* isolate, Local<String> description);
/**
* Retrieve a global symbol. Similar to |For|, but using a separate
* registry that is not accessible by (and cannot clash with) JavaScript code.
*/
static Local<Symbol> ForApi(Isolate* isolate, Local<String> description);
// Well-known symbols
static Local<Symbol> GetAsyncIterator(Isolate* isolate);
static Local<Symbol> GetHasInstance(Isolate* isolate);
static Local<Symbol> GetIsConcatSpreadable(Isolate* isolate);
static Local<Symbol> GetIterator(Isolate* isolate);
static Local<Symbol> GetMatch(Isolate* isolate);
static Local<Symbol> GetReplace(Isolate* isolate);
static Local<Symbol> GetSearch(Isolate* isolate);
static Local<Symbol> GetSplit(Isolate* isolate);
static Local<Symbol> GetToPrimitive(Isolate* isolate);
static Local<Symbol> GetToStringTag(Isolate* isolate);
static Local<Symbol> GetUnscopables(Isolate* isolate);
V8_INLINE static Symbol* Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Symbol*>(data);
}
private:
Symbol();
static void CheckCast(Data* that);
};
/**
* A JavaScript number value (ECMA-262, 4.3.20)
*/
class V8_EXPORT Number : public Primitive {
public:
double Value() const;
static Local<Number> New(Isolate* isolate, double value);
V8_INLINE static Number* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Number*>(data);
}
private:
Number();
static void CheckCast(v8::Data* that);
};
/**
* A JavaScript value representing a signed integer.
*/
class V8_EXPORT Integer : public Number {
public:
static Local<Integer> New(Isolate* isolate, int32_t value);
static Local<Integer> NewFromUnsigned(Isolate* isolate, uint32_t value);
int64_t Value() const;
V8_INLINE static Integer* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Integer*>(data);
}
private:
Integer();
static void CheckCast(v8::Data* that);
};
/**
* A JavaScript value representing a 32-bit signed integer.
*/
class V8_EXPORT Int32 : public Integer {
public:
int32_t Value() const;
V8_INLINE static Int32* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Int32*>(data);
}
private:
Int32();
static void CheckCast(v8::Data* that);
};
/**
* A JavaScript value representing a 32-bit unsigned integer.
*/
class V8_EXPORT Uint32 : public Integer {
public:
uint32_t Value() const;
V8_INLINE static Uint32* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<Uint32*>(data);
}
private:
Uint32();
static void CheckCast(v8::Data* that);
};
/**
* A JavaScript BigInt value (https://tc39.github.io/proposal-bigint)
*/
class V8_EXPORT BigInt : public Primitive {
public:
static Local<BigInt> New(Isolate* isolate, int64_t value);
static Local<BigInt> NewFromUnsigned(Isolate* isolate, uint64_t value);
/**
* Creates a new BigInt object using a specified sign bit and a
* specified list of digits/words.
* The resulting number is calculated as:
*
* (-1)^sign_bit * (words[0] * (2^64)^0 + words[1] * (2^64)^1 + ...)
*/
static MaybeLocal<BigInt> NewFromWords(Local<Context> context, int sign_bit,
int word_count, const uint64_t* words);
/**
* Returns the value of this BigInt as an unsigned 64-bit integer.
* If `lossless` is provided, it will reflect whether the return value was
* truncated or wrapped around. In particular, it is set to `false` if this
* BigInt is negative.
*/
uint64_t Uint64Value(bool* lossless = nullptr) const;
/**
* Returns the value of this BigInt as a signed 64-bit integer.
* If `lossless` is provided, it will reflect whether this BigInt was
* truncated or not.
*/
int64_t Int64Value(bool* lossless = nullptr) const;
/**
* Returns the number of 64-bit words needed to store the result of
* ToWordsArray().
*/
int WordCount() const;
/**
* Writes the contents of this BigInt to a specified memory location.
* `sign_bit` must be provided and will be set to 1 if this BigInt is
* negative.
* `*word_count` has to be initialized to the length of the `words` array.
* Upon return, it will be set to the actual number of words that would
* be needed to store this BigInt (i.e. the return value of `WordCount()`).
*/
void ToWordsArray(int* sign_bit, int* word_count, uint64_t* words) const;
V8_INLINE static BigInt* Cast(v8::Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return static_cast<BigInt*>(data);
}
private:
BigInt();
static void CheckCast(v8::Data* that);
};
Local<String> String::Empty(Isolate* isolate) {
using S = internal::Address;
using I = internal::Internals;
I::CheckInitialized(isolate);
S* slot = I::GetRootSlot(isolate, I::kEmptyStringRootIndex);
return Local<String>::FromSlot(slot);
}
String::ExternalStringResource* String::GetExternalStringResource() const {
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
ExternalStringResource* result;
if (I::IsExternalTwoByteString(I::GetInstanceType(obj))) {
Isolate* isolate = I::GetIsolateForSandbox(obj);
A value = I::ReadExternalPointerField<internal::kExternalStringResourceTag>(
isolate, obj, I::kStringResourceOffset);
result = reinterpret_cast<String::ExternalStringResource*>(value);
} else {
result = GetExternalStringResourceSlow();
}
#ifdef V8_ENABLE_CHECKS
VerifyExternalStringResource(result);
#endif
return result;
}
String::ExternalStringResourceBase* String::GetExternalStringResourceBase(
String::Encoding* encoding_out) const {
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
int type = I::GetInstanceType(obj) & I::kStringRepresentationAndEncodingMask;
*encoding_out = static_cast<Encoding>(type & I::kStringEncodingMask);
ExternalStringResourceBase* resource;
if (type == I::kExternalOneByteRepresentationTag ||
type == I::kExternalTwoByteRepresentationTag) {
Isolate* isolate = I::GetIsolateForSandbox(obj);
A value = I::ReadExternalPointerField<internal::kExternalStringResourceTag>(
isolate, obj, I::kStringResourceOffset);
resource = reinterpret_cast<ExternalStringResourceBase*>(value);
} else {
resource = GetExternalStringResourceBaseSlow(encoding_out);
}
#ifdef V8_ENABLE_CHECKS
VerifyExternalStringResourceBase(resource, *encoding_out);
#endif
return resource;
}
// --- Statics ---
V8_INLINE Local<Primitive> Undefined(Isolate* isolate) {
using S = internal::Address;
using I = internal::Internals;
I::CheckInitialized(isolate);
S* slot = I::GetRootSlot(isolate, I::kUndefinedValueRootIndex);
return Local<Primitive>::FromSlot(slot);
}
V8_INLINE Local<Primitive> Null(Isolate* isolate) {
using S = internal::Address;
using I = internal::Internals;
I::CheckInitialized(isolate);
S* slot = I::GetRootSlot(isolate, I::kNullValueRootIndex);
return Local<Primitive>::FromSlot(slot);
}
V8_INLINE Local<Boolean> True(Isolate* isolate) {
using S = internal::Address;
using I = internal::Internals;
I::CheckInitialized(isolate);
S* slot = I::GetRootSlot(isolate, I::kTrueValueRootIndex);
return Local<Boolean>::FromSlot(slot);
}
V8_INLINE Local<Boolean> False(Isolate* isolate) {
using S = internal::Address;
using I = internal::Internals;
I::CheckInitialized(isolate);
S* slot = I::GetRootSlot(isolate, I::kFalseValueRootIndex);
return Local<Boolean>::FromSlot(slot);
}
Local<Boolean> Boolean::New(Isolate* isolate, bool value) {
return value ? True(isolate) : False(isolate);
}
} // namespace v8
#endif // INCLUDE_V8_PRIMITIVE_H_

View File

@ -11,18 +11,25 @@
#include <unordered_set>
#include <vector>
#include "v8.h" // NOLINT(build/include_directory)
#include "cppgc/common.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-message.h" // NOLINT(build/include_directory)
#include "v8-persistent-handle.h" // NOLINT(build/include_directory)
/**
* Profiler support for the V8 JavaScript engine.
*/
namespace v8 {
enum class EmbedderStateTag : uint8_t;
class HeapGraphNode;
struct HeapStatsUpdate;
class Object;
enum StateTag : uint16_t;
using NativeObject = void*;
using SnapshotObjectId = uint32_t;
using ProfilerId = uint32_t;
struct CpuProfileDeoptFrame {
int script_id;
@ -169,6 +176,32 @@ class V8_EXPORT CpuProfileNode {
static const int kNoColumnNumberInfo = Message::kNoColumnInfo;
};
/**
* An interface for exporting data from V8, using "push" model.
*/
class V8_EXPORT OutputStream {
public:
enum WriteResult { kContinue = 0, kAbort = 1 };
virtual ~OutputStream() = default;
/** Notify about the end of stream. */
virtual void EndOfStream() = 0;
/** Get preferred output chunk size. Called only once. */
virtual int GetChunkSize() { return 1024; }
/**
* Writes the next chunk of snapshot data into the stream. Writing
* can be stopped by returning kAbort as function result. EndOfStream
* will not be called in case writing was aborted.
*/
virtual WriteResult WriteAsciiChunk(char* data, int size) = 0;
/**
* Writes the next chunk of heap stats data into the stream. Writing
* can be stopped by returning kAbort as function result. EndOfStream
* will not be called in case writing was aborted.
*/
virtual WriteResult WriteHeapStatsChunk(HeapStatsUpdate* data, int count) {
return kAbort;
}
};
/**
* CpuProfile contains a CPU profile in a form of top-down call tree
@ -176,6 +209,9 @@ class V8_EXPORT CpuProfileNode {
*/
class V8_EXPORT CpuProfile {
public:
enum SerializationFormat {
kJSON = 0 // See format description near 'Serialize' method.
};
/** Returns CPU profile title. */
Local<String> GetTitle() const;
@ -207,6 +243,16 @@ class V8_EXPORT CpuProfile {
*/
int64_t GetStartTime() const;
/**
* Returns the state of the VM when the sample was captured.
*/
StateTag GetSampleState(int index) const;
/**
* Returns the state of the embedder when the sample was captured.
*/
EmbedderStateTag GetSampleEmbedderState(int index) const;
/**
* Returns time when the profile recording was stopped (in microseconds)
* since some unspecified starting point.
@ -219,6 +265,25 @@ class V8_EXPORT CpuProfile {
* All pointers to nodes previously returned become invalid.
*/
void Delete();
/**
* Prepare a serialized representation of the profile. The result
* is written into the stream provided in chunks of specified size.
*
* For the JSON format, heap contents are represented as an object
* with the following structure:
*
* {
* nodes: [nodes array],
* startTime: number,
* endTime: number,
* samples: [strings array],
* timeDeltas: [numbers array]
* }
*
*/
void Serialize(OutputStream* stream,
SerializationFormat format = kJSON) const;
};
enum CpuProfilingMode {
@ -258,15 +323,33 @@ enum class CpuProfilingStatus {
kErrorTooManyProfilers
};
/**
* Result from StartProfiling: the profiling status and the id of the
* started profiler, or 0 if the profiler was not started.
*/
struct CpuProfilingResult {
const ProfilerId id;
const CpuProfilingStatus status;
};
/**
* Delegate notified when the maximum number of samples is reached and
* further samples are discarded.
*/
class V8_EXPORT DiscardedSamplesDelegate {
public:
DiscardedSamplesDelegate() {}
DiscardedSamplesDelegate() = default;
virtual ~DiscardedSamplesDelegate() = default;
virtual void Notify() = 0;
ProfilerId GetId() const { return profiler_id_; }
private:
friend internal::CpuProfile;
void SetId(ProfilerId id) { profiler_id_ = id; }
ProfilerId profiler_id_;
};
/**
@ -289,14 +372,17 @@ class V8_EXPORT CpuProfilingOptions {
* interval, set via SetSamplingInterval(). If
* zero, the sampling interval will be equal to
* the profiler's sampling interval.
* \param filter_context Deprecated option to filter by context, currently a
* no-op.
* \param filter_context If specified, profiles will only contain frames
* using this context. Other frames will be elided.
*/
CpuProfilingOptions(
CpuProfilingMode mode = kLeafNodeLineNumbers,
unsigned max_samples = kNoSampleLimit, int sampling_interval_us = 0,
MaybeLocal<Context> filter_context = MaybeLocal<Context>());
CpuProfilingOptions(CpuProfilingOptions&&) = default;
CpuProfilingOptions& operator=(CpuProfilingOptions&&) = default;
CpuProfilingMode mode() const { return mode_; }
unsigned max_samples() const { return max_samples_; }
int sampling_interval_us() const { return sampling_interval_us_; }
@ -304,9 +390,13 @@ class V8_EXPORT CpuProfilingOptions {
private:
friend class internal::CpuProfile;
bool has_filter_context() const { return !filter_context_.IsEmpty(); }
void* raw_filter_context() const;
CpuProfilingMode mode_;
unsigned max_samples_;
int sampling_interval_us_;
Global<Context> filter_context_;
};
/**
@ -352,6 +442,45 @@ class V8_EXPORT CpuProfiler {
*/
void SetUsePreciseSampling(bool);
/**
* Starts collecting a CPU profile. Several profiles may be collected at once.
* Generates an anonymous profiler, without a String identifier.
*/
CpuProfilingResult Start(
CpuProfilingOptions options,
std::unique_ptr<DiscardedSamplesDelegate> delegate = nullptr);
/**
* Starts collecting a CPU profile. Title may be an empty string. Several
* profiles may be collected at once. Attempts to start collecting several
* profiles with the same title are silently ignored.
*/
CpuProfilingResult Start(
Local<String> title, CpuProfilingOptions options,
std::unique_ptr<DiscardedSamplesDelegate> delegate = nullptr);
/**
* Starts profiling with the same semantics as above, except with expanded
* parameters.
*
* |record_samples| parameter controls whether individual samples should
* be recorded in addition to the aggregated tree.
*
* |max_samples| controls the maximum number of samples that should be
* recorded by the profiler. Samples obtained after this limit will be
* discarded.
*/
CpuProfilingResult Start(
Local<String> title, CpuProfilingMode mode, bool record_samples = false,
unsigned max_samples = CpuProfilingOptions::kNoSampleLimit);
/**
* The same as StartProfiling above, but the CpuProfilingMode defaults to
* kLeafNodeLineNumbers mode, which was the previous default behavior of the
* profiler.
*/
CpuProfilingResult Start(Local<String> title, bool record_samples = false);
/**
* Starts collecting a CPU profile. Title may be an empty string. Several
* profiles may be collected at once. Attempts to start collecting several
@ -375,6 +504,7 @@ class V8_EXPORT CpuProfiler {
CpuProfilingStatus StartProfiling(
Local<String> title, CpuProfilingMode mode, bool record_samples = false,
unsigned max_samples = CpuProfilingOptions::kNoSampleLimit);
/**
* The same as StartProfiling above, but the CpuProfilingMode defaults to
* kLeafNodeLineNumbers mode, which was the previous default behavior of the
@ -383,6 +513,11 @@ class V8_EXPORT CpuProfiler {
CpuProfilingStatus StartProfiling(Local<String> title,
bool record_samples = false);
/**
* Stops collecting CPU profile with a given id and returns it.
*/
CpuProfile* Stop(ProfilerId id);
/**
* Stops collecting CPU profile with a given title and returns it.
* If the title given is empty, finishes the last profile started.
@ -459,7 +594,10 @@ class V8_EXPORT HeapGraphNode {
kConsString = 10, // Concatenated string. A pair of pointers to strings.
kSlicedString = 11, // Sliced string. A fragment of another string.
kSymbol = 12, // A Symbol (ES6).
kBigInt = 13 // BigInt.
kBigInt = 13, // BigInt.
kObjectShape = 14, // Internal data used for tracking the shapes (or
// "hidden classes") of JS objects.
kWasmObject = 15, // A WasmGC struct or array.
};
/** Returns node type (see HeapGraphNode::Type). */
@ -488,38 +626,6 @@ class V8_EXPORT HeapGraphNode {
const HeapGraphEdge* GetChild(int index) const;
};
/**
* An interface for exporting data from V8, using "push" model.
*/
class V8_EXPORT OutputStream { // NOLINT
public:
enum WriteResult {
kContinue = 0,
kAbort = 1
};
virtual ~OutputStream() = default;
/** Notify about the end of stream. */
virtual void EndOfStream() = 0;
/** Get preferred output chunk size. Called only once. */
virtual int GetChunkSize() { return 1024; }
/**
* Writes the next chunk of snapshot data into the stream. Writing
* can be stopped by returning kAbort as function result. EndOfStream
* will not be called in case writing was aborted.
*/
virtual WriteResult WriteAsciiChunk(char* data, int size) = 0;
/**
* Writes the next chunk of heap stats data into the stream. Writing
* can be stopped by returning kAbort as function result. EndOfStream
* will not be called in case writing was aborted.
*/
virtual WriteResult WriteHeapStatsChunk(HeapStatsUpdate* data, int count) {
return kAbort;
}
};
/**
* HeapSnapshots record the state of the JS heap at some moment.
*/
@ -586,7 +692,7 @@ class V8_EXPORT HeapSnapshot {
* An interface for reporting progress and controlling long-running
* activities.
*/
class V8_EXPORT ActivityControl { // NOLINT
class V8_EXPORT ActivityControl {
public:
enum ControlOption {
kContinue = 0,
@ -597,10 +703,9 @@ class V8_EXPORT ActivityControl { // NOLINT
* Notify about current progress. The activity can be stopped by
* returning kAbort as the callback result.
*/
virtual ControlOption ReportProgressValue(int done, int total) = 0;
virtual ControlOption ReportProgressValue(uint32_t done, uint32_t total) = 0;
};
/**
* AllocationProfile is a sampled profile of allocations done by the program.
* This is structured as a call-graph.
@ -779,6 +884,15 @@ class V8_EXPORT EmbedderGraph {
*/
virtual Detachedness GetDetachedness() { return Detachedness::kUnknown; }
/**
* Returns the address of the object in the embedder heap, or nullptr to not
* specify the address. If this address is provided, then V8 can generate
* consistent IDs for objects across subsequent heap snapshots, which allows
* devtools to determine which objects were retained from one snapshot to
* the next. This value is used only if GetNativeObject returns nullptr.
*/
virtual const void* GetAddress() { return nullptr; }
Node(const Node&) = delete;
Node& operator=(const Node&) = delete;
};
@ -817,6 +931,8 @@ class V8_EXPORT HeapProfiler {
enum SamplingFlags {
kSamplingNoFlags = 0,
kSamplingForceGC = 1 << 0,
kSamplingIncludeObjectsCollectedByMajorGC = 1 << 1,
kSamplingIncludeObjectsCollectedByMinorGC = 1 << 2,
};
/**
@ -894,13 +1010,76 @@ class V8_EXPORT HeapProfiler {
virtual ~ObjectNameResolver() = default;
};
enum class HeapSnapshotMode {
/**
* Heap snapshot for regular developers.
*/
kRegular,
/**
* Heap snapshot is exposing internals that may be useful for experts.
*/
kExposeInternals,
};
enum class NumericsMode {
/**
* Numeric values are hidden as they are values of the corresponding
* objects.
*/
kHideNumericValues,
/**
* Numeric values are exposed in artificial fields.
*/
kExposeNumericValues
};
struct HeapSnapshotOptions final {
// Manually define default constructor here to be able to use it in
// `TakeSnapshot()` below.
// NOLINTNEXTLINE
HeapSnapshotOptions() {}
/**
* The control used to report intermediate progress to.
*/
ActivityControl* control = nullptr;
/**
* The resolver used by the snapshot generator to get names for V8 objects.
*/
ObjectNameResolver* global_object_name_resolver = nullptr;
/**
* Mode for taking the snapshot, see `HeapSnapshotMode`.
*/
HeapSnapshotMode snapshot_mode = HeapSnapshotMode::kRegular;
/**
* Mode for dealing with numeric values, see `NumericsMode`.
*/
NumericsMode numerics_mode = NumericsMode::kHideNumericValues;
/**
* Whether stack is considered as a root set.
*/
cppgc::EmbedderStackState stack_state =
cppgc::EmbedderStackState::kMayContainHeapPointers;
};
/**
* Takes a heap snapshot and returns it.
* Takes a heap snapshot.
*
* \returns the snapshot.
*/
const HeapSnapshot* TakeHeapSnapshot(
ActivityControl* control = nullptr,
const HeapSnapshotOptions& options = HeapSnapshotOptions());
/**
* Takes a heap snapshot. See `HeapSnapshotOptions` for details on the
* parameters.
*
* \returns the snapshot.
*/
const HeapSnapshot* TakeHeapSnapshot(
ActivityControl* control,
ObjectNameResolver* global_object_name_resolver = nullptr,
bool treat_global_objects_as_roots = true);
bool hide_internals = true, bool capture_numeric_value = false);
/**
* Starts tracking of heap objects population statistics. After calling
@ -953,10 +1132,8 @@ class V8_EXPORT HeapProfiler {
* |stack_depth| parameter controls the maximum number of stack frames to be
* captured on each allocation.
*
* NOTE: This is a proof-of-concept at this point. Right now we only sample
* newspace allocations. Support for paged space allocation (e.g. pre-tenured
* objects, large objects, code objects, etc.) and native allocations
* doesn't exist yet, but is anticipated in the future.
* NOTE: Support for native allocations doesn't exist yet, but is anticipated
* in the future.
*
* Objects allocated before the sampling is started will not be included in
* the profile.
@ -1019,18 +1196,18 @@ struct HeapStatsUpdate {
uint32_t size; // New value of size field for the interval with this index.
};
#define CODE_EVENTS_LIST(V) \
V(Builtin) \
V(Callback) \
V(Eval) \
V(Function) \
V(InterpretedFunction) \
V(Handler) \
V(BytecodeHandler) \
V(LazyCompile) \
V(RegExp) \
V(Script) \
V(Stub) \
#define CODE_EVENTS_LIST(V) \
V(Builtin) \
V(Callback) \
V(Eval) \
V(Function) \
V(InterpretedFunction) \
V(Handler) \
V(BytecodeHandler) \
V(LazyCompile) /* Unused, use kFunction instead */ \
V(RegExp) \
V(Script) \
V(Stub) \
V(Relocation)
/**

View File

@ -0,0 +1,174 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_PROMISE_H_
#define INCLUDE_V8_PROMISE_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
#ifndef V8_PROMISE_INTERNAL_FIELD_COUNT
// The number of required internal fields can be defined by the embedder.
#define V8_PROMISE_INTERNAL_FIELD_COUNT 0
#endif
/**
* An instance of the built-in Promise constructor (ES6 draft).
*/
class V8_EXPORT Promise : public Object {
public:
/**
* State of the promise. Each value corresponds to one of the possible values
* of the [[PromiseState]] field.
*/
enum PromiseState { kPending, kFulfilled, kRejected };
class V8_EXPORT Resolver : public Object {
public:
/**
* Create a new resolver, along with an associated promise in pending state.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Resolver> New(
Local<Context> context);
/**
* Extract the associated promise.
*/
Local<Promise> GetPromise();
/**
* Resolve/reject the associated promise with a given value.
* Ignored if the promise is no longer pending.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> Resolve(Local<Context> context,
Local<Value> value);
V8_WARN_UNUSED_RESULT Maybe<bool> Reject(Local<Context> context,
Local<Value> value);
V8_INLINE static Resolver* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Promise::Resolver*>(value);
}
private:
Resolver();
static void CheckCast(Value* obj);
};
/**
* Register a resolution/rejection handler with a promise.
* The handler is given the respective resolution/rejection value as
* an argument. If the promise is already resolved/rejected, the handler is
* invoked at the end of turn.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Promise> Catch(Local<Context> context,
Local<Function> handler);
V8_WARN_UNUSED_RESULT MaybeLocal<Promise> Then(Local<Context> context,
Local<Function> handler);
V8_WARN_UNUSED_RESULT MaybeLocal<Promise> Then(Local<Context> context,
Local<Function> on_fulfilled,
Local<Function> on_rejected);
/**
* Returns true if the promise has at least one derived promise, and
* therefore has resolve/reject handlers (including the default handler).
*/
bool HasHandler() const;
/**
* Returns the content of the [[PromiseResult]] field. The Promise must not
* be pending.
*/
Local<Value> Result();
/**
* Returns the value of the [[PromiseState]] field.
*/
PromiseState State();
/**
* Marks this promise as handled to avoid reporting unhandled rejections.
*/
void MarkAsHandled();
/**
* Marks this promise as silent to prevent pausing the debugger when the
* promise is rejected.
*/
void MarkAsSilent();
V8_INLINE static Promise* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Promise*>(value);
}
static const int kEmbedderFieldCount = V8_PROMISE_INTERNAL_FIELD_COUNT;
private:
Promise();
static void CheckCast(Value* obj);
};
/**
* PromiseHook with type kInit is called when a new promise is
* created. When a new promise is created as part of the chain in the
* case of Promise.then or in the intermediate promises created by
* Promise.{race, all}/AsyncFunctionAwait, we pass the parent promise;
* otherwise we pass undefined.
*
* PromiseHook with type kResolve is called at the beginning of
* resolve or reject function defined by CreateResolvingFunctions.
*
* PromiseHook with type kBefore is called at the beginning of the
* PromiseReactionJob.
*
* PromiseHook with type kAfter is called right at the end of the
* PromiseReactionJob.
*/
enum class PromiseHookType { kInit, kResolve, kBefore, kAfter };
using PromiseHook = void (*)(PromiseHookType type, Local<Promise> promise,
Local<Value> parent);
// --- Promise Reject Callback ---
enum PromiseRejectEvent {
kPromiseRejectWithNoHandler = 0,
kPromiseHandlerAddedAfterReject = 1,
kPromiseRejectAfterResolved = 2,
kPromiseResolveAfterResolved = 3,
};
class PromiseRejectMessage {
public:
PromiseRejectMessage(Local<Promise> promise, PromiseRejectEvent event,
Local<Value> value)
: promise_(promise), event_(event), value_(value) {}
V8_INLINE Local<Promise> GetPromise() const { return promise_; }
V8_INLINE PromiseRejectEvent GetEvent() const { return event_; }
V8_INLINE Local<Value> GetValue() const { return value_; }
private:
Local<Promise> promise_;
PromiseRejectEvent event_;
Local<Value> value_;
};
using PromiseRejectCallback = void (*)(PromiseRejectMessage message);
} // namespace v8
#endif // INCLUDE_V8_PROMISE_H_

View File

@ -0,0 +1,50 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_PROXY_H_
#define INCLUDE_V8_PROXY_H_
#include "v8-context.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
/**
* An instance of the built-in Proxy constructor (ECMA-262, 6th Edition,
* 26.2.1).
*/
class V8_EXPORT Proxy : public Object {
public:
Local<Value> GetTarget();
Local<Value> GetHandler();
bool IsRevoked() const;
void Revoke();
/**
* Creates a new Proxy for the target object.
*/
static MaybeLocal<Proxy> New(Local<Context> context,
Local<Object> local_target,
Local<Object> local_handler);
V8_INLINE static Proxy* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Proxy*>(value);
}
private:
Proxy();
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_PROXY_H_

View File

@ -0,0 +1,106 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_REGEXP_H_
#define INCLUDE_V8_REGEXP_H_
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-object.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
/**
* An instance of the built-in RegExp constructor (ECMA-262, 15.10).
*/
class V8_EXPORT RegExp : public Object {
public:
/**
* Regular expression flag bits. They can be or'ed to enable a set
* of flags.
* The kLinear value ('l') is experimental and can only be used with
* --enable-experimental-regexp-engine. RegExps with kLinear flag are
* guaranteed to be executed in asymptotic linear time with respect to the
* length of the subject string.
*/
enum Flags {
kNone = 0,
kGlobal = 1 << 0,
kIgnoreCase = 1 << 1,
kMultiline = 1 << 2,
kSticky = 1 << 3,
kUnicode = 1 << 4,
kDotAll = 1 << 5,
kLinear = 1 << 6,
kHasIndices = 1 << 7,
kUnicodeSets = 1 << 8,
};
static constexpr int kFlagCount = 9;
/**
* Creates a regular expression from the given pattern string and
* the flags bit field. May throw a JavaScript exception as
* described in ECMA-262, 15.10.4.1.
*
* For example,
* RegExp::New(v8::String::New("foo"),
* static_cast<RegExp::Flags>(kGlobal | kMultiline))
* is equivalent to evaluating "/foo/gm".
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<RegExp> New(Local<Context> context,
Local<String> pattern,
Flags flags);
/**
* Like New, but additionally specifies a backtrack limit. If the number of
* backtracks done in one Exec call hits the limit, a match failure is
* immediately returned.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<RegExp> NewWithBacktrackLimit(
Local<Context> context, Local<String> pattern, Flags flags,
uint32_t backtrack_limit);
/**
* Executes the current RegExp instance on the given subject string.
* Equivalent to RegExp.prototype.exec as described in
*
* https://tc39.es/ecma262/#sec-regexp.prototype.exec
*
* On success, an Array containing the matched strings is returned. On
* failure, returns Null.
*
* Note: modifies global context state, accessible e.g. through RegExp.input.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Object> Exec(Local<Context> context,
Local<String> subject);
/**
* Returns the value of the source property: a string representing
* the regular expression.
*/
Local<String> GetSource() const;
/**
* Returns the flags bit field.
*/
Flags GetFlags() const;
V8_INLINE static RegExp* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<RegExp*>(value);
}
private:
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_REGEXP_H_
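As a quick illustration of the flag arithmetic documented above, the enum values can be or'ed exactly as the `/foo/gm` example in `RegExp::New` suggests. The constants below mirror the bit values declared in this header; this is a standalone sketch for illustration, not the real `v8::RegExp` type:

```cpp
#include <cassert>

// Mirror of RegExp::Flags from the header above (illustration only).
enum Flags {
  kNone = 0,
  kGlobal = 1 << 0,
  kIgnoreCase = 1 << 1,
  kMultiline = 1 << 2,
  kSticky = 1 << 3,
  kUnicode = 1 << 4,
  kDotAll = 1 << 5,
  kLinear = 1 << 6,
  kHasIndices = 1 << 7,
  kUnicodeSets = 1 << 8,
};
constexpr int kFlagCount = 9;

// "/foo/gm" corresponds to kGlobal | kMultiline; in the real API this value
// is passed as static_cast<RegExp::Flags>(kGlobal | kMultiline).
constexpr int kGlobalMultiline = kGlobal | kMultiline;
```

Note that `kFlagCount` names the number of defined bits, so the highest flag is `1 << (kFlagCount - 1)`.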

View File

@ -0,0 +1,834 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_SCRIPT_H_
#define INCLUDE_V8_SCRIPT_H_
#include <stddef.h>
#include <stdint.h>
#include <memory>
#include <tuple>
#include <vector>
#include "v8-callbacks.h" // NOLINT(build/include_directory)
#include "v8-data.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-maybe.h" // NOLINT(build/include_directory)
#include "v8-message.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Function;
class Message;
class Object;
class PrimitiveArray;
class Script;
namespace internal {
class BackgroundDeserializeTask;
struct ScriptStreamingData;
} // namespace internal
/**
* A container type that holds relevant metadata for module loading.
*
* This is passed back to the embedder as part of
* HostImportModuleDynamicallyCallback for module loading.
*/
class V8_EXPORT ScriptOrModule {
public:
/**
* The name that was passed by the embedder as ResourceName to the
* ScriptOrigin. This can be either a v8::String or v8::Undefined.
*/
Local<Value> GetResourceName();
/**
* The options that were passed by the embedder as HostDefinedOptions to
* the ScriptOrigin.
*/
Local<Data> HostDefinedOptions();
};
/**
* A compiled JavaScript script, not yet tied to a Context.
*/
class V8_EXPORT UnboundScript {
public:
/**
* Binds the script to the currently entered context.
*/
Local<Script> BindToCurrentContext();
int GetId() const;
Local<Value> GetScriptName();
/**
* Data read from magic sourceURL comments.
*/
Local<Value> GetSourceURL();
/**
* Data read from magic sourceMappingURL comments.
*/
Local<Value> GetSourceMappingURL();
/**
 * Returns the zero-based line number of the code_pos location in the script.
 * -1 will be returned if no information is available.
*/
int GetLineNumber(int code_pos = 0);
/**
 * Returns the zero-based column number of the code_pos location in the script.
 * -1 will be returned if no information is available.
*/
int GetColumnNumber(int code_pos = 0);
static const int kNoScriptId = 0;
};
/**
* A compiled JavaScript module, not yet tied to a Context.
*/
class V8_EXPORT UnboundModuleScript : public Data {
public:
/**
* Data read from magic sourceURL comments.
*/
Local<Value> GetSourceURL();
/**
* Data read from magic sourceMappingURL comments.
*/
Local<Value> GetSourceMappingURL();
};
/**
* A location in JavaScript source.
*/
class V8_EXPORT Location {
public:
int GetLineNumber() { return line_number_; }
int GetColumnNumber() { return column_number_; }
Location(int line_number, int column_number)
: line_number_(line_number), column_number_(column_number) {}
private:
int line_number_;
int column_number_;
};
class V8_EXPORT ModuleRequest : public Data {
public:
/**
* Returns the module specifier for this ModuleRequest.
*/
Local<String> GetSpecifier() const;
/**
* Returns the source code offset of this module request.
* Use Module::SourceOffsetToLocation to convert this to line/column numbers.
*/
int GetSourceOffset() const;
/**
* Contains the import assertions for this request in the form:
* [key1, value1, source_offset1, key2, value2, source_offset2, ...].
* The keys and values are of type v8::String, and the source offsets are of
* type Int32. Use Module::SourceOffsetToLocation to convert the source
* offsets to Locations with line/column numbers.
*
* All assertions present in the module request will be supplied in this
* list, regardless of whether they are supported by the host. Per
* https://tc39.es/proposal-import-assertions/#sec-hostgetsupportedimportassertions,
* hosts are expected to ignore assertions that they do not support (as
* opposed to, for example, triggering an error if an unsupported assertion is
* present).
*/
Local<FixedArray> GetImportAssertions() const;
V8_INLINE static ModuleRequest* Cast(Data* data);
private:
static void CheckCast(Data* obj);
};
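The flattened `[key1, value1, source_offset1, ...]` layout documented for `GetImportAssertions()` can be unpacked three entries at a time. The sketch below uses `std::string` and `int` as stand-ins for `v8::String` and `Int32`; `UnpackAssertions` is a hypothetical helper, not part of the V8 API:

```cpp
#include <cassert>
#include <string>
#include <variant>
#include <vector>

// Stand-in for a FixedArray entry, which is either a v8::String or an Int32.
using Entry = std::variant<std::string, int>;

struct Assertion {
  std::string key;
  std::string value;
  int source_offset;  // convert with Module::SourceOffsetToLocation in real code
};

// Unpacks the flattened [key1, value1, offset1, key2, value2, offset2, ...]
// list described for ModuleRequest::GetImportAssertions() into triples.
std::vector<Assertion> UnpackAssertions(const std::vector<Entry>& flat) {
  std::vector<Assertion> out;
  for (size_t i = 0; i + 2 < flat.size(); i += 3) {
    out.push_back({std::get<std::string>(flat[i]),
                   std::get<std::string>(flat[i + 1]),
                   std::get<int>(flat[i + 2])});
  }
  return out;
}
```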
/**
* A compiled JavaScript module.
*/
class V8_EXPORT Module : public Data {
public:
/**
* The different states a module can be in.
*
* This corresponds to the states used in ECMAScript except that "evaluated"
* is split into kEvaluated and kErrored, indicating success and failure,
* respectively.
*/
enum Status {
kUninstantiated,
kInstantiating,
kInstantiated,
kEvaluating,
kEvaluated,
kErrored
};
/**
* Returns the module's current status.
*/
Status GetStatus() const;
/**
* For a module in kErrored status, this returns the corresponding exception.
*/
Local<Value> GetException() const;
/**
* Returns the ModuleRequests for this module.
*/
Local<FixedArray> GetModuleRequests() const;
/**
* For the given source text offset in this module, returns the corresponding
* Location with line and column numbers.
*/
Location SourceOffsetToLocation(int offset) const;
/**
* Returns the identity hash for this object.
*/
int GetIdentityHash() const;
using ResolveModuleCallback = MaybeLocal<Module> (*)(
Local<Context> context, Local<String> specifier,
Local<FixedArray> import_assertions, Local<Module> referrer);
/**
* Instantiates the module and its dependencies.
*
* Returns an empty Maybe<bool> if an exception occurred during
* instantiation. (In the case where the callback throws an exception, that
* exception is propagated.)
*/
V8_WARN_UNUSED_RESULT Maybe<bool> InstantiateModule(
Local<Context> context, ResolveModuleCallback callback);
/**
* Evaluates the module and its dependencies.
*
* If status is kInstantiated, run the module's code and return a Promise
* object. On success, set status to kEvaluated and resolve the Promise with
* the completion value; on failure, set status to kErrored and reject the
* Promise with the error.
*
* If IsGraphAsync() is false, the returned Promise is settled.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Evaluate(Local<Context> context);
/**
* Returns the namespace object of this module.
*
* The module's status must be at least kInstantiated.
*/
Local<Value> GetModuleNamespace();
/**
* Returns the corresponding context-unbound module script.
*
* The module must be unevaluated, i.e. its status must not be kEvaluating,
* kEvaluated or kErrored.
*/
Local<UnboundModuleScript> GetUnboundModuleScript();
/**
* Returns the underlying script's id.
*
* The module must be a SourceTextModule and must not have a kErrored status.
*/
int ScriptId() const;
/**
* Returns whether this module or any of its requested modules is async,
* i.e. contains top-level await.
*
* The module's status must be at least kInstantiated.
*/
bool IsGraphAsync() const;
/**
* Returns whether the module is a SourceTextModule.
*/
bool IsSourceTextModule() const;
/**
* Returns whether the module is a SyntheticModule.
*/
bool IsSyntheticModule() const;
/*
* Callback defined in the embedder. This is responsible for setting
* the module's exported values with calls to SetSyntheticModuleExport().
* The callback must return a resolved Promise to indicate success (where no
 * exception was thrown) and return an empty MaybeLocal to indicate failure
* (where an exception was thrown).
*/
using SyntheticModuleEvaluationSteps =
MaybeLocal<Value> (*)(Local<Context> context, Local<Module> module);
/**
* Creates a new SyntheticModule with the specified export names, where
* evaluation_steps will be executed upon module evaluation.
* export_names must not contain duplicates.
* module_name is used solely for logging/debugging and doesn't affect module
* behavior.
*/
static Local<Module> CreateSyntheticModule(
Isolate* isolate, Local<String> module_name,
const std::vector<Local<String>>& export_names,
SyntheticModuleEvaluationSteps evaluation_steps);
/**
* Set this module's exported value for the name export_name to the specified
* export_value. This method must be called only on Modules created via
* CreateSyntheticModule. An error will be thrown if export_name is not one
* of the export_names that were passed in that CreateSyntheticModule call.
* Returns Just(true) on success, Nothing<bool>() if an error was thrown.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> SetSyntheticModuleExport(
Isolate* isolate, Local<String> export_name, Local<Value> export_value);
/**
* Search the modules requested directly or indirectly by the module for
* any top-level await that has not yet resolved. If there is any, the
* returned vector contains a tuple of the unresolved module and a message
* with the pending top-level await.
* An embedder may call this before exiting to improve error messages.
*/
std::vector<std::tuple<Local<Module>, Local<Message>>>
GetStalledTopLevelAwaitMessage(Isolate* isolate);
V8_INLINE static Module* Cast(Data* data);
private:
static void CheckCast(Data* obj);
};
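`Module::SourceOffsetToLocation` maps a source-text offset to a `Location` with line and column numbers. Its contract can be sketched over a plain string by counting newlines; this hypothetical helper only illustrates the mapping (zero-based, as elsewhere in this header) and is not V8's implementation:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Hypothetical stand-in for Module::SourceOffsetToLocation: maps a source
// text offset to zero-based (line, column) by scanning for newlines.
std::pair<int, int> OffsetToLocation(const std::string& src, int offset) {
  int line = 0, column = 0;
  for (int i = 0; i < offset && i < static_cast<int>(src.size()); ++i) {
    if (src[i] == '\n') {
      ++line;
      column = 0;
    } else {
      ++column;
    }
  }
  return {line, column};
}
```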
/**
* A compiled JavaScript script, tied to a Context which was active when the
* script was compiled.
*/
class V8_EXPORT Script {
public:
/**
* A shorthand for ScriptCompiler::Compile().
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Script> Compile(
Local<Context> context, Local<String> source,
ScriptOrigin* origin = nullptr);
/**
* Runs the script returning the resulting value. It will be run in the
* context in which it was created (ScriptCompiler::CompileBound or
* UnboundScript::BindToCurrentContext()).
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Run(Local<Context> context);
V8_WARN_UNUSED_RESULT MaybeLocal<Value> Run(Local<Context> context,
Local<Data> host_defined_options);
/**
* Returns the corresponding context-unbound script.
*/
Local<UnboundScript> GetUnboundScript();
/**
* The name that was passed by the embedder as ResourceName to the
* ScriptOrigin. This can be either a v8::String or v8::Undefined.
*/
Local<Value> GetResourceName();
/**
* If the script was compiled, returns the positions of lazy functions which
* were eventually compiled and executed.
*/
std::vector<int> GetProducedCompileHints() const;
};
enum class ScriptType { kClassic, kModule };
/**
* For compiling scripts.
*/
class V8_EXPORT ScriptCompiler {
public:
class ConsumeCodeCacheTask;
/**
* Compilation data that the embedder can cache and pass back to speed up
* future compilations. The data is produced if the CompilerOptions passed to
* the compilation functions in ScriptCompiler contains produce_data_to_cache
 * = true. The data to cache can then be retrieved from
* UnboundScript.
*/
struct V8_EXPORT CachedData {
enum BufferPolicy { BufferNotOwned, BufferOwned };
CachedData()
: data(nullptr),
length(0),
rejected(false),
buffer_policy(BufferNotOwned) {}
// If buffer_policy is BufferNotOwned, the caller keeps the ownership of
// data and guarantees that it stays alive until the CachedData object is
// destroyed. If the policy is BufferOwned, the given data will be deleted
// (with delete[]) when the CachedData object is destroyed.
CachedData(const uint8_t* data, int length,
BufferPolicy buffer_policy = BufferNotOwned);
~CachedData();
// TODO(marja): Async compilation; add constructors which take a callback
// which will be called when V8 no longer needs the data.
const uint8_t* data;
int length;
bool rejected;
BufferPolicy buffer_policy;
// Prevent copying.
CachedData(const CachedData&) = delete;
CachedData& operator=(const CachedData&) = delete;
};
/**
 * Source code which can then be compiled to an UnboundScript or Script.
*/
class Source {
public:
// Source takes ownership of both CachedData and CodeCacheConsumeTask.
// The caller *must* ensure that the cached data is from a trusted source.
V8_INLINE Source(Local<String> source_string, const ScriptOrigin& origin,
CachedData* cached_data = nullptr,
ConsumeCodeCacheTask* consume_cache_task = nullptr);
// Source takes ownership of both CachedData and CodeCacheConsumeTask.
V8_INLINE explicit Source(
Local<String> source_string, CachedData* cached_data = nullptr,
ConsumeCodeCacheTask* consume_cache_task = nullptr);
V8_INLINE Source(Local<String> source_string, const ScriptOrigin& origin,
CompileHintCallback callback, void* callback_data);
V8_INLINE ~Source() = default;
// Ownership of the CachedData or its buffers is *not* transferred to the
// caller. The CachedData object is alive as long as the Source object is
// alive.
V8_INLINE const CachedData* GetCachedData() const;
V8_INLINE const ScriptOriginOptions& GetResourceOptions() const;
private:
friend class ScriptCompiler;
Local<String> source_string;
// Origin information
Local<Value> resource_name;
int resource_line_offset;
int resource_column_offset;
ScriptOriginOptions resource_options;
Local<Value> source_map_url;
Local<Data> host_defined_options;
 // Cached data from a previous compilation (if a kConsume*Cache flag is
 // set), or newly generated cache data (filled in when a kProduce*Cache
 // flag is set and a compile method is called).
std::unique_ptr<CachedData> cached_data;
std::unique_ptr<ConsumeCodeCacheTask> consume_cache_task;
// For requesting compile hints from the embedder.
CompileHintCallback compile_hint_callback = nullptr;
void* compile_hint_callback_data = nullptr;
};
/**
* For streaming incomplete script data to V8. The embedder should implement a
* subclass of this class.
*/
class V8_EXPORT ExternalSourceStream {
public:
virtual ~ExternalSourceStream() = default;
/**
* V8 calls this to request the next chunk of data from the embedder. This
* function will be called on a background thread, so it's OK to block and
* wait for the data, if the embedder doesn't have data yet. Returns the
* length of the data returned. When the data ends, GetMoreData should
* return 0. Caller takes ownership of the data.
*
* When streaming UTF-8 data, V8 handles multi-byte characters split between
* two data chunks, but doesn't handle multi-byte characters split between
* more than two data chunks. The embedder can avoid this problem by always
* returning at least 2 bytes of data.
*
* When streaming UTF-16 data, V8 does not handle characters split between
* two data chunks. The embedder has to make sure that chunks have an even
* length.
*
* If the embedder wants to cancel the streaming, they should make the next
* GetMoreData call return 0. V8 will interpret it as end of data (and most
* probably, parsing will fail). The streaming task will return as soon as
* V8 has parsed the data it received so far.
*/
virtual size_t GetMoreData(const uint8_t** src) = 0;
};
/**
* Source code which can be streamed into V8 in pieces. It will be parsed
* while streaming and compiled after parsing has completed. StreamedSource
* must be kept alive while the streaming task is run (see ScriptStreamingTask
* below).
*/
class V8_EXPORT StreamedSource {
public:
enum Encoding { ONE_BYTE, TWO_BYTE, UTF8, WINDOWS_1252 };
StreamedSource(std::unique_ptr<ExternalSourceStream> source_stream,
Encoding encoding);
~StreamedSource();
internal::ScriptStreamingData* impl() const { return impl_.get(); }
// Prevent copying.
StreamedSource(const StreamedSource&) = delete;
StreamedSource& operator=(const StreamedSource&) = delete;
private:
std::unique_ptr<internal::ScriptStreamingData> impl_;
};
/**
* A streaming task which the embedder must run on a background thread to
* stream scripts into V8. Returned by ScriptCompiler::StartStreaming.
*/
class V8_EXPORT ScriptStreamingTask final {
public:
void Run();
private:
friend class ScriptCompiler;
explicit ScriptStreamingTask(internal::ScriptStreamingData* data)
: data_(data) {}
internal::ScriptStreamingData* data_;
};
/**
* A task which the embedder must run on a background thread to
* consume a V8 code cache. Returned by
* ScriptCompiler::StartConsumingCodeCache.
*/
class V8_EXPORT ConsumeCodeCacheTask final {
public:
~ConsumeCodeCacheTask();
void Run();
/**
* Provides the source text string and origin information to the consumption
* task. May be called before, during, or after Run(). This step checks
* whether the script matches an existing script in the Isolate's
* compilation cache. To check whether such a script was found, call
* ShouldMergeWithExistingScript.
*
* The Isolate provided must be the same one used during
* StartConsumingCodeCache and must be currently entered on the thread that
* calls this function. The source text and origin provided in this step
* must precisely match those used later in the ScriptCompiler::Source that
* will contain this ConsumeCodeCacheTask.
*/
void SourceTextAvailable(Isolate* isolate, Local<String> source_text,
const ScriptOrigin& origin);
/**
* Returns whether the embedder should call MergeWithExistingScript. This
* function may be called from any thread, any number of times, but its
* return value is only meaningful after SourceTextAvailable has completed.
*/
bool ShouldMergeWithExistingScript() const;
/**
* Merges newly deserialized data into an existing script which was found
* during SourceTextAvailable. May be called only after Run() has completed.
* Can execute on any thread, like Run().
*/
void MergeWithExistingScript();
private:
friend class ScriptCompiler;
explicit ConsumeCodeCacheTask(
std::unique_ptr<internal::BackgroundDeserializeTask> impl);
std::unique_ptr<internal::BackgroundDeserializeTask> impl_;
};
enum CompileOptions {
kNoCompileOptions = 0,
kConsumeCodeCache,
kEagerCompile,
kProduceCompileHints,
kConsumeCompileHints
};
/**
* The reason for which we are not requesting or providing a code cache.
*/
enum NoCacheReason {
kNoCacheNoReason = 0,
kNoCacheBecauseCachingDisabled,
kNoCacheBecauseNoResource,
kNoCacheBecauseInlineScript,
kNoCacheBecauseModule,
kNoCacheBecauseStreamingSource,
kNoCacheBecauseInspector,
kNoCacheBecauseScriptTooSmall,
kNoCacheBecauseCacheTooCold,
kNoCacheBecauseV8Extension,
kNoCacheBecauseExtensionModule,
kNoCacheBecausePacScript,
kNoCacheBecauseInDocumentWrite,
kNoCacheBecauseResourceWithNoCacheHandler,
kNoCacheBecauseDeferredProduceCodeCache
};
/**
* Compiles the specified script (context-independent).
* Cached data as part of the source object can be optionally produced to be
* consumed later to speed up compilation of identical source scripts.
*
* Note that when producing cached data, the source must point to NULL for
* cached data. When consuming cached data, the cached data must have been
* produced by the same version of V8, and the embedder needs to ensure the
* cached data is the correct one for the given script.
*
* \param source Script source code.
* \return Compiled script object (context independent; for running it must be
* bound to a context).
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<UnboundScript> CompileUnboundScript(
Isolate* isolate, Source* source,
CompileOptions options = kNoCompileOptions,
NoCacheReason no_cache_reason = kNoCacheNoReason);
/**
* Compiles the specified script (bound to current context).
*
* \param source Script source code.
* \param pre_data Pre-parsing data, as obtained by ScriptData::PreCompile()
* using pre_data speeds compilation if it's done multiple times.
* Owned by caller, no references are kept when this function returns.
* \return Compiled script object, bound to the context that was active
* when this function was called. When run it will always use this
* context.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Script> Compile(
Local<Context> context, Source* source,
CompileOptions options = kNoCompileOptions,
NoCacheReason no_cache_reason = kNoCacheNoReason);
/**
* Returns a task which streams script data into V8, or NULL if the script
* cannot be streamed. The user is responsible for running the task on a
 * background thread and deleting it. When run, the task starts parsing the
* script, and it will request data from the StreamedSource as needed. When
* ScriptStreamingTask::Run exits, all data has been streamed and the script
* can be compiled (see Compile below).
*
 * This API allows starting the streaming with as little data as possible, and
* the remaining data (for example, the ScriptOrigin) is passed to Compile.
*/
static ScriptStreamingTask* StartStreaming(
Isolate* isolate, StreamedSource* source,
ScriptType type = ScriptType::kClassic,
CompileOptions options = kNoCompileOptions,
CompileHintCallback compile_hint_callback = nullptr,
void* compile_hint_callback_data = nullptr);
static ConsumeCodeCacheTask* StartConsumingCodeCache(
Isolate* isolate, std::unique_ptr<CachedData> source);
/**
* Compiles a streamed script (bound to current context).
*
* This can only be called after the streaming has finished
* (ScriptStreamingTask has been run). V8 doesn't construct the source string
* during streaming, so the embedder needs to pass the full source here.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Script> Compile(
Local<Context> context, StreamedSource* source,
Local<String> full_source_string, const ScriptOrigin& origin);
/**
* Return a version tag for CachedData for the current V8 version & flags.
*
* This value is meant only for determining whether a previously generated
 * CachedData instance is still valid; the tag has no other meaning.
*
* Background: The data carried by CachedData may depend on the exact
* V8 version number or current compiler flags. This means that when
* persisting CachedData, the embedder must take care to not pass in
* data from another V8 version, or the same version with different
* features enabled.
*
* The easiest way to do so is to clear the embedder's cache on any
* such change.
*
* Alternatively, this tag can be stored alongside the cached data and
* compared when it is being used.
*/
static uint32_t CachedDataVersionTag();
/**
* Compile an ES module, returning a Module that encapsulates
* the compiled code.
*
* Corresponds to the ParseModule abstract operation in the
* ECMAScript specification.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Module> CompileModule(
Isolate* isolate, Source* source,
CompileOptions options = kNoCompileOptions,
NoCacheReason no_cache_reason = kNoCacheNoReason);
/**
* Compiles a streamed module script.
*
* This can only be called after the streaming has finished
* (ScriptStreamingTask has been run). V8 doesn't construct the source string
* during streaming, so the embedder needs to pass the full source here.
*/
static V8_WARN_UNUSED_RESULT MaybeLocal<Module> CompileModule(
Local<Context> context, StreamedSource* v8_source,
Local<String> full_source_string, const ScriptOrigin& origin);
/**
* Compile a function for a given context. This is equivalent to running
*
* with (obj) {
* return function(args) { ... }
* }
*
* It is possible to specify multiple context extensions (obj in the above
* example).
*/
V8_DEPRECATED("Use CompileFunction")
static V8_WARN_UNUSED_RESULT MaybeLocal<Function> CompileFunctionInContext(
Local<Context> context, Source* source, size_t arguments_count,
Local<String> arguments[], size_t context_extension_count,
Local<Object> context_extensions[],
CompileOptions options = kNoCompileOptions,
NoCacheReason no_cache_reason = kNoCacheNoReason,
Local<ScriptOrModule>* script_or_module_out = nullptr);
static V8_WARN_UNUSED_RESULT MaybeLocal<Function> CompileFunction(
Local<Context> context, Source* source, size_t arguments_count = 0,
Local<String> arguments[] = nullptr, size_t context_extension_count = 0,
Local<Object> context_extensions[] = nullptr,
CompileOptions options = kNoCompileOptions,
NoCacheReason no_cache_reason = kNoCacheNoReason);
/**
* Creates and returns code cache for the specified unbound_script.
* This will return nullptr if the script cannot be serialized. The
* CachedData returned by this function should be owned by the caller.
*/
static CachedData* CreateCodeCache(Local<UnboundScript> unbound_script);
/**
* Creates and returns code cache for the specified unbound_module_script.
* This will return nullptr if the script cannot be serialized. The
* CachedData returned by this function should be owned by the caller.
*/
static CachedData* CreateCodeCache(
Local<UnboundModuleScript> unbound_module_script);
/**
* Creates and returns code cache for the specified function that was
* previously produced by CompileFunction.
* This will return nullptr if the script cannot be serialized. The
* CachedData returned by this function should be owned by the caller.
*/
static CachedData* CreateCodeCacheForFunction(Local<Function> function);
private:
static V8_WARN_UNUSED_RESULT MaybeLocal<UnboundScript> CompileUnboundInternal(
Isolate* isolate, Source* source, CompileOptions options,
NoCacheReason no_cache_reason);
static V8_WARN_UNUSED_RESULT MaybeLocal<Function> CompileFunctionInternal(
Local<Context> context, Source* source, size_t arguments_count,
Local<String> arguments[], size_t context_extension_count,
Local<Object> context_extensions[], CompileOptions options,
NoCacheReason no_cache_reason,
Local<ScriptOrModule>* script_or_module_out);
};
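The `CachedDataVersionTag()` comment above suggests persisting the tag alongside the cached data and comparing it before reuse. A minimal sketch of that guard; `PersistedCache` and `CurrentVersionTag` are hypothetical stand-ins (in a real embedder the tag would come from `ScriptCompiler::CachedDataVersionTag()`):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical record an embedder might persist: the code-cache bytes plus
// the version tag that was current when they were produced.
struct PersistedCache {
  uint32_t version_tag;
  std::vector<uint8_t> blob;
};

// Stand-in for ScriptCompiler::CachedDataVersionTag(); in a real embedder
// this value changes with the V8 version and compiler flags.
uint32_t CurrentVersionTag() { return 0xABCD1234u; }

// Returns the blob only if it was produced under the current tag, mirroring
// the validity check recommended in the comment above.
const std::vector<uint8_t>* UsableBlob(const PersistedCache& c) {
  return c.version_tag == CurrentVersionTag() ? &c.blob : nullptr;
}
```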
ScriptCompiler::Source::Source(Local<String> string, const ScriptOrigin& origin,
CachedData* data,
ConsumeCodeCacheTask* consume_cache_task)
: source_string(string),
resource_name(origin.ResourceName()),
resource_line_offset(origin.LineOffset()),
resource_column_offset(origin.ColumnOffset()),
resource_options(origin.Options()),
source_map_url(origin.SourceMapUrl()),
host_defined_options(origin.GetHostDefinedOptions()),
cached_data(data),
consume_cache_task(consume_cache_task) {}
ScriptCompiler::Source::Source(Local<String> string, CachedData* data,
ConsumeCodeCacheTask* consume_cache_task)
: source_string(string),
cached_data(data),
consume_cache_task(consume_cache_task) {}
ScriptCompiler::Source::Source(Local<String> string, const ScriptOrigin& origin,
CompileHintCallback callback,
void* callback_data)
: source_string(string),
resource_name(origin.ResourceName()),
resource_line_offset(origin.LineOffset()),
resource_column_offset(origin.ColumnOffset()),
resource_options(origin.Options()),
source_map_url(origin.SourceMapUrl()),
host_defined_options(origin.GetHostDefinedOptions()),
compile_hint_callback(callback),
compile_hint_callback_data(callback_data) {}
const ScriptCompiler::CachedData* ScriptCompiler::Source::GetCachedData()
const {
return cached_data.get();
}
const ScriptOriginOptions& ScriptCompiler::Source::GetResourceOptions() const {
return resource_options;
}
ModuleRequest* ModuleRequest::Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return reinterpret_cast<ModuleRequest*>(data);
}
Module* Module::Cast(Data* data) {
#ifdef V8_ENABLE_CHECKS
CheckCast(data);
#endif
return reinterpret_cast<Module*>(data);
}
} // namespace v8
#endif // INCLUDE_V8_SCRIPT_H_


@ -0,0 +1,195 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_SNAPSHOT_H_
#define INCLUDE_V8_SNAPSHOT_H_
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Object;
class V8_EXPORT StartupData {
public:
/**
 * Whether the data created can be rehashed and the hash seed can be
* recomputed when deserialized.
* Only valid for StartupData returned by SnapshotCreator::CreateBlob().
*/
bool CanBeRehashed() const;
/**
* Allows embedders to verify whether the data is valid for the current
* V8 instance.
*/
bool IsValid() const;
const char* data;
int raw_size;
};
/**
* Callback and supporting data used in SnapshotCreator to implement embedder
* logic to serialize internal fields.
* Internal fields that directly reference V8 objects are serialized without
* calling this callback. Internal fields that contain aligned pointers are
 * serialized by this callback if it returns a non-zero result. Otherwise it is
* serialized verbatim.
*/
struct SerializeInternalFieldsCallback {
using CallbackFunction = StartupData (*)(Local<Object> holder, int index,
void* data);
SerializeInternalFieldsCallback(CallbackFunction function = nullptr,
void* data_arg = nullptr)
: callback(function), data(data_arg) {}
CallbackFunction callback;
void* data;
};
// Note that these fields are called "internal fields" in the API and called
// "embedder fields" within V8.
using SerializeEmbedderFieldsCallback = SerializeInternalFieldsCallback;
/**
* Callback and supporting data used to implement embedder logic to deserialize
* internal fields.
*/
struct DeserializeInternalFieldsCallback {
using CallbackFunction = void (*)(Local<Object> holder, int index,
StartupData payload, void* data);
DeserializeInternalFieldsCallback(CallbackFunction function = nullptr,
void* data_arg = nullptr)
: callback(function), data(data_arg) {}
void (*callback)(Local<Object> holder, int index, StartupData payload,
void* data);
void* data;
};
using DeserializeEmbedderFieldsCallback = DeserializeInternalFieldsCallback;
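The two callback types above are symmetric: whatever bytes the serializer returns for a given (holder, index) are handed back verbatim as the payload on deserialization. A simplified round-trip with plain types standing in for `Local<Object>` and `StartupData`:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Plain stand-in for the StartupData payload produced by a
// SerializeInternalFieldsCallback and consumed by its deserialize twin.
struct Payload {
  std::vector<char> bytes;
};

// Serialization side: encode an internal-field value as raw bytes.
Payload SerializeField(int value) {
  Payload p;
  p.bytes.resize(sizeof value);
  std::memcpy(p.bytes.data(), &value, sizeof value);
  return p;
}

// Deserialization side: the payload arrives byte-for-byte as produced above.
int DeserializeField(const Payload& p) {
  int value = 0;
  std::memcpy(&value, p.bytes.data(), sizeof value);
  return value;
}
```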
/**
* Helper class to create a snapshot data blob.
*
* The Isolate used by a SnapshotCreator is owned by it, and will be entered
 * and exited by the constructor and destructor, respectively; the destructor
* will also destroy the Isolate. Experimental language features, including
* those available by default, are not available while creating a snapshot.
*/
class V8_EXPORT SnapshotCreator {
public:
enum class FunctionCodeHandling { kClear, kKeep };
/**
* Initialize and enter an isolate, and set it up for serialization.
* The isolate is either created from scratch or from an existing snapshot.
* The caller keeps ownership of the argument snapshot.
* \param existing_blob existing snapshot from which to create this one.
* \param external_references a null-terminated array of external references
* that must be equivalent to CreateParams::external_references.
* \param owns_isolate whether this SnapshotCreator should call
* v8::Isolate::Dispose() during its destructor.
*/
SnapshotCreator(Isolate* isolate,
const intptr_t* external_references = nullptr,
const StartupData* existing_blob = nullptr,
bool owns_isolate = true);
/**
* Create and enter an isolate, and set it up for serialization.
* The isolate is either created from scratch or from an existing snapshot.
* The caller keeps ownership of the argument snapshot.
* \param existing_blob existing snapshot from which to create this one.
* \param external_references a null-terminated array of external references
* that must be equivalent to CreateParams::external_references.
*/
SnapshotCreator(const intptr_t* external_references = nullptr,
const StartupData* existing_blob = nullptr);
/**
* Destroy the snapshot creator, and exit and dispose of the Isolate
* associated with it.
*/
~SnapshotCreator();
/**
* \returns the isolate prepared by the snapshot creator.
*/
Isolate* GetIsolate();
/**
* Set the default context to be included in the snapshot blob.
 * The snapshot will not contain the global proxy, and we expect one, or a
 * global object template to create one, to be provided upon deserialization.
*
* \param callback optional callback to serialize internal fields.
*/
void SetDefaultContext(Local<Context> context,
SerializeInternalFieldsCallback callback =
SerializeInternalFieldsCallback());
/**
* Add additional context to be included in the snapshot blob.
* The snapshot will include the global proxy.
*
* \param callback optional callback to serialize internal fields.
*
* \returns the index of the context in the snapshot blob.
*/
size_t AddContext(Local<Context> context,
SerializeInternalFieldsCallback callback =
SerializeInternalFieldsCallback());
/**
* Attach arbitrary V8::Data to the context snapshot, which can be retrieved
* via Context::GetDataFromSnapshotOnce after deserialization. This data does
* not survive when a new snapshot is created from an existing snapshot.
* \returns the index for retrieval.
*/
template <class T>
V8_INLINE size_t AddData(Local<Context> context, Local<T> object);
/**
* Attach arbitrary V8::Data to the isolate snapshot, which can be retrieved
* via Isolate::GetDataFromSnapshotOnce after deserialization. This data does
* not survive when a new snapshot is created from an existing snapshot.
* \returns the index for retrieval.
*/
template <class T>
V8_INLINE size_t AddData(Local<T> object);
/**
 * Creates a snapshot data blob.
* This must not be called from within a handle scope.
* \param function_code_handling whether to include compiled function code
* in the snapshot.
* \returns { nullptr, 0 } on failure, and a startup snapshot on success. The
* caller acquires ownership of the data array in the return value.
*/
StartupData CreateBlob(FunctionCodeHandling function_code_handling);
// Disallow copying and assigning.
SnapshotCreator(const SnapshotCreator&) = delete;
void operator=(const SnapshotCreator&) = delete;
private:
size_t AddData(Local<Context> context, internal::Address object);
size_t AddData(internal::Address object);
void* data_;
};
template <class T>
size_t SnapshotCreator::AddData(Local<Context> context, Local<T> object) {
return AddData(context, internal::ValueHelper::ValueAsAddress(*object));
}
template <class T>
size_t SnapshotCreator::AddData(Local<T> object) {
return AddData(internal::ValueHelper::ValueAsAddress(*object));
}
} // namespace v8
#endif // INCLUDE_V8_SNAPSHOT_H_
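The typical flow through this API can be sketched as follows. This is a hedged illustration, not part of the header: it assumes the embedder has already initialized the V8 platform, compiles only when linked against V8, and omits error handling and warm-up scripts. `BuildSnapshot` is a name invented here.

```cpp
#include "v8-context.h"       // NOLINT(build/include_directory)
#include "v8-isolate.h"       // NOLINT(build/include_directory)
#include "v8-local-handle.h"  // NOLINT(build/include_directory)
#include "v8-snapshot.h"      // NOLINT(build/include_directory)

// Sketch: build a startup snapshot containing one default context.
v8::StartupData BuildSnapshot() {
  v8::SnapshotCreator creator;  // creates, enters, and owns its own isolate
  {
    v8::Isolate* isolate = creator.GetIsolate();
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    // Run any warm-up code against `context` here, then mark it as the
    // default context to be embedded in the blob.
    creator.SetDefaultContext(context);
  }
  // CreateBlob must not be called from within a handle scope; the caller
  // owns the returned data array and should delete[] blob.data when done.
  return creator.CreateBlob(
      v8::SnapshotCreator::FunctionCodeHandling::kClear);
}
```

On the deserialization side, the resulting blob is typically handed back to V8 via `Isolate::CreateParams::snapshot_blob`.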


@ -0,0 +1,92 @@
// Copyright 2020 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_SOURCE_LOCATION_H_
#define INCLUDE_SOURCE_LOCATION_H_
#include <cstddef>
#include <string>
#include "v8config.h" // NOLINT(build/include_directory)
#if defined(__has_builtin)
#define V8_SUPPORTS_SOURCE_LOCATION \
(__has_builtin(__builtin_FUNCTION) && __has_builtin(__builtin_FILE) && \
__has_builtin(__builtin_LINE)) // NOLINT
#elif defined(V8_CC_GNU) && __GNUC__ >= 7
#define V8_SUPPORTS_SOURCE_LOCATION 1
#elif defined(V8_CC_INTEL) && __ICC >= 1800
#define V8_SUPPORTS_SOURCE_LOCATION 1
#else
#define V8_SUPPORTS_SOURCE_LOCATION 0
#endif
namespace v8 {
/**
* Encapsulates source location information. Mimics C++20's
* `std::source_location`.
*/
class V8_EXPORT SourceLocation final {
public:
/**
* Construct source location information corresponding to the location of the
* call site.
*/
#if V8_SUPPORTS_SOURCE_LOCATION
static constexpr SourceLocation Current(
const char* function = __builtin_FUNCTION(),
const char* file = __builtin_FILE(), size_t line = __builtin_LINE()) {
return SourceLocation(function, file, line);
}
#else
static constexpr SourceLocation Current() { return SourceLocation(); }
#endif // V8_SUPPORTS_SOURCE_LOCATION
/**
* Constructs unspecified source location information.
*/
constexpr SourceLocation() = default;
/**
* Returns the name of the function associated with the position represented
* by this object, if any.
*
* \returns the function name as cstring.
*/
constexpr const char* Function() const { return function_; }
/**
* Returns the name of the current source file represented by this object.
*
* \returns the file name as cstring.
*/
constexpr const char* FileName() const { return file_; }
/**
* Returns the line number represented by this object.
*
* \returns the line number.
*/
constexpr size_t Line() const { return line_; }
/**
* Returns a human-readable string representing this object.
*
* \returns a human-readable string representing source location information.
*/
std::string ToString() const;
private:
constexpr SourceLocation(const char* function, const char* file, size_t line)
: function_(function), file_(file), line_(line) {}
const char* function_ = nullptr;
const char* file_ = nullptr;
size_t line_ = 0u;
};
} // namespace v8
#endif // INCLUDE_SOURCE_LOCATION_H_


@ -0,0 +1,217 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_STATISTICS_H_
#define INCLUDE_V8_STATISTICS_H_
#include <stddef.h>
#include <stdint.h>
#include <memory>
#include <utility>
#include <vector>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-promise.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Context;
class Isolate;
namespace internal {
class ReadOnlyHeap;
} // namespace internal
/**
* Controls how the default MeasureMemoryDelegate reports the result of
* the memory measurement to JS. With kSummary only the total size is reported.
* With kDetailed the result includes the size of each native context.
*/
enum class MeasureMemoryMode { kSummary, kDetailed };
/**
* Controls how promptly a memory measurement request is executed.
* By default the measurement is folded with the next scheduled GC which may
* happen after a while and is forced after some timeout.
* The kEager mode starts incremental GC right away and is useful for testing.
* The kLazy mode does not force GC.
*/
enum class MeasureMemoryExecution { kDefault, kEager, kLazy };
/**
* The delegate is used in Isolate::MeasureMemory API.
*
* It specifies the contexts that need to be measured and gets called when
* the measurement is completed to report the results.
*/
class V8_EXPORT MeasureMemoryDelegate {
public:
virtual ~MeasureMemoryDelegate() = default;
/**
* Returns true if the size of the given context needs to be measured.
*/
virtual bool ShouldMeasure(Local<Context> context) = 0;
/**
* This function is called when memory measurement finishes.
*
* \param context_sizes_in_bytes a vector of (context, size) pairs that
* includes each context for which ShouldMeasure returned true and that
* was not garbage collected while the memory measurement was in progress.
*
* \param unattributed_size_in_bytes total size of objects that were not
* attributed to any context (i.e. are likely shared objects).
*/
virtual void MeasurementComplete(
const std::vector<std::pair<Local<Context>, size_t>>&
context_sizes_in_bytes,
size_t unattributed_size_in_bytes) = 0;
/**
* Returns a default delegate that resolves the given promise when
* the memory measurement completes.
*
* \param isolate the current isolate
* \param context the current context
* \param promise_resolver the promise resolver that is given the
* result of the memory measurement.
* \param mode the detail level of the result.
*/
static std::unique_ptr<MeasureMemoryDelegate> Default(
Isolate* isolate, Local<Context> context,
Local<Promise::Resolver> promise_resolver, MeasureMemoryMode mode);
};
/**
* Collection of shared per-process V8 memory information.
*
* Instances of this class can be passed to
* v8::V8::GetSharedMemoryStatistics to get shared memory statistics from V8.
*/
class V8_EXPORT SharedMemoryStatistics {
public:
SharedMemoryStatistics();
size_t read_only_space_size() { return read_only_space_size_; }
size_t read_only_space_used_size() { return read_only_space_used_size_; }
size_t read_only_space_physical_size() {
return read_only_space_physical_size_;
}
private:
size_t read_only_space_size_;
size_t read_only_space_used_size_;
size_t read_only_space_physical_size_;
friend class V8;
friend class internal::ReadOnlyHeap;
};
/**
* Collection of V8 heap information.
*
* Instances of this class can be passed to v8::Isolate::GetHeapStatistics to
* get heap statistics from V8.
*/
class V8_EXPORT HeapStatistics {
public:
HeapStatistics();
size_t total_heap_size() { return total_heap_size_; }
size_t total_heap_size_executable() { return total_heap_size_executable_; }
size_t total_physical_size() { return total_physical_size_; }
size_t total_available_size() { return total_available_size_; }
size_t total_global_handles_size() { return total_global_handles_size_; }
size_t used_global_handles_size() { return used_global_handles_size_; }
size_t used_heap_size() { return used_heap_size_; }
size_t heap_size_limit() { return heap_size_limit_; }
size_t malloced_memory() { return malloced_memory_; }
size_t external_memory() { return external_memory_; }
size_t peak_malloced_memory() { return peak_malloced_memory_; }
size_t number_of_native_contexts() { return number_of_native_contexts_; }
size_t number_of_detached_contexts() { return number_of_detached_contexts_; }
/**
   * Returns a 0/1 boolean, which signifies whether V8 overwrites heap
   * garbage with a bit pattern.
*/
size_t does_zap_garbage() { return does_zap_garbage_; }
private:
size_t total_heap_size_;
size_t total_heap_size_executable_;
size_t total_physical_size_;
size_t total_available_size_;
size_t used_heap_size_;
size_t heap_size_limit_;
size_t malloced_memory_;
size_t external_memory_;
size_t peak_malloced_memory_;
bool does_zap_garbage_;
size_t number_of_native_contexts_;
size_t number_of_detached_contexts_;
size_t total_global_handles_size_;
size_t used_global_handles_size_;
friend class V8;
friend class Isolate;
};
class V8_EXPORT HeapSpaceStatistics {
public:
HeapSpaceStatistics();
const char* space_name() { return space_name_; }
size_t space_size() { return space_size_; }
size_t space_used_size() { return space_used_size_; }
size_t space_available_size() { return space_available_size_; }
size_t physical_space_size() { return physical_space_size_; }
private:
const char* space_name_;
size_t space_size_;
size_t space_used_size_;
size_t space_available_size_;
size_t physical_space_size_;
friend class Isolate;
};
class V8_EXPORT HeapObjectStatistics {
public:
HeapObjectStatistics();
const char* object_type() { return object_type_; }
const char* object_sub_type() { return object_sub_type_; }
size_t object_count() { return object_count_; }
size_t object_size() { return object_size_; }
private:
const char* object_type_;
const char* object_sub_type_;
size_t object_count_;
size_t object_size_;
friend class Isolate;
};
class V8_EXPORT HeapCodeStatistics {
public:
HeapCodeStatistics();
size_t code_and_metadata_size() { return code_and_metadata_size_; }
size_t bytecode_and_metadata_size() { return bytecode_and_metadata_size_; }
size_t external_script_source_size() { return external_script_source_size_; }
size_t cpu_profiler_metadata_size() { return cpu_profiler_metadata_size_; }
private:
size_t code_and_metadata_size_;
size_t bytecode_and_metadata_size_;
size_t external_script_source_size_;
size_t cpu_profiler_metadata_size_;
friend class Isolate;
};
} // namespace v8
#endif // INCLUDE_V8_STATISTICS_H_
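A hedged usage sketch for the statistics classes above: `HeapStatistics` is a plain out-parameter that `Isolate::GetHeapStatistics` fills in place, with all sizes in bytes. This compiles only against V8 and assumes a live isolate; `PrintHeapUsage` is a name invented here.

```cpp
#include <cstdio>

#include "v8-isolate.h"     // NOLINT(build/include_directory)
#include "v8-statistics.h"  // NOLINT(build/include_directory)

// Sketch: read and print basic heap usage numbers for an isolate.
void PrintHeapUsage(v8::Isolate* isolate) {
  v8::HeapStatistics stats;
  isolate->GetHeapStatistics(&stats);  // fills the struct in place
  std::printf("used %zu of %zu bytes (limit %zu)\n",
              stats.used_heap_size(), stats.total_heap_size(),
              stats.heap_size_limit());
}
```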

File diff suppressed because it is too large.


@ -0,0 +1,397 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_TRACED_HANDLE_H_
#define INCLUDE_V8_TRACED_HANDLE_H_
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <atomic>
#include <memory>
#include <type_traits>
#include <utility>
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-weak-callback-info.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class Value;
namespace internal {
class BasicTracedReferenceExtractor;
enum class GlobalHandleStoreMode {
kInitializingStore,
kAssigningStore,
};
V8_EXPORT internal::Address* GlobalizeTracedReference(
internal::Isolate* isolate, internal::Address value,
internal::Address* slot, GlobalHandleStoreMode store_mode);
V8_EXPORT void MoveTracedReference(internal::Address** from,
internal::Address** to);
V8_EXPORT void CopyTracedReference(const internal::Address* const* from,
internal::Address** to);
V8_EXPORT void DisposeTracedReference(internal::Address* global_handle);
} // namespace internal
/**
* An indirect handle, where the indirect pointer points to a GlobalHandles
* node.
*/
class TracedReferenceBase : public IndirectHandleBase {
public:
/**
* If non-empty, destroy the underlying storage cell. |IsEmpty| will return
* true after this call.
*/
V8_INLINE void Reset();
/**
* Construct a Local<Value> from this handle.
*/
V8_INLINE Local<Value> Get(Isolate* isolate) const {
if (IsEmpty()) return Local<Value>();
return Local<Value>::New(isolate, this->value<Value>());
}
/**
* Returns true if this TracedReference is empty, i.e., has not been
* assigned an object. This version of IsEmpty is thread-safe.
*/
bool IsEmptyThreadSafe() const {
return this->GetSlotThreadSafe() == nullptr;
}
/**
* Assigns a wrapper class ID to the handle.
*/
V8_INLINE void SetWrapperClassId(uint16_t class_id);
/**
* Returns the class ID previously assigned to this handle or 0 if no class ID
* was previously assigned.
*/
V8_INLINE uint16_t WrapperClassId() const;
protected:
V8_INLINE TracedReferenceBase() = default;
/**
* Update this reference in a thread-safe way.
*/
void SetSlotThreadSafe(void* new_val) {
reinterpret_cast<std::atomic<void*>*>(&slot())->store(
new_val, std::memory_order_relaxed);
}
/**
* Get this reference in a thread-safe way
*/
const void* GetSlotThreadSafe() const {
return reinterpret_cast<std::atomic<const void*> const*>(&slot())->load(
std::memory_order_relaxed);
}
V8_EXPORT void CheckValue() const;
friend class internal::BasicTracedReferenceExtractor;
template <typename F>
friend class Local;
template <typename U>
friend bool operator==(const TracedReferenceBase&, const Local<U>&);
friend bool operator==(const TracedReferenceBase&,
const TracedReferenceBase&);
};
/**
 * A traced handle with copy and move semantics. The handle is to be used as
 * part of GarbageCollected objects (see v8-cppgc.h) or from the stack, and
 * specifies edges from C++ objects to JavaScript.
*
* The exact semantics are:
* - Tracing garbage collections using CppHeap.
* - Non-tracing garbage collections refer to
* |v8::EmbedderRootsHandler::IsRoot()| whether the handle should
* be treated as root or not.
*
* Note that the base class cannot be instantiated itself, use |TracedReference|
* instead.
*/
template <typename T>
class BasicTracedReference : public TracedReferenceBase {
public:
/**
* Construct a Local<T> from this handle.
*/
Local<T> Get(Isolate* isolate) const { return Local<T>::New(isolate, *this); }
template <class S>
V8_INLINE BasicTracedReference<S>& As() const {
return reinterpret_cast<BasicTracedReference<S>&>(
const_cast<BasicTracedReference<T>&>(*this));
}
V8_DEPRECATE_SOON("Use Get to convert to Local instead")
V8_INLINE T* operator->() const {
#ifdef V8_ENABLE_CHECKS
CheckValue();
#endif // V8_ENABLE_CHECKS
return this->template value<T>();
}
V8_DEPRECATE_SOON("Use Get to convert to Local instead")
V8_INLINE T* operator*() const { return this->operator->(); }
private:
/**
* An empty BasicTracedReference without storage cell.
*/
BasicTracedReference() = default;
V8_INLINE static internal::Address* New(
Isolate* isolate, T* that, internal::Address** slot,
internal::GlobalHandleStoreMode store_mode);
template <typename F>
friend class Local;
friend class Object;
template <typename F>
friend class TracedReference;
template <typename F>
friend class BasicTracedReference;
template <typename F>
friend class ReturnValue;
};
/**
 * A traced handle without a destructor that clears the handle. The embedder needs
* to ensure that the handle is not accessed once the V8 object has been
* reclaimed. For more details see BasicTracedReference.
*/
template <typename T>
class TracedReference : public BasicTracedReference<T> {
public:
using BasicTracedReference<T>::Reset;
/**
* An empty TracedReference without storage cell.
*/
V8_INLINE TracedReference() = default;
/**
* Construct a TracedReference from a Local.
*
* When the Local is non-empty, a new storage cell is created
* pointing to the same object.
*/
template <class S>
TracedReference(Isolate* isolate, Local<S> that) : BasicTracedReference<T>() {
this->slot() =
this->New(isolate, *that, &this->slot(),
internal::GlobalHandleStoreMode::kInitializingStore);
static_assert(std::is_base_of<T, S>::value, "type check");
}
/**
* Move constructor initializing TracedReference from an
* existing one.
*/
V8_INLINE TracedReference(TracedReference&& other) noexcept {
// Forward to operator=.
*this = std::move(other);
}
/**
* Move constructor initializing TracedReference from an
* existing one.
*/
template <typename S>
V8_INLINE TracedReference(TracedReference<S>&& other) noexcept {
// Forward to operator=.
*this = std::move(other);
}
/**
* Copy constructor initializing TracedReference from an
* existing one.
*/
V8_INLINE TracedReference(const TracedReference& other) {
// Forward to operator=;
*this = other;
}
/**
* Copy constructor initializing TracedReference from an
* existing one.
*/
template <typename S>
V8_INLINE TracedReference(const TracedReference<S>& other) {
// Forward to operator=;
*this = other;
}
/**
* Move assignment operator initializing TracedReference from an existing one.
*/
V8_INLINE TracedReference& operator=(TracedReference&& rhs) noexcept;
/**
* Move assignment operator initializing TracedReference from an existing one.
*/
template <class S>
V8_INLINE TracedReference& operator=(TracedReference<S>&& rhs) noexcept;
/**
* Copy assignment operator initializing TracedReference from an existing one.
*/
V8_INLINE TracedReference& operator=(const TracedReference& rhs);
/**
* Copy assignment operator initializing TracedReference from an existing one.
*/
template <class S>
V8_INLINE TracedReference& operator=(const TracedReference<S>& rhs);
/**
   * Destroys the underlying storage cell, if non-empty, and creates a new one
   * with the contents of |other| if |other| is non-empty.
*/
template <class S>
V8_INLINE void Reset(Isolate* isolate, const Local<S>& other);
template <class S>
V8_INLINE TracedReference<S>& As() const {
return reinterpret_cast<TracedReference<S>&>(
const_cast<TracedReference<T>&>(*this));
}
};
// --- Implementation ---
template <class T>
internal::Address* BasicTracedReference<T>::New(
Isolate* isolate, T* that, internal::Address** slot,
internal::GlobalHandleStoreMode store_mode) {
if (internal::ValueHelper::IsEmpty(that)) return nullptr;
return internal::GlobalizeTracedReference(
reinterpret_cast<internal::Isolate*>(isolate),
internal::ValueHelper::ValueAsAddress(that),
reinterpret_cast<internal::Address*>(slot), store_mode);
}
void TracedReferenceBase::Reset() {
if (IsEmpty()) return;
internal::DisposeTracedReference(slot());
SetSlotThreadSafe(nullptr);
}
V8_INLINE bool operator==(const TracedReferenceBase& lhs,
const TracedReferenceBase& rhs) {
return internal::HandleHelper::EqualHandles(lhs, rhs);
}
template <typename U>
V8_INLINE bool operator==(const TracedReferenceBase& lhs,
const v8::Local<U>& rhs) {
return internal::HandleHelper::EqualHandles(lhs, rhs);
}
template <typename U>
V8_INLINE bool operator==(const v8::Local<U>& lhs,
const TracedReferenceBase& rhs) {
return rhs == lhs;
}
V8_INLINE bool operator!=(const TracedReferenceBase& lhs,
const TracedReferenceBase& rhs) {
return !(lhs == rhs);
}
template <typename U>
V8_INLINE bool operator!=(const TracedReferenceBase& lhs,
const v8::Local<U>& rhs) {
return !(lhs == rhs);
}
template <typename U>
V8_INLINE bool operator!=(const v8::Local<U>& lhs,
const TracedReferenceBase& rhs) {
return !(rhs == lhs);
}
template <class T>
template <class S>
void TracedReference<T>::Reset(Isolate* isolate, const Local<S>& other) {
static_assert(std::is_base_of<T, S>::value, "type check");
this->Reset();
if (other.IsEmpty()) return;
this->SetSlotThreadSafe(
this->New(isolate, *other, &this->slot(),
internal::GlobalHandleStoreMode::kAssigningStore));
}
template <class T>
template <class S>
TracedReference<T>& TracedReference<T>::operator=(
TracedReference<S>&& rhs) noexcept {
static_assert(std::is_base_of<T, S>::value, "type check");
*this = std::move(rhs.template As<T>());
return *this;
}
template <class T>
template <class S>
TracedReference<T>& TracedReference<T>::operator=(
const TracedReference<S>& rhs) {
static_assert(std::is_base_of<T, S>::value, "type check");
*this = rhs.template As<T>();
return *this;
}
template <class T>
TracedReference<T>& TracedReference<T>::operator=(
TracedReference&& rhs) noexcept {
if (this != &rhs) {
internal::MoveTracedReference(&rhs.slot(), &this->slot());
}
return *this;
}
template <class T>
TracedReference<T>& TracedReference<T>::operator=(const TracedReference& rhs) {
if (this != &rhs) {
this->Reset();
if (!rhs.IsEmpty()) {
internal::CopyTracedReference(&rhs.slot(), &this->slot());
}
}
return *this;
}
void TracedReferenceBase::SetWrapperClassId(uint16_t class_id) {
using I = internal::Internals;
if (IsEmpty()) return;
uint8_t* addr =
reinterpret_cast<uint8_t*>(slot()) + I::kTracedNodeClassIdOffset;
*reinterpret_cast<uint16_t*>(addr) = class_id;
}
uint16_t TracedReferenceBase::WrapperClassId() const {
using I = internal::Internals;
if (IsEmpty()) return 0;
uint8_t* addr =
reinterpret_cast<uint8_t*>(slot()) + I::kTracedNodeClassIdOffset;
return *reinterpret_cast<uint16_t*>(addr);
}
} // namespace v8
#endif // INCLUDE_V8_TRACED_HANDLE_H_


@ -0,0 +1,282 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_TYPED_ARRAY_H_
#define INCLUDE_V8_TYPED_ARRAY_H_
#include "v8-array-buffer.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class SharedArrayBuffer;
/**
* A base class for an instance of TypedArray series of constructors
* (ES6 draft 15.13.6).
*/
class V8_EXPORT TypedArray : public ArrayBufferView {
public:
/*
* The largest typed array size that can be constructed using New.
*/
static constexpr size_t kMaxLength =
internal::kApiSystemPointerSize == 4
? internal::kSmiMaxValue
: static_cast<size_t>(uint64_t{1} << 32);
/**
* Number of elements in this typed array
* (e.g. for Int16Array, |ByteLength|/2).
*/
size_t Length();
V8_INLINE static TypedArray* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<TypedArray*>(value);
}
private:
TypedArray();
static void CheckCast(Value* obj);
};
/**
* An instance of Uint8Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Uint8Array : public TypedArray {
public:
static Local<Uint8Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Uint8Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Uint8Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Uint8Array*>(value);
}
private:
Uint8Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Uint8ClampedArray constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Uint8ClampedArray : public TypedArray {
public:
static Local<Uint8ClampedArray> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Uint8ClampedArray> New(
Local<SharedArrayBuffer> shared_array_buffer, size_t byte_offset,
size_t length);
V8_INLINE static Uint8ClampedArray* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Uint8ClampedArray*>(value);
}
private:
Uint8ClampedArray();
static void CheckCast(Value* obj);
};
/**
* An instance of Int8Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Int8Array : public TypedArray {
public:
static Local<Int8Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Int8Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Int8Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Int8Array*>(value);
}
private:
Int8Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Uint16Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Uint16Array : public TypedArray {
public:
static Local<Uint16Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Uint16Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Uint16Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Uint16Array*>(value);
}
private:
Uint16Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Int16Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Int16Array : public TypedArray {
public:
static Local<Int16Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Int16Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Int16Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Int16Array*>(value);
}
private:
Int16Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Uint32Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Uint32Array : public TypedArray {
public:
static Local<Uint32Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Uint32Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Uint32Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Uint32Array*>(value);
}
private:
Uint32Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Int32Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Int32Array : public TypedArray {
public:
static Local<Int32Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Int32Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Int32Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Int32Array*>(value);
}
private:
Int32Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Float32Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Float32Array : public TypedArray {
public:
static Local<Float32Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Float32Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Float32Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Float32Array*>(value);
}
private:
Float32Array();
static void CheckCast(Value* obj);
};
/**
* An instance of Float64Array constructor (ES6 draft 15.13.6).
*/
class V8_EXPORT Float64Array : public TypedArray {
public:
static Local<Float64Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<Float64Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static Float64Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Float64Array*>(value);
}
private:
Float64Array();
static void CheckCast(Value* obj);
};
/**
* An instance of BigInt64Array constructor.
*/
class V8_EXPORT BigInt64Array : public TypedArray {
public:
static Local<BigInt64Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<BigInt64Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static BigInt64Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<BigInt64Array*>(value);
}
private:
BigInt64Array();
static void CheckCast(Value* obj);
};
/**
* An instance of BigUint64Array constructor.
*/
class V8_EXPORT BigUint64Array : public TypedArray {
public:
static Local<BigUint64Array> New(Local<ArrayBuffer> array_buffer,
size_t byte_offset, size_t length);
static Local<BigUint64Array> New(Local<SharedArrayBuffer> shared_array_buffer,
size_t byte_offset, size_t length);
V8_INLINE static BigUint64Array* Cast(Value* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<BigUint64Array*>(value);
}
private:
BigUint64Array();
static void CheckCast(Value* obj);
};
} // namespace v8
#endif // INCLUDE_V8_TYPED_ARRAY_H_
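All of the `New` overloads above share the same shape: a backing (shared) array buffer, a byte offset into it, and a length in *elements*. A hedged usage sketch (compiles only against V8 and assumes an entered isolate and context, which are not shown; `MakeView` is a name invented here):

```cpp
#include "v8-array-buffer.h"  // NOLINT(build/include_directory)
#include "v8-local-handle.h"  // NOLINT(build/include_directory)
#include "v8-typed-array.h"   // NOLINT(build/include_directory)

// Sketch: a Uint8Array view covering bytes [16, 48) of a 64-byte buffer.
v8::Local<v8::Uint8Array> MakeView(v8::Isolate* isolate) {
  v8::EscapableHandleScope scope(isolate);
  v8::Local<v8::ArrayBuffer> buffer = v8::ArrayBuffer::New(isolate, 64);
  // byte_offset is in bytes; length is in elements (bytes for Uint8Array).
  v8::Local<v8::Uint8Array> view =
      v8::Uint8Array::New(buffer, /*byte_offset=*/16, /*length=*/32);
  return scope.Escape(view);
}
```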


@ -17,9 +17,10 @@ struct CalleeSavedRegisters {
void* arm_r9;
void* arm_r10;
};
#elif V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM64 || \
V8_TARGET_ARCH_MIPS || V8_TARGET_ARCH_MIPS64 || V8_TARGET_ARCH_PPC || \
V8_TARGET_ARCH_PPC64 || V8_TARGET_ARCH_RISCV64 || V8_TARGET_ARCH_S390
#elif V8_TARGET_ARCH_X64 || V8_TARGET_ARCH_IA32 || V8_TARGET_ARCH_ARM64 || \
V8_TARGET_ARCH_MIPS64 || V8_TARGET_ARCH_PPC || V8_TARGET_ARCH_PPC64 || \
V8_TARGET_ARCH_RISCV64 || V8_TARGET_ARCH_S390 || V8_TARGET_ARCH_LOONG64 || \
V8_TARGET_ARCH_RISCV32
struct CalleeSavedRegisters {};
#else
#error Target architecture was not detected as supported by v8


@ -0,0 +1,132 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_UNWINDER_H_
#define INCLUDE_V8_UNWINDER_H_
#include <memory>
#include "v8-embedder-state-scope.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
// Holds the callee saved registers needed for the stack unwinder. It is the
// empty struct if no registers are required. Implemented in
// include/v8-unwinder-state.h.
struct CalleeSavedRegisters;
// A RegisterState represents the current state of registers used
// by the sampling profiler API.
struct V8_EXPORT RegisterState {
RegisterState();
~RegisterState();
RegisterState(const RegisterState& other);
RegisterState& operator=(const RegisterState& other);
void* pc; // Instruction pointer.
void* sp; // Stack pointer.
void* fp; // Frame pointer.
void* lr; // Link register (or nullptr on platforms without a link register).
// Callee saved registers (or null if no callee saved registers were stored)
std::unique_ptr<CalleeSavedRegisters> callee_saved;
};
// A StateTag represents a possible state of the VM.
enum StateTag : uint16_t {
JS,
GC,
PARSER,
BYTECODE_COMPILER,
COMPILER,
OTHER,
EXTERNAL,
ATOMICS_WAIT,
IDLE
};
// The output structure filled up by GetStackSample API function.
struct SampleInfo {
size_t frames_count; // Number of frames collected.
void* external_callback_entry; // External callback address if VM is
// executing an external callback.
void* context; // Incumbent native context address.
  void* embedder_context;  // Native context address for embedder state.
  StateTag vm_state;       // Current VM state.
  EmbedderStateTag embedder_state;  // Current embedder state.
};
struct MemoryRange {
const void* start = nullptr;
size_t length_in_bytes = 0;
};
struct JSEntryStub {
MemoryRange code;
};
struct JSEntryStubs {
JSEntryStub js_entry_stub;
JSEntryStub js_construct_entry_stub;
JSEntryStub js_run_microtasks_entry_stub;
};
/**
* Various helpers for skipping over V8 frames in a given stack.
*
* The unwinder API is only supported on the x64, ARM64 and ARM32 architectures.
*/
class V8_EXPORT Unwinder {
public:
/**
* Attempt to unwind the stack to the most recent C++ frame. This function is
* signal-safe and does not access any V8 state and thus doesn't require an
* Isolate.
*
* The unwinder needs to know the location of the JS Entry Stub (a piece of
* code that is run when C++ code calls into generated JS code). This is used
* for edge cases where the current frame is being constructed or torn down
* when the stack sample occurs.
*
* The unwinder also needs the virtual memory range of all possible V8 code
* objects. There are two ranges required - the heap code range and the range
* for code embedded in the binary.
*
* Available on x64, ARM64 and ARM32.
*
* \param code_pages A list of all of the ranges in which V8 has allocated
* executable code. The caller should obtain this list by calling
* Isolate::CopyCodePages() during the same interrupt/thread suspension that
* captures the stack.
* \param register_state The current registers. This is an in-out param that
* will be overwritten with the register values after unwinding, on success.
* \param stack_base The resulting stack pointer and frame pointer values are
* bounds-checked against the stack_base and the original stack pointer value
* to ensure that they are valid locations in the given stack. If these values
* or any intermediate frame pointer values used during unwinding are ever out
* of these bounds, unwinding will fail.
*
* \return True on success.
*/
static bool TryUnwindV8Frames(const JSEntryStubs& entry_stubs,
size_t code_pages_length,
const MemoryRange* code_pages,
RegisterState* register_state,
const void* stack_base);
/**
* Whether the PC is within the V8 code range represented by code_pages.
*
* If this returns false, then calling UnwindV8Frames() with the same PC
* and unwind_state will always fail. If it returns true, then unwinding may
* (but not necessarily) be successful.
*
* Available on x64, ARM64 and ARM32.
*/
static bool PCIsInV8(size_t code_pages_length, const MemoryRange* code_pages,
void* pc);
};
} // namespace v8
#endif // INCLUDE_V8_UNWINDER_H_


@@ -5,11 +5,14 @@
#ifndef V8_UTIL_H_
#define V8_UTIL_H_
#include "v8.h" // NOLINT(build/include_directory)
#include <assert.h>
#include <map>
#include <vector>
#include "v8-function-callback.h" // NOLINT(build/include_directory)
#include "v8-persistent-handle.h" // NOLINT(build/include_directory)
/**
* Support for Persistent containers.
*
@@ -19,6 +22,9 @@
*/
namespace v8 {
template <typename K, typename V, typename Traits>
class GlobalValueMap;
typedef uintptr_t PersistentContainerValue;
static const uintptr_t kPersistentContainerNotFound = 0;
enum PersistentContainerCallbackType {
@@ -43,7 +49,7 @@ class StdMapTraits {
static bool Empty(Impl* impl) { return impl->empty(); }
static size_t Size(Impl* impl) { return impl->size(); }
static void Swap(Impl& a, Impl& b) { std::swap(a, b); } // NOLINT
static void Swap(Impl& a, Impl& b) { std::swap(a, b); }
static Iterator Begin(Impl* impl) { return impl->begin(); }
static Iterator End(Impl* impl) { return impl->end(); }
static K Key(Iterator it) { return it->first; }
@@ -175,7 +181,11 @@ class PersistentValueMapBase {
* Get value stored in map.
*/
Local<V> Get(const K& key) {
return Local<V>::New(isolate_, FromVal(Traits::Get(&impl_, key)));
V* p = FromVal(Traits::Get(&impl_, key));
#ifdef V8_ENABLE_DIRECT_LOCAL
if (p == nullptr) return Local<V>();
#endif
return Local<V>::New(isolate_, p);
}
/**
@@ -230,7 +240,8 @@ class PersistentValueMapBase {
: value_(other.value_) { }
Local<V> NewLocal(Isolate* isolate) const {
return Local<V>::New(isolate, FromVal(value_));
return Local<V>::New(
isolate, internal::ValueHelper::SlotAsValue<V>(FromVal(value_)));
}
bool IsEmpty() const {
return value_ == kPersistentContainerNotFound;
@@ -291,13 +302,13 @@ class PersistentValueMapBase {
}
static PersistentContainerValue ClearAndLeak(Global<V>* persistent) {
V* v = persistent->val_;
persistent->val_ = nullptr;
return reinterpret_cast<PersistentContainerValue>(v);
internal::Address* address = persistent->slot();
persistent->Clear();
return reinterpret_cast<PersistentContainerValue>(address);
}
static PersistentContainerValue Leak(Global<V>* persistent) {
return reinterpret_cast<PersistentContainerValue>(persistent->val_);
return reinterpret_cast<PersistentContainerValue>(persistent->slot());
}
/**
@@ -307,7 +318,7 @@
*/
static Global<V> Release(PersistentContainerValue v) {
Global<V> p;
p.val_ = FromVal(v);
p.slot() = reinterpret_cast<internal::Address*>(FromVal(v));
if (Traits::kCallbackType != kNotWeak && p.IsWeak()) {
Traits::DisposeCallbackData(
p.template ClearWeak<typename Traits::WeakCallbackDataType>());
@@ -317,7 +328,8 @@
void RemoveWeak(const K& key) {
Global<V> p;
p.val_ = FromVal(Traits::Remove(&impl_, key));
p.slot() = reinterpret_cast<internal::Address*>(
FromVal(Traits::Remove(&impl_, key)));
p.Reset();
}
@@ -385,7 +397,7 @@ class PersistentValueMap : public PersistentValueMapBase<K, V, Traits> {
Traits::kCallbackType == kWeakWithInternalFields
? WeakCallbackType::kInternalFields
: WeakCallbackType::kParameter;
Local<V> value(Local<V>::New(this->isolate(), *persistent));
auto value = Local<V>::New(this->isolate(), *persistent);
persistent->template SetWeak<typename Traits::WeakCallbackDataType>(
Traits::WeakCallbackParameter(this, key, value), WeakCallback,
callback_type);
@@ -461,7 +473,7 @@ class GlobalValueMap : public PersistentValueMapBase<K, V, Traits> {
Traits::kCallbackType == kWeakWithInternalFields
? WeakCallbackType::kInternalFields
: WeakCallbackType::kParameter;
Local<V> value(Local<V>::New(this->isolate(), *persistent));
auto value = Local<V>::New(this->isolate(), *persistent);
persistent->template SetWeak<typename Traits::WeakCallbackDataType>(
Traits::WeakCallbackParameter(this, key, value), OnWeakCallback,
callback_type);
@@ -531,7 +543,6 @@ class StdGlobalValueMap : public GlobalValueMap<K, V, Traits> {
: GlobalValueMap<K, V, Traits>(isolate) {}
};
class DefaultPersistentValueVectorTraits {
public:
typedef std::vector<PersistentContainerValue> Impl;
@@ -556,7 +567,6 @@ class DefaultPersistentValueVectorTraits {
}
};
/**
* A vector wrapper that safely stores Global values.
* C++11 embedders don't need this class, as they can use Global
@@ -567,8 +577,8 @@ class DefaultPersistentValueVectorTraits {
* PersistentContainerValue, with all conversion into and out of V8
* handles being transparently handled by this class.
*/
template<typename V, typename Traits = DefaultPersistentValueVectorTraits>
class PersistentValueVector {
template <typename V, typename Traits = DefaultPersistentValueVectorTraits>
class V8_DEPRECATE_SOON("Use std::vector<Global<V>>.") PersistentValueVector {
public:
explicit PersistentValueVector(Isolate* isolate) : isolate_(isolate) { }
@@ -609,7 +619,8 @@ class PersistentValueVector {
* Retrieve the i-th value in the vector.
*/
Local<V> Get(size_t index) const {
return Local<V>::New(isolate_, FromVal(Traits::Get(&impl_, index)));
return Local<V>::New(isolate_, internal::ValueHelper::SlotAsValue<V>(
FromVal(Traits::Get(&impl_, index))));
}
/**
@@ -619,7 +630,8 @@ class PersistentValueVector {
size_t length = Traits::Size(&impl_);
for (size_t i = 0; i < length; i++) {
Global<V> p;
p.val_ = FromVal(Traits::Get(&impl_, i));
p.slot() =
reinterpret_cast<internal::Address*>(FromVal(Traits::Get(&impl_, i)));
}
Traits::Clear(&impl_);
}
@@ -634,9 +646,9 @@ class PersistentValueVector {
private:
static PersistentContainerValue ClearAndLeak(Global<V>* persistent) {
V* v = persistent->val_;
persistent->val_ = nullptr;
return reinterpret_cast<PersistentContainerValue>(v);
auto slot = persistent->slot();
persistent->Clear();
return reinterpret_cast<PersistentContainerValue>(slot);
}
static V* FromVal(PersistentContainerValue v) {


@@ -17,7 +17,7 @@
namespace v8 {
constexpr uint32_t CurrentValueSerializerFormatVersion() { return 13; }
constexpr uint32_t CurrentValueSerializerFormatVersion() { return 15; }
} // namespace v8


@@ -0,0 +1,316 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_VALUE_SERIALIZER_H_
#define INCLUDE_V8_VALUE_SERIALIZER_H_
#include <stddef.h>
#include <stdint.h>
#include <memory>
#include <utility>
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-maybe.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
namespace v8 {
class ArrayBuffer;
class Isolate;
class Object;
class SharedArrayBuffer;
class String;
class WasmModuleObject;
class Value;
namespace internal {
struct ScriptStreamingData;
class SharedObjectConveyorHandles;
class ValueDeserializer;
class ValueSerializer;
} // namespace internal
/**
* A move-only class for managing the lifetime of shared value conveyors used
* by V8 to keep JS shared values alive in transit when serialized.
*
* This class is not directly constructible and is always passed to a
* ValueSerializer::Delegate via ValueSerializer::SetSharedValueConveyor.
*
* The embedder must not destroy the SharedValueConveyor while the associated
* serialized data may still be deserialized.
*/
class V8_EXPORT SharedValueConveyor final {
public:
SharedValueConveyor(SharedValueConveyor&&) noexcept;
~SharedValueConveyor();
SharedValueConveyor& operator=(SharedValueConveyor&&) noexcept;
private:
friend class internal::ValueSerializer;
friend class internal::ValueDeserializer;
explicit SharedValueConveyor(Isolate* isolate);
std::unique_ptr<internal::SharedObjectConveyorHandles> private_;
};
/**
* Value serialization compatible with the HTML structured clone algorithm.
* The format is backward-compatible (i.e. safe to store to disk).
*/
class V8_EXPORT ValueSerializer {
public:
class V8_EXPORT Delegate {
public:
virtual ~Delegate() = default;
/**
* Handles the case where a DataCloneError would be thrown in the structured
* clone spec. Other V8 embedders may throw some other appropriate exception
* type.
*/
virtual void ThrowDataCloneError(Local<String> message) = 0;
/**
* The embedder overrides this method to enable custom host object filter
* with Delegate::IsHostObject.
*
* This method is called at most once per serializer.
*/
virtual bool HasCustomHostObject(Isolate* isolate);
/**
* The embedder overrides this method to determine if a JS object is a
* host object and needs to be serialized by the host.
*/
virtual Maybe<bool> IsHostObject(Isolate* isolate, Local<Object> object);
/**
* The embedder overrides this method to write some kind of host object, if
* possible. If not, a suitable exception should be thrown and
* Nothing<bool>() returned.
*/
virtual Maybe<bool> WriteHostObject(Isolate* isolate, Local<Object> object);
/**
* Called when the ValueSerializer is going to serialize a
* SharedArrayBuffer object. The embedder must return an ID for the
* object, using the same ID if this SharedArrayBuffer has already been
* serialized in this buffer. When deserializing, this ID will be passed to
* ValueDeserializer::GetSharedArrayBufferFromId as |clone_id|.
*
* If the object cannot be serialized, an
* exception should be thrown and Nothing<uint32_t>() returned.
*/
virtual Maybe<uint32_t> GetSharedArrayBufferId(
Isolate* isolate, Local<SharedArrayBuffer> shared_array_buffer);
virtual Maybe<uint32_t> GetWasmModuleTransferId(
Isolate* isolate, Local<WasmModuleObject> module);
/**
* Called when the first shared value is serialized. All subsequent shared
* values will use the same conveyor.
*
* The embedder must ensure the lifetime of the conveyor matches the
* lifetime of the serialized data.
*
* If the embedder supports serializing shared values, this method should
* return true. Otherwise the embedder should throw an exception and return
* false.
*
* This method is called at most once per serializer.
*/
virtual bool AdoptSharedValueConveyor(Isolate* isolate,
SharedValueConveyor&& conveyor);
/**
* Allocates memory for the buffer of at least the size provided. The actual
* size (which may be greater or equal) is written to |actual_size|. If no
* buffer has been allocated yet, nullptr will be provided.
*
* If the memory cannot be allocated, nullptr should be returned.
* |actual_size| will be ignored. It is assumed that |old_buffer| is still
* valid in this case and has not been modified.
*
* The default implementation uses the stdlib's `realloc()` function.
*/
virtual void* ReallocateBufferMemory(void* old_buffer, size_t size,
size_t* actual_size);
/**
* Frees a buffer allocated with |ReallocateBufferMemory|.
*
* The default implementation uses the stdlib's `free()` function.
*/
virtual void FreeBufferMemory(void* buffer);
};
explicit ValueSerializer(Isolate* isolate);
ValueSerializer(Isolate* isolate, Delegate* delegate);
~ValueSerializer();
/**
* Writes out a header, which includes the format version.
*/
void WriteHeader();
/**
* Serializes a JavaScript value into the buffer.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> WriteValue(Local<Context> context,
Local<Value> value);
/**
* Returns the stored data (allocated using the delegate's
* ReallocateBufferMemory) and its size. This serializer should not be used
* once the buffer is released. The contents are undefined if a previous write
* has failed. Ownership of the buffer is transferred to the caller.
*/
V8_WARN_UNUSED_RESULT std::pair<uint8_t*, size_t> Release();
/**
* Marks an ArrayBuffer as having its contents transferred out of band.
* Pass the corresponding ArrayBuffer in the deserializing context to
* ValueDeserializer::TransferArrayBuffer.
*/
void TransferArrayBuffer(uint32_t transfer_id,
Local<ArrayBuffer> array_buffer);
/**
* Indicate whether to treat ArrayBufferView objects as host objects,
* i.e. pass them to Delegate::WriteHostObject. This should not be
* called when no Delegate was passed.
*
* The default is not to treat ArrayBufferViews as host objects.
*/
void SetTreatArrayBufferViewsAsHostObjects(bool mode);
/**
* Write raw data in various common formats to the buffer.
* Note that integer types are written in base-128 varint format, not with a
* binary copy. For use during an override of Delegate::WriteHostObject.
*/
void WriteUint32(uint32_t value);
void WriteUint64(uint64_t value);
void WriteDouble(double value);
void WriteRawBytes(const void* source, size_t length);
ValueSerializer(const ValueSerializer&) = delete;
void operator=(const ValueSerializer&) = delete;
private:
struct PrivateData;
PrivateData* private_;
};
/**
* Deserializes values from data written with ValueSerializer, or a compatible
* implementation.
*/
class V8_EXPORT ValueDeserializer {
public:
class V8_EXPORT Delegate {
public:
virtual ~Delegate() = default;
/**
* The embedder overrides this method to read some kind of host object, if
* possible. If not, a suitable exception should be thrown and
* MaybeLocal<Object>() returned.
*/
virtual MaybeLocal<Object> ReadHostObject(Isolate* isolate);
/**
* Get a WasmModuleObject given a transfer_id previously provided
* by ValueSerializer::Delegate::GetWasmModuleTransferId
*/
virtual MaybeLocal<WasmModuleObject> GetWasmModuleFromId(
Isolate* isolate, uint32_t transfer_id);
/**
* Get a SharedArrayBuffer given a clone_id previously provided
* by ValueSerializer::Delegate::GetSharedArrayBufferId
*/
virtual MaybeLocal<SharedArrayBuffer> GetSharedArrayBufferFromId(
Isolate* isolate, uint32_t clone_id);
/**
* Get the SharedValueConveyor previously provided by
* ValueSerializer::Delegate::AdoptSharedValueConveyor.
*/
virtual const SharedValueConveyor* GetSharedValueConveyor(Isolate* isolate);
};
ValueDeserializer(Isolate* isolate, const uint8_t* data, size_t size);
ValueDeserializer(Isolate* isolate, const uint8_t* data, size_t size,
Delegate* delegate);
~ValueDeserializer();
/**
* Reads and validates a header (including the format version).
* May, for example, reject an invalid or unsupported wire format.
*/
V8_WARN_UNUSED_RESULT Maybe<bool> ReadHeader(Local<Context> context);
/**
* Deserializes a JavaScript value from the buffer.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Value> ReadValue(Local<Context> context);
/**
* Accepts the array buffer corresponding to the one passed previously to
* ValueSerializer::TransferArrayBuffer.
*/
void TransferArrayBuffer(uint32_t transfer_id,
Local<ArrayBuffer> array_buffer);
/**
* Similar to TransferArrayBuffer, but for SharedArrayBuffer.
* The id is not necessarily in the same namespace as unshared ArrayBuffer
* objects.
*/
void TransferSharedArrayBuffer(uint32_t id,
Local<SharedArrayBuffer> shared_array_buffer);
/**
* Must be called before ReadHeader to enable support for reading the legacy
* wire format (i.e., the format used before this serializer shipped).
*
* Don't use this unless you need to read data written by previous versions of
* blink::ScriptValueSerializer.
*/
void SetSupportsLegacyWireFormat(bool supports_legacy_wire_format);
/**
* Reads the underlying wire format version. This is mostly useful to
* legacy code reading old wire format versions. Must be called after
* ReadHeader.
*/
uint32_t GetWireFormatVersion() const;
/**
* Reads raw data in various common formats to the buffer.
* Note that integer types are read in base-128 varint format, not with a
* binary copy. For use during an override of Delegate::ReadHostObject.
*/
V8_WARN_UNUSED_RESULT bool ReadUint32(uint32_t* value);
V8_WARN_UNUSED_RESULT bool ReadUint64(uint64_t* value);
V8_WARN_UNUSED_RESULT bool ReadDouble(double* value);
V8_WARN_UNUSED_RESULT bool ReadRawBytes(size_t length, const void** data);
ValueDeserializer(const ValueDeserializer&) = delete;
void operator=(const ValueDeserializer&) = delete;
private:
struct PrivateData;
PrivateData* private_;
};
} // namespace v8
#endif // INCLUDE_V8_VALUE_SERIALIZER_H_


@@ -0,0 +1,553 @@
// Copyright 2021 the V8 project authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef INCLUDE_V8_VALUE_H_
#define INCLUDE_V8_VALUE_H_
#include "v8-data.h" // NOLINT(build/include_directory)
#include "v8-internal.h" // NOLINT(build/include_directory)
#include "v8-local-handle.h" // NOLINT(build/include_directory)
#include "v8-maybe.h" // NOLINT(build/include_directory)
#include "v8config.h" // NOLINT(build/include_directory)
/**
* The v8 JavaScript engine.
*/
namespace v8 {
class BigInt;
class Int32;
class Integer;
class Number;
class Object;
class String;
class Uint32;
/**
* The superclass of all JavaScript values and objects.
*/
class V8_EXPORT Value : public Data {
public:
/**
* Returns true if this value is the undefined value. See ECMA-262
* 4.3.10.
*
* This is equivalent to `value === undefined` in JS.
*/
V8_INLINE bool IsUndefined() const;
/**
* Returns true if this value is the null value. See ECMA-262
* 4.3.11.
*
* This is equivalent to `value === null` in JS.
*/
V8_INLINE bool IsNull() const;
/**
* Returns true if this value is either the null or the undefined value.
* See ECMA-262
* 4.3.11 and 4.3.12.
*
* This is equivalent to `value == null` in JS.
*/
V8_INLINE bool IsNullOrUndefined() const;
/**
* Returns true if this value is true.
*
* This is not the same as `BooleanValue()`. The latter performs a
* conversion to boolean, i.e. the result of `Boolean(value)` in JS, whereas
* this checks `value === true`.
*/
bool IsTrue() const;
/**
* Returns true if this value is false.
*
* This is not the same as `!BooleanValue()`. The latter performs a
* conversion to boolean, i.e. the result of `!Boolean(value)` in JS, whereas
* this checks `value === false`.
*/
bool IsFalse() const;
/**
* Returns true if this value is a symbol or a string.
*
* This is equivalent to
* `typeof value === 'string' || typeof value === 'symbol'` in JS.
*/
bool IsName() const;
/**
* Returns true if this value is an instance of the String type.
* See ECMA-262 8.4.
*
* This is equivalent to `typeof value === 'string'` in JS.
*/
V8_INLINE bool IsString() const;
/**
* Returns true if this value is a symbol.
*
* This is equivalent to `typeof value === 'symbol'` in JS.
*/
bool IsSymbol() const;
/**
* Returns true if this value is a function.
*
* This is equivalent to `typeof value === 'function'` in JS.
*/
bool IsFunction() const;
/**
* Returns true if this value is an array. Note that it will return false for
* a Proxy for an array.
*/
bool IsArray() const;
/**
* Returns true if this value is an object.
*/
bool IsObject() const;
/**
* Returns true if this value is a bigint.
*
* This is equivalent to `typeof value === 'bigint'` in JS.
*/
bool IsBigInt() const;
/**
* Returns true if this value is boolean.
*
* This is equivalent to `typeof value === 'boolean'` in JS.
*/
bool IsBoolean() const;
/**
* Returns true if this value is a number.
*
* This is equivalent to `typeof value === 'number'` in JS.
*/
bool IsNumber() const;
/**
* Returns true if this value is an `External` object.
*/
bool IsExternal() const;
/**
* Returns true if this value is a 32-bit signed integer.
*/
bool IsInt32() const;
/**
* Returns true if this value is a 32-bit unsigned integer.
*/
bool IsUint32() const;
/**
* Returns true if this value is a Date.
*/
bool IsDate() const;
/**
* Returns true if this value is an Arguments object.
*/
bool IsArgumentsObject() const;
/**
* Returns true if this value is a BigInt object.
*/
bool IsBigIntObject() const;
/**
* Returns true if this value is a Boolean object.
*/
bool IsBooleanObject() const;
/**
* Returns true if this value is a Number object.
*/
bool IsNumberObject() const;
/**
* Returns true if this value is a String object.
*/
bool IsStringObject() const;
/**
* Returns true if this value is a Symbol object.
*/
bool IsSymbolObject() const;
/**
* Returns true if this value is a NativeError.
*/
bool IsNativeError() const;
/**
* Returns true if this value is a RegExp.
*/
bool IsRegExp() const;
/**
* Returns true if this value is an async function.
*/
bool IsAsyncFunction() const;
/**
* Returns true if this value is a Generator function.
*/
bool IsGeneratorFunction() const;
/**
* Returns true if this value is a Generator object (iterator).
*/
bool IsGeneratorObject() const;
/**
* Returns true if this value is a Promise.
*/
bool IsPromise() const;
/**
* Returns true if this value is a Map.
*/
bool IsMap() const;
/**
* Returns true if this value is a Set.
*/
bool IsSet() const;
/**
* Returns true if this value is a Map Iterator.
*/
bool IsMapIterator() const;
/**
* Returns true if this value is a Set Iterator.
*/
bool IsSetIterator() const;
/**
* Returns true if this value is a WeakMap.
*/
bool IsWeakMap() const;
/**
* Returns true if this value is a WeakSet.
*/
bool IsWeakSet() const;
/**
* Returns true if this value is a WeakRef.
*/
bool IsWeakRef() const;
/**
* Returns true if this value is an ArrayBuffer.
*/
bool IsArrayBuffer() const;
/**
* Returns true if this value is an ArrayBufferView.
*/
bool IsArrayBufferView() const;
/**
* Returns true if this value is one of TypedArrays.
*/
bool IsTypedArray() const;
/**
* Returns true if this value is an Uint8Array.
*/
bool IsUint8Array() const;
/**
* Returns true if this value is an Uint8ClampedArray.
*/
bool IsUint8ClampedArray() const;
/**
* Returns true if this value is an Int8Array.
*/
bool IsInt8Array() const;
/**
* Returns true if this value is an Uint16Array.
*/
bool IsUint16Array() const;
/**
* Returns true if this value is an Int16Array.
*/
bool IsInt16Array() const;
/**
* Returns true if this value is an Uint32Array.
*/
bool IsUint32Array() const;
/**
* Returns true if this value is an Int32Array.
*/
bool IsInt32Array() const;
/**
* Returns true if this value is a Float32Array.
*/
bool IsFloat32Array() const;
/**
* Returns true if this value is a Float64Array.
*/
bool IsFloat64Array() const;
/**
* Returns true if this value is a BigInt64Array.
*/
bool IsBigInt64Array() const;
/**
* Returns true if this value is a BigUint64Array.
*/
bool IsBigUint64Array() const;
/**
* Returns true if this value is a DataView.
*/
bool IsDataView() const;
/**
* Returns true if this value is a SharedArrayBuffer.
*/
bool IsSharedArrayBuffer() const;
/**
* Returns true if this value is a JavaScript Proxy.
*/
bool IsProxy() const;
/**
* Returns true if this value is a WasmMemoryObject.
*/
bool IsWasmMemoryObject() const;
/**
* Returns true if this value is a WasmModuleObject.
*/
bool IsWasmModuleObject() const;
/**
* Returns true if this value is the WasmNull object.
*/
bool IsWasmNull() const;
/**
* Returns true if the value is a Module Namespace Object.
*/
bool IsModuleNamespaceObject() const;
/**
* Perform the equivalent of `BigInt(value)` in JS.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<BigInt> ToBigInt(
Local<Context> context) const;
/**
* Perform the equivalent of `Number(value)` in JS.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Number> ToNumber(
Local<Context> context) const;
/**
* Perform the equivalent of `String(value)` in JS.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<String> ToString(
Local<Context> context) const;
/**
* Provide a string representation of this value usable for debugging.
* This operation has no observable side effects and will succeed
* unless e.g. execution is being terminated.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<String> ToDetailString(
Local<Context> context) const;
/**
* Perform the equivalent of `Object(value)` in JS.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Object> ToObject(
Local<Context> context) const;
/**
* Perform the equivalent of `Number(value)` in JS and convert the result
* to an integer. Negative values are rounded up, positive values are rounded
* down. NaN is converted to 0. Infinite values yield undefined results.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Integer> ToInteger(
Local<Context> context) const;
/**
* Perform the equivalent of `Number(value)` in JS and convert the result
* to an unsigned 32-bit integer by performing the steps in
* https://tc39.es/ecma262/#sec-touint32.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Uint32> ToUint32(
Local<Context> context) const;
/**
* Perform the equivalent of `Number(value)` in JS and convert the result
* to a signed 32-bit integer by performing the steps in
* https://tc39.es/ecma262/#sec-toint32.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Int32> ToInt32(Local<Context> context) const;
/**
* Perform the equivalent of `Boolean(value)` in JS. This can never fail.
*/
Local<Boolean> ToBoolean(Isolate* isolate) const;
/**
* Attempts to convert a string to an array index.
* Returns an empty handle if the conversion fails.
*/
V8_WARN_UNUSED_RESULT MaybeLocal<Uint32> ToArrayIndex(
Local<Context> context) const;
/** Returns the equivalent of `ToBoolean()->Value()`. */
bool BooleanValue(Isolate* isolate) const;
/** Returns the equivalent of `ToNumber()->Value()`. */
V8_WARN_UNUSED_RESULT Maybe<double> NumberValue(Local<Context> context) const;
/** Returns the equivalent of `ToInteger()->Value()`. */
V8_WARN_UNUSED_RESULT Maybe<int64_t> IntegerValue(
Local<Context> context) const;
/** Returns the equivalent of `ToUint32()->Value()`. */
V8_WARN_UNUSED_RESULT Maybe<uint32_t> Uint32Value(
Local<Context> context) const;
/** Returns the equivalent of `ToInt32()->Value()`. */
V8_WARN_UNUSED_RESULT Maybe<int32_t> Int32Value(Local<Context> context) const;
/** JS == */
V8_WARN_UNUSED_RESULT Maybe<bool> Equals(Local<Context> context,
Local<Value> that) const;
bool StrictEquals(Local<Value> that) const;
bool SameValue(Local<Value> that) const;
template <class T>
V8_INLINE static Value* Cast(T* value) {
return static_cast<Value*>(value);
}
Local<String> TypeOf(Isolate*);
Maybe<bool> InstanceOf(Local<Context> context, Local<Object> object);
private:
V8_INLINE bool QuickIsUndefined() const;
V8_INLINE bool QuickIsNull() const;
V8_INLINE bool QuickIsNullOrUndefined() const;
V8_INLINE bool QuickIsString() const;
bool FullIsUndefined() const;
bool FullIsNull() const;
bool FullIsString() const;
static void CheckCast(Data* that);
};
template <>
V8_INLINE Value* Value::Cast(Data* value) {
#ifdef V8_ENABLE_CHECKS
CheckCast(value);
#endif
return static_cast<Value*>(value);
}
bool Value::IsUndefined() const {
#ifdef V8_ENABLE_CHECKS
return FullIsUndefined();
#else
return QuickIsUndefined();
#endif
}
bool Value::QuickIsUndefined() const {
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
#if V8_STATIC_ROOTS_BOOL
return I::is_identical(obj, I::StaticReadOnlyRoot::kUndefinedValue);
#else
if (!I::HasHeapObjectTag(obj)) return false;
if (I::GetInstanceType(obj) != I::kOddballType) return false;
return (I::GetOddballKind(obj) == I::kUndefinedOddballKind);
#endif // V8_STATIC_ROOTS_BOOL
}
bool Value::IsNull() const {
#ifdef V8_ENABLE_CHECKS
return FullIsNull();
#else
return QuickIsNull();
#endif
}
bool Value::QuickIsNull() const {
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
#if V8_STATIC_ROOTS_BOOL
return I::is_identical(obj, I::StaticReadOnlyRoot::kNullValue);
#else
if (!I::HasHeapObjectTag(obj)) return false;
if (I::GetInstanceType(obj) != I::kOddballType) return false;
return (I::GetOddballKind(obj) == I::kNullOddballKind);
#endif // V8_STATIC_ROOTS_BOOL
}
bool Value::IsNullOrUndefined() const {
#ifdef V8_ENABLE_CHECKS
return FullIsNull() || FullIsUndefined();
#else
return QuickIsNullOrUndefined();
#endif
}
bool Value::QuickIsNullOrUndefined() const {
#if V8_STATIC_ROOTS_BOOL
return QuickIsNull() || QuickIsUndefined();
#else
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
if (!I::HasHeapObjectTag(obj)) return false;
if (I::GetInstanceType(obj) != I::kOddballType) return false;
int kind = I::GetOddballKind(obj);
return kind == I::kNullOddballKind || kind == I::kUndefinedOddballKind;
#endif // V8_STATIC_ROOTS_BOOL
}
bool Value::IsString() const {
#ifdef V8_ENABLE_CHECKS
return FullIsString();
#else
return QuickIsString();
#endif
}
bool Value::QuickIsString() const {
using A = internal::Address;
using I = internal::Internals;
A obj = internal::ValueHelper::ValueAsAddress(this);
if (!I::HasHeapObjectTag(obj)) return false;
#if V8_STATIC_ROOTS_BOOL && !V8_MAP_PACKING
return I::CheckInstanceMapRange(obj, I::StaticReadOnlyRoot::kFirstStringMap,
I::StaticReadOnlyRoot::kLastStringMap);
#else
return (I::GetInstanceType(obj) < I::kFirstNonstringType);
#endif // V8_STATIC_ROOTS_BOOL
}
} // namespace v8
#endif // INCLUDE_V8_VALUE_H_


@@ -8,10 +8,10 @@
// These macros define the version number for the current version.
// NOTE these macros are used by some of the tool scripts and the build
// system so their names cannot be changed without changing the scripts.
#define V8_MAJOR_VERSION 9
#define V8_MINOR_VERSION 1
#define V8_BUILD_NUMBER 269
#define V8_PATCH_LEVEL 0
#define V8_MAJOR_VERSION 11
#define V8_MINOR_VERSION 6
#define V8_BUILD_NUMBER 189
#define V8_PATCH_LEVEL 22
// Use 1 for candidates and 0 otherwise.
// (Boolean macro values are not supported by all preprocessors.)

Some files were not shown because too many files have changed in this diff.