Mainstream astrophysics is couched in Newtonian concepts, some of which have no well defined extension to general relativity. In order to provide a sound basis for relativistic astrophysics, it is crucial to develop general relativistic concepts which have well defined and useful Newtonian limits. Mass and radiation flux are fundamental in this regard. The results of characteristic codes show that the energy of a radiating system can be evaluated rigorously and accurately according to the rules for asymptotically flat spacetimes, while avoiding the deficiencies that plagued the “prenumerical” era of relativity: (i) the use of coordinate-dependent concepts such as gravitational energy-momentum pseudotensors; (ii) a rather loose notion of asymptotic flatness, particularly for radiative spacetimes; (iii) the appearance of divergent integrals; and (iv) the use of approximation formalisms, such as weak field or slow motion expansions, whose errors have not been rigorously estimated.
Characteristic codes have extended the role of the Bondi mass from that of a geometrical construct in the theory of isolated systems to that of a highly accurate computational tool.
The Bondi mass loss formula provides an important global check on the preservation of the Bianchi identities. The mass loss rates themselves have important astrophysical significance.
The numerical results demonstrate that computational approaches, rigorously based upon the geometrical definition of mass in general relativity, can be used to calculate radiation losses in highly nonlinear processes where perturbation calculations would not be meaningful.
Numerical calculation of the Bondi mass has been used to explore both the Newtonian and the strong field limits of general relativity [
100]
. For a quasi-Newtonian system of radiating dust, the numerical calculation joins smoothly onto a post-Newtonian expansion of the energy in powers of
$1/c$
, beginning with the Newtonian mass and mechanical energy as the leading terms. This comparison with perturbation theory has been carried out to
$\mathcal{O}(1/{c}^{7})$
, at which stage the computed Bondi mass peels away from the post-Newtonian expansion. It remains strictly positive, in contrast to the truncated post-Newtonian behavior which leads to negative values.
A subtle feature of the Bondi mass stems from its role as one component of the total energy-momentum 4-vector, whose calculation requires identification of the translation subgroup of the Bondi–Metzner–Sachs group [
183]
. This introduces boost freedom into the problem. Identifying the translation subgroup is tantamount to knowing the conformal transformation to a standard Bondi frame [
209]
in which the time slices of
$\mathcal{I}$
have unit sphere geometry. Both Stewart's code and the Pittsburgh code adapt the coordinates to simplify the description of the interior sources. This results in a nonstandard foliation of
$\mathcal{I}$
. The determination of the conformal factor which relates the 2-metric
${h}_{AB}$
of a slice of
$\mathcal{I}$
to the unit sphere metric is an elliptic problem equivalent to solving the second order partial differential equation for the conformal transformation of Gaussian curvature. In the axisymmetric case, the PDE reduces to an ODE with respect to the angle
$\theta $
, which is straightforward to solve [
100]
.
The integration constants determine the boost freedom along the axis of symmetry.
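The elliptic problem can be made concrete with a minimal sketch. In two dimensions, a conformal rescaling $q_{AB}=e^{2\omega}h_{AB}$ transforms the Gaussian curvature by $\mathcal{K}_{q}e^{2\omega}=\mathcal{K}_{h}-\Delta_{h}\omega$, so demanding unit curvature gives $\Delta_{h}\omega=\mathcal{K}_{h}-e^{2\omega}$, which in axisymmetry is an ODE in $\theta$. The sketch below is not either group's code; as an illustrative assumption it takes the background $h_{AB}$ to be the unit sphere metric itself, for which the exact solution is $\omega=0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: a conformal rescaling q_AB = exp(2*w) h_AB changes the Gaussian
# curvature by  K_q * exp(2*w) = K_h - Lap_h(w), so unit curvature requires
#   Lap_h(w) = K_h - exp(2*w).
# Illustrative assumption: h is itself the unit-sphere metric, so
# Lap_h(w) = w'' + cot(theta) w' and K_h = 1; the exact solution is w = 0.

def rhs(theta, y):
    w, dw = y
    return [dw, (1.0 - np.exp(2.0 * w)) - dw / np.tan(theta)]

# Regularity at the pole fixes w'(0) = 0; the remaining integration
# constants correspond to the boost freedom along the symmetry axis.
eps = 1.0e-3
sol = solve_ivp(rhs, [eps, np.pi - eps], [0.0, 0.0], rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[0])))  # stays at the exact solution w = 0
```

For a nontrivial curvature profile $\mathcal{K}(\theta)$ one would instead shoot on the two integration constants to enforce regularity at both poles.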
The nonaxisymmetric case is more complicated. Stewart [
203]
proposes an approach based upon the dyad decomposition
$$\begin{array}{c}{h}_{AB}d{x}^{A}d{x}^{B}={m}_{A}d{x}^{A}{\overline{m}}_{B}d{x}^{B}.\end{array}$$ 
(19)

The desired conformal transformation is obtained by first relating
${h}_{AB}$
conformally to the flat metric of the complex plane. Denoting the complex coordinate of the plane by
$\zeta $
, this relationship can be expressed as
$d\zeta ={e}^{f}{m}_{A}d{x}^{A}$
. The conformal factor
$f$
can then be determined from the integrability condition
$$\begin{array}{c}{m}_{[A}{\partial}_{B]}f={\partial}_{[A}{m}_{B]}.\end{array}$$ 
(20)

This is equivalent to the classic Beltrami equation for finding isothermal coordinates. It would appear to be a more effective scheme than tackling the second order PDE directly, but numerical implementation has not yet been carried out.
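The integrability condition (20) can be checked symbolically in the one case where the isothermal coordinate is known in closed form: the round sphere, with $\zeta=\tan(\theta/2)e^{i\phi}$. The dyad normalization $m_{A}dx^{A}=d\theta+i\sin\theta\,d\phi$ used below is an assumption made for this sketch; matching $d\zeta=e^{f}m_{A}dx^{A}$ then gives $f=\log(\tfrac{1}{2}\sec^{2}(\theta/2))+i\phi$.

```python
import sympy as sp

# Symbolic check (a sketch, not Stewart's implementation) of the
# integrability condition (20):  m_[A d_B] f = d_[A m_B].
# Assumed dyad for the round sphere: m_A dx^A = d(theta) + I sin(theta) d(phi);
# the known isothermal coordinate zeta = tan(theta/2) exp(I*phi) gives
# f = log(sec(theta/2)**2 / 2) + I*phi.
th, ph = sp.symbols('theta phi', real=True)
m = [sp.Integer(1), sp.I * sp.sin(th)]                 # (m_theta, m_phi)
f = sp.log(sp.sec(th / 2)**2 / 2) + sp.I * ph

lhs = (m[0] * sp.diff(f, ph) - m[1] * sp.diff(f, th)) / 2   # m_[A d_B] f
rhs = (sp.diff(m[1], th) - sp.diff(m[0], ph)) / 2           # d_[A m_B]

# Rewriting in terms of tan(theta/2) turns the residual into a rational
# expression that cancels identically.
residual = sp.simplify((lhs - rhs).rewrite(sp.tan))
print(residual)
```

The residual vanishes identically, confirming that $f$ solves Equation (20) for this dyad.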
3.5 3D characteristic evolution
There has been rapid progress in 3D characteristic evolution. There are now two independent 3D codes, one developed at Canberra and the other at Pittsburgh (the PITT code), which have the capability to study gravitational waves in single black hole spacetimes, at a level still not mastered by Cauchy codes. Several years ago the Pittsburgh group established robust stability and second order accuracy of a fully nonlinear 3D code which calculates waveforms at null infinity [
42,
31]
and also tracks a dynamical black hole and excises its internal singularity from the computational grid [
105,
102]
. The Canberra group has implemented an independent nonlinear 3D code which accurately evolves the exterior region of a Schwarzschild black hole. Both codes pose data on an initial null hypersurface and on a worldtube boundary, and evolve the exterior spacetime out to a compactified version of null infinity, where the waveform is computed. However, there are essential differences in the underlying geometrical formalisms and numerical techniques used in the two codes and in their success in evolving generic black hole spacetimes.
3.5.1 Geometrical formalism
The PITT code uses a standard Bondi–Sachs null coordinate system,
$$\begin{array}{c}d{s}^{2}=-\left({e}^{2\beta}\frac{V}{r}-{r}^{2}{h}_{AB}{U}^{A}{U}^{B}\right)d{u}^{2}-2{e}^{2\beta}du\phantom{\rule{0.2em}{0ex}}dr-2{r}^{2}{h}_{AB}{U}^{B}du\phantom{\rule{0.2em}{0ex}}d{x}^{A}+{r}^{2}{h}_{AB}d{x}^{A}d{x}^{B},\end{array}$$ 
(21)

where
$\det \left({h}_{AB}\right)=\det \left({q}_{AB}\right)$
for some standard choice
${q}_{AB}$
of the unit sphere metric. This generalizes Equation ( 13 ) to the three-dimensional case. The hypersurface equations derive from the
${{G}_{\mu}}^{\nu}{\nabla}_{\nu}u$
components of the Einstein tensor. They take the explicit form
$$\begin{array}{ccc}{\beta}_{,r}& =& \frac{1}{16}r{h}^{AC}{h}^{BD}{h}_{AB,r}{h}_{CD,r},\end{array}$$ 
(22)

$$\begin{array}{ccc}{\left({r}^{4}{e}^{-2\beta}{h}_{AB}{U}_{,r}^{B}\right)}_{,r}& =& 2{r}^{4}{\left({r}^{-2}{\beta}_{,A}\right)}_{,r}-{r}^{2}{h}^{BC}{D}_{C}\left({h}_{AB,r}\right),\end{array}$$ 
(23)

$$\begin{array}{ccc}2{e}^{-2\beta}{V}_{,r}& =& \mathcal{R}-2{D}^{A}{D}_{A}\beta -2\left({D}^{A}\beta \right){D}_{A}\beta +{r}^{-2}{e}^{-2\beta}{D}_{A}\left({\left({r}^{4}{U}^{A}\right)}_{,r}\right)\end{array}$$  
$$\begin{array}{ccc}& & -\frac{1}{2}{r}^{4}{e}^{-4\beta}{h}_{AB}{U}_{,r}^{A}{U}_{,r}^{B},\end{array}$$ 
(24)

where
${D}_{A}$
is the covariant derivative and
$\mathcal{R}$
the curvature scalar of the conformal 2metric
${h}_{AB}$
of the
$r=const.$
surfaces, and capital Latin indices are raised and lowered with
${h}_{AB}$
. Given the null data
${h}_{AB}$
on an outgoing null hypersurface, this hierarchy of equations can be integrated radially in order to determine
$\beta $
,
${U}^{A}$
and
$V$
on the hypersurface in terms of integration constants on an inner boundary. The evolution equations for the
$u$
derivative of the null data derive from the tracefree part of the angular components of the Einstein tensor, i.e. the components
${m}^{A}{m}^{B}{G}_{AB}$
where
${h}^{AB}=2{m}^{(A}{\overline{m}}^{B)}$
. They take the explicit form
$$\begin{array}{ccc}{m}^{A}{m}^{B}(& & (r{h}_{AB,u}{)}_{,r}-\frac{1}{2r}(rV{h}_{AB,r}{)}_{,r}-\frac{2}{r}{e}^{\beta}{D}_{A}{D}_{B}{e}^{\beta}+r{h}_{AC}{D}_{B}\left({U}_{,r}^{C}\right)\end{array}$$  
$$\begin{array}{ccc}& & -\frac{{r}^{3}}{2}{e}^{-2\beta}{h}_{AC}{h}_{BD}{U}_{,r}^{C}{U}_{,r}^{D}+2{D}_{A}{U}_{B}+\frac{r}{2}{h}_{AB,r}{D}_{C}{U}^{C}\end{array}$$  
$$\begin{array}{ccc}& & +r{U}^{C}{D}_{C}\left({h}_{AB,r}\right)+r{h}_{AD,r}{h}^{CD}({D}_{B}{U}_{C}-{D}_{C}{U}_{B}))=0.\end{array}$$ 
(25)

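For the simplest equation in the hierarchy, the radial march can be sketched in a few lines. The diagonal, unit-determinant datum $h_{AB}=\mathrm{diag}(e^{2\gamma},e^{-2\gamma})$ and the profile $\gamma=a/r$ are illustrative assumptions, not data from either code; with them Equation (22) collapses to $\beta_{,r}=\tfrac{r}{2}\gamma_{,r}^{2}$, which has a closed-form integral to check against.

```python
import numpy as np

# Sketch of the radial hypersurface hierarchy: for the (assumed) diagonal,
# unit-determinant null datum  h_AB = diag(exp(2*g), exp(-2*g)),  the first
# hypersurface equation (22) collapses to  beta_,r = (r/2) * (g_,r)**2,
# integrated outward from the inner worldtube r0 where the integration
# constant beta(r0) is prescribed.
r0, a = 2.0, 0.1                     # worldtube radius, datum amplitude
r = np.linspace(r0, 50.0, 4001)
g_r = -a / r**2                      # radial derivative of g = a/r
beta_r = 0.5 * r * g_r**2
beta = np.concatenate(
    [[0.0], np.cumsum(0.5 * np.diff(r) * (beta_r[1:] + beta_r[:-1]))])

# Closed form for this datum: beta = (a**2/4) * (1/r0**2 - 1/r**2)
beta_exact = 0.25 * a**2 * (1.0 / r0**2 - 1.0 / r**2)
print(np.max(np.abs(beta - beta_exact)))  # small trapezoid-rule error
```

In the full scheme the outputs $\beta$, $U^A$, $V$ of this march feed the evolution equation (25) for the next retarded time step.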
The Canberra code employs a null quasi-spherical (NQS) gauge (not to be confused with the quasi-spherical approximation in which quadratically aspherical terms are ignored [
42]
). The NQS gauge takes advantage of the possibility of mapping the angular part of the Bondi metric conformally onto a unit sphere metric, so that
${h}_{AB}\to {q}_{AB}$
. The required transformation
${x}^{A}\to {y}^{A}(u,r,{x}^{A})$
is in general dependent upon
$u$
and
$r$
so that the NQS angular coordinates
${y}^{A}$
are not constant along the outgoing null rays, unlike the Bondi–Sachs angular coordinates. Instead the coordinates
${y}^{A}$
display the analogue of a shift on the null hypersurfaces
$u=const$
. In addition, the NQS spheres
$(u,r)=const.$
are not the same as the Bondi spheres. The radiation content of the metric is contained in a shear vector describing this shift. This results in the description of the radiation in terms of a spin-weight 1 field, rather than the spin-weight 2 field associated with
${h}_{AB}$
in the Bondi–Sachs formalism. In both the Bondi–Sachs and NQS gauges, the independent gravitational data on a null hypersurface is the conformal part of its degenerate 3-metric. The Bondi–Sachs null data consists of
${h}_{AB}$
, which determines the intrinsic conformal metric of the null hypersurface. In the NQS case,
${h}_{AB}={q}_{AB}$
and the shear vector comprises the only nontrivial part of the conformal 3-metric. Both the Bondi–Sachs and NQS gauges can be arranged to coincide in the special case of shear-free Robinson–Trautman metrics [
71,
20]
.
The formulation of Einstein's equations in the NQS gauge is presented in [
19]
, and the associated gauge freedom arising from
$(u,r)$
dependent rotation and boosts of the unit sphere is discussed in [
20]
. As in the PITT code, the main equations involve integrating a hierarchy of hypersurface equations along the radial null geodesics extending from the inner boundary to null infinity. In the NQS gauge the source terms for these radial ODEs are rather simple when the unknowns are chosen to be the connection coefficients. However, as the price of this simplicity, after the radial integrations are performed on each null hypersurface a first order elliptic equation must be solved on each
$r=const.$
cross-section to reconstruct the underlying metric.
3.5.2 Numerical methodology
The PITT code is an explicit second order finite difference evolution algorithm based upon retarded time steps on a uniform three-dimensional null coordinate grid. The straightforward numerical approach and the second order convergence of the finite difference equations have facilitated code development. The Canberra code uses an assortment of novel and elegant numerical methods.
Most of these involve smoothing or filtering and have an obvious advantage for removing short wavelength noise, but they would be unsuitable for modeling shocks.
Both codes require the ability to handle tensor fields and their derivatives on the sphere.
Spherical coordinates and spherical harmonics are natural analytic tools for the description of radiation, but their implementation in computational work requires dealing with the impossibility of smoothly covering the sphere with a single coordinate grid. Polar coordinate singularities in axisymmetric systems can be regularized by standard tricks. In the absence of symmetry, these techniques do not generalize and would be especially prohibitive to develop for tensor fields.
A crucial ingredient of the PITT code is the
eth module [
104]
which incorporates a computational version of the Newman–Penrose eth formalism [
159]
. The eth module covers the sphere with two overlapping stereographic coordinate grids (North and South). It provides everywhere regular, second order accurate, finite difference expressions for tensor fields on the sphere and their covariant derivatives. The eth calculus simplifies the underlying equations, avoids spurious coordinate singularities and allows accurate differentiation of tensor fields on the sphere in a computationally efficient and clean way. Its main weakness is the numerical noise introduced by interpolations (fourth order accurate) between the North and South patches. For parabolic or elliptic equations on the sphere, the finite difference approach of the eth calculus would be less efficient than a spectral approach, but no parabolic or elliptic equations appear in the Bondi–Sachs evolution scheme.
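The two-patch covering can be sketched in a few lines. The convention below, $\zeta_N=\tan(\theta/2)e^{i\phi}$ from the North pole with the South coordinate obtained as $\zeta_S=1/\bar\zeta_N$, is one common choice assumed here for illustration; the actual patch overlap and interpolation stencils of the PITT code are not reproduced.

```python
import numpy as np

# Sketch of a two-patch stereographic covering of the sphere.  Assumed
# convention: zeta_N = tan(theta/2) e^{i phi} projected from the North pole,
# zeta_S = 1/conj(zeta_N) from the South pole.  Each patch is kept on
# |zeta| <= 1 + overlap, so every point is regular on at least one patch.
def to_north(theta, phi):
    return np.tan(theta / 2.0) * np.exp(1j * phi)

def north_to_south(zeta_n):
    return 1.0 / np.conj(zeta_n)

theta = np.array([0.2, np.pi / 2, np.pi - 0.2])  # near N pole, equator, near S pole
zn = to_north(theta, 0.3)
zs = north_to_south(zn)
print(np.abs(zn))   # grows without bound toward the South pole ...
print(np.abs(zs))   # ... where the South patch coordinate stays small
```

Interpolation between the patches in the equatorial overlap band, where both $|\zeta_N|$ and $|\zeta_S|$ are of order unity, is the step that introduces the numerical noise mentioned above.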
The Canberra code handles fields on the sphere by means of a threefold representation: (i) as discretized functions on a spherical grid uniformly spaced in standard
$(\theta ,\phi )$
coordinates, (ii) as fast Fourier transforms with respect to
$(\theta ,\phi )$
(based upon a smooth map of the torus onto the sphere), and (iii) as a spectral decomposition of scalar, vector, and tensor fields in terms of spin-weighted spherical harmonics. The grid values are used in carrying out nonlinear algebraic operations; the Fourier representation is used to calculate
$(\theta ,\phi )$
derivatives; and the spherical harmonic representation is used to solve global problems, such as the solution of the first order elliptic equation for the reconstruction of the metric, whose unique solution requires pinning down the
$\ell =1$
gauge freedom. The sizes of the grid and of the Fourier and spherical harmonic representations are coordinated. In practice, the spherical harmonic expansion is carried out to 15th order in
$\ell $
, but the resulting coefficients must then be projected into the
$\ell \le 10$
subspace in order to avoid inconsistencies between the spherical harmonic, grid, and Fourier representations.
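The Fourier route to angular derivatives can be sketched for the simplest case, the $\phi$-derivative of a band-limited function on a uniform grid; the smooth torus map and the spin-weighted harmonic layer of the Canberra code are omitted here.

```python
import numpy as np

# Sketch of the Fourier representation used for angular derivatives: on a
# uniform phi grid, the phi-derivative of band-limited data is exact (to
# round-off), obtained by multiplying each Fourier mode by i*m.
n = 32
phi = 2.0 * np.pi * np.arange(n) / n
f = np.sin(3.0 * phi)

m = np.fft.fftfreq(n, d=1.0 / n)          # integer mode numbers
df = np.real(np.fft.ifft(1j * m * np.fft.fft(f)))

print(np.max(np.abs(df - 3.0 * np.cos(3.0 * phi))))  # round-off level
```

The consistency issue described above arises because nonlinear grid-space products generate modes beyond the retained band, which is why the harmonic coefficients must be projected back into a smaller subspace.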
The Canberra code solves the null hypersurface equations by combining an 8th order Runge–Kutta integration with a convolution spline to interpolate field values. The radial grid points are dynamically positioned to approximate ingoing null geodesics, a technique originally due to Goldwirth and Piran [
95]
to avoid the problems with a uniform
$r$
grid near a horizon which arise from the degeneracy of an areal coordinate on a stationary horizon. The time evolution uses the method of lines with a fourth order Runge–Kutta integrator, which introduces further high frequency filtering.
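The method-of-lines structure named here can be sketched generically; the toy problem below (a 1D advection equation with a first-order upwind difference) is an assumption for illustration and is not the Canberra discretization, but it uses the same classical fourth-order Runge–Kutta time integrator.

```python
import numpy as np

# Minimal method-of-lines sketch (not the Canberra scheme): advect
# u_t + u_x = 0 with a periodic upwind spatial difference and a classical
# 4th-order Runge-Kutta time step.
n, L = 200, 1.0
dx = L / n
x = dx * np.arange(n)
u = np.exp(-100.0 * (x - 0.3)**2)        # initial Gaussian pulse

def rhs(u):
    return -(u - np.roll(u, 1)) / dx     # periodic upwind d/dx

dt = 0.4 * dx                            # CFL-limited step
for _ in range(int(round(0.2 / dt))):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0

# The pulse has moved right by ~0.2 (first-order upwinding adds smearing,
# an analogue of the high frequency filtering mentioned in the text).
print(x[np.argmax(u)])
```

The dissipative truncation error of the low-order spatial stencil illustrates, in miniature, how a scheme can damp high frequency modes as a side effect of its discretization.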
3.5.3 Stability

PITT code
Analytic stability analysis of the finite difference equations has been crucial in the development of a stable evolution algorithm, subject to the standard Courant–Friedrichs–Lewy (CFL) condition for an explicit code. Linear stability analysis on Minkowski and Schwarzschild backgrounds showed that certain field variables must be represented on the half-grid [106, 42].
Nonlinear stability analysis was essential in revealing and curing a mode coupling instability that was not present in the original axisymmetric version of the code [
31,
142]
. This has led to a code whose stability persists even in the regime that the
$u$
direction, along which the grid flows, becomes spacelike, such as outside the velocity of light cone in a rotating coordinate system. Severe tests were used to verify stability. In the linear regime, robust stability was established by imposing random initial data on the initial characteristic hypersurface and random constraint violating boundary data on an inner worldtube. (Robust stability was later adopted as one of the standardized tests for Cauchy codes [
5]
.) The code ran stably for 10,000 grid crossing times under these conditions [
42]
. The PITT code was the first 3D general relativistic code to pass this robust stability test. The use of random data is only possible in sufficiently weak cases where effective energy terms quadratic in the field gradients are not dominant. Stability in the highly nonlinear regime was tested in two ways. Runs for a time of
$60,000M$
were carried out for a moving, distorted Schwarzschild black hole (of mass
$M$
), with the marginally trapped surface at the inner boundary tracked and its interior excised from the computational grid [
102,
103]
. This remains one of the longest simulations of a dynamic black hole carried out to date. Furthermore, the scattering of a gravitational wave off a Schwarzschild black hole was successfully carried out in the extreme nonlinear regime where the backscattered Bondi news was as large as
$N=400$
(in dimensionless geometric units) [
31]
, showing that the code can cope with the enormous power output
${N}^{2}{c}^{5}/G\approx {10}^{60}\phantom{\rule{0.3em}{0ex}}\mathrm{W}$
in conventional units. This exceeds the power that would be produced if, in 1 second, the entire galaxy were converted to gravitational radiation.

Canberra code
Analytic stability analysis of the underlying finite difference equations is impractical because of the extensive mix of spectral techniques, higher order methods, and splines. Although there is no clear-cut CFL limit on the code, stability tests show that there is a limit on the time step. The damping of high frequency modes due to the implicit filtering would be expected to suppress numerical instability, but the stability of the Canberra code is nevertheless subject to two qualifications [22, 23, 24]: (i) At late times (less than
$100M$
), the evolution terminates as it approaches an event horizon, apparently because of a breakdown of the NQS gauge condition, although an analysis of how and why this should occur has not yet been given. (ii) Numerical instabilities arise from dynamic inner boundary conditions and restrict the inner boundary to a fixed Schwarzschild horizon. Tests in the extreme nonlinear regime were not reported.
3.5.4 Accuracy

PITT code
Second order accuracy has been verified in an extensive number of testbeds [42, 31, 102, 228, 229] , including new exact solutions specifically constructed in null coordinates for the purpose of convergence tests:

∙
Linearized waves on a Minkowski background in null cone coordinates.

∙
Boost and rotation symmetric solutions [30] .

∙
Schwarzschild in rotating coordinates.

∙
Polarization symmetry of nonlinear twist-free axisymmetric waveforms.

∙
Robinson–Trautman waveforms from perturbed Schwarzschild black holes.

∙
Nonlinear Robinson–Trautman waveforms utilizing an independently computed solution of the Robinson–Trautman equation.

∙
Perturbations of a Schwarzschild black hole utilizing an independently computed solution of the Teukolsky equation.

Canberra code
The complexity of the algorithm and NQS gauge makes it problematic to establish accuracy by direct means. Exact solutions do not provide an effective convergence check, because the Schwarzschild solution is trivial in the NQS gauge and other known solutions in this gauge require dynamic inner boundary conditions which destabilize the present version of the code. Convergence to linearized solutions is a possible check but has not yet been performed.
Instead indirect tests by means of geometric consistency and partial convergence tests are used to calibrate accuracy. The consistency tests were based on the constraint equations, which are not enforced during null evolution except at the inner boundary. The balance between mass loss and radiation flux through
${\mathcal{I}}^{+}$
is a global consequence of these constraints. No appreciable growth of the constraints was noticeable until within
$5M$
of the final breakdown of the code. In weak field tests where angular resolution does not dominate the error, partial convergence tests based upon varying the radial grid size verify the 8th order convergence in the shear expected from the Runge–Kutta integration and splines. When the radial source of error is small, reduced error with smaller time step can also be discerned.
In practical runs, the major source of inaccuracy is the spherical harmonic resolution, which was restricted to
$\ell \le 15$
by hardware limitations. Truncation of the spherical harmonic expansion has the effect of modifying the equations to a system for which the constraints are no longer satisfied. The relative error in the constraints is
${10}^{-3}$
for strong field simulations [
21]
.
3.5.5 First versus second differential order
The PITT code was originally formulated in the second differential form of Equations ( 22 , 23 , 24 , 25 ), which in the spinweighted version leads to an economical number of 2 real and 2 complex variables. Subsequently, the variable
$$\begin{array}{c}{Q}_{A}={r}^{2}{e}^{-2\beta}{h}_{AB}{U}_{,r}^{B}\end{array}$$ 
(26)

was introduced to reduce Equation ( 23 ) to two first order radial equations, which simplified the startup procedure at the initial boundary. Although the resulting code has been verified to be stable and second order accurate, its application to increasingly difficult problems involving strong fields and steep gradients has led to numerical errors that make important physical effects hard to measure. In particular, in initial attempts to simulate a white hole fission, Gómez [
96]
encountered an oscillatory error pattern in the angular directions near the time of fission. The origin of the problem was tracked to numerical error of an oscillatory nature introduced by
${\eth}^{2}$
terms in the hypersurface and evolution equations. Gómez' solution was to remove the offending second angular derivatives by introducing additional variables and reducing the system to first differential order in the angular directions. This suppressed the oscillatory mode and subsequently improved performance in the simulation of the white hole fission problem [
98]
(see Section 3.7.2 ).
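The reduction behind Equation (26) can be illustrated with a toy radial equation; the choice $\beta=0$ and the fabricated source below are assumptions made so that an exact solution is available, and this is not the actual hypersurface system.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the first-order reduction in the spirit of Equation (26): a
# second order radial equation of the form  (r**4 exp(-2*beta) u')' = S(r)
# is replaced by the pair  q' = S  and  u' = exp(2*beta) q / r**4,  so only
# first radial derivatives appear.  Toy assumptions: beta = 0 and
# S = -2*r, for which u = 1/r is exact with q = r**4 u' = -r**2.
def rhs(r, y):
    u, q = y
    return [q / r**4, -2.0 * r]

r0, r1 = 1.0, 20.0
sol = solve_ivp(rhs, [r0, r1], [1.0, -1.0], rtol=1e-10, atol=1e-12)
u_end = sol.y[0][-1]
print(abs(u_end - 1.0 / r1))  # close to the exact solution 1/r
```

The same pattern underlies the startup simplification at the inner boundary: each first order equation needs only one integration constant there, rather than a value and a radial derivative.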
This success opens the issue of whether a completely first differential order code might perform even better, as has been proposed by Gómez and Frittelli [
97]
. They gave a first order quasilinear formulation of the Bondi system which, at the analytic level, obeys a standard uniqueness and existence theorem (extending previous work for the linearized case [
89]
); and they point out, at the numerical level, that a first order code could also benefit from the applicability of standard numerical techniques. This is an important issue which is not simple to resolve without code comparison. The part of the code in which the
${\eth}^{2}$
operator introduced the oscillatory error mode was not identified in [
96]
, i.e. whether it originated in the inner boundary treatment or in the interpolations between stereographic patches where second derivatives might be troublesome.
There are other possible ways to remove the oscillatory angular modes, such as adding angular dissipation or using more accurate methods of patching the sphere. The current finite difference algorithm only introduces numerical dissipation in the radial direction [
142]
. The economy of variables in the original Bondi scheme should not be abandoned without further tests and investigation.
3.5.6 Nonlinear scattering off a Schwarzschild black hole
A natural physical application of a characteristic evolution code is the nonlinear version of the classic problem of scattering off a Schwarzschild black hole, first solved perturbatively by Price [
175]
.
Here the inner worldtube for the characteristic initial value problem consists of the ingoing branch of the
$r=2m$
hypersurface (the past horizon), where Schwarzschild data are prescribed. The nonlinear problem of a gravitational wave scattering off a Schwarzschild black hole is then posed in terms of data on an outgoing null cone which describe an incoming pulse with compact support.
Part of the energy of this pulse falls into the black hole and part is backscattered to
${\mathcal{I}}^{+}$
. This problem has been investigated using both the PITT and Canberra codes.
The Pittsburgh group studied the backscattered waveform (described by the Bondi news function) as a function of incoming pulse amplitude. The computational eth module smoothly handled the complicated time dependent transformation between the noninertial computational frame at
${\mathcal{I}}^{+}$
and the inertial (Bondi) frame necessary to obtain the standard “plus” and “cross” polarization modes. In the perturbative regime, the news corresponds to the backscattering of the incoming pulse off the effective Schwarzschild potential. When the energy of the pulse is no larger than the central Schwarzschild mass, the backscattered waveform still depends roughly linearly on the amplitude of the incoming pulse. However, for very high amplitudes the waveform behaves quite differently. Its amplitude is greater than that predicted by linear scaling and its shape drastically changes and exhibits extra oscillations. In this very high amplitude case, the mass of the system is completely dominated by the incoming pulse, which essentially backscatters off itself in a nonlinear way.
The Canberra code was used to study the change in Bondi mass due to the radiation [
21]
. The Hawking mass
${M}_{H}(u,r)$
was calculated as a function of radius and retarded time, with the Bondi mass
${M}_{B}\left(u\right)$
then obtained by taking the limit
$r\to \infty $
. The limit had good numerical behavior.
For a strong initial pulse with
$\ell =4$
angular dependence, in a run from
$u=0$
to
$u=70$
(in units where the interior Schwarzschild mass is 1), the Bondi mass dropped from 1.8 to 1.00002, showing that almost half of the initial energy of the system was backscattered and that a surprisingly negligible amount of energy fell into the black hole. A possible explanation is that the truncation of the spherical harmonic expansion cuts off wavelengths small enough to effectively penetrate the horizon. The Bondi mass decreased monotonically in time, as necessary theoretically, but its rate of change exhibited an interesting pulsing behavior whose time scale could not be obviously explained in terms of quasinormal oscillations. The Bondi mass loss formula was confirmed with relative error of less than
${10}^{-3}$
. This is impressive accuracy considering the potential sources of numerical error introduced by taking the limit of the Hawking mass with limited resolution. The code was also used to study the appearance of logarithmic terms in the asymptotic expansion of the Weyl tensor [
25]
. In addition, the Canberra group studied the effect of the initial pulse amplitude on the waveform of the backscattered radiation, but did not extend their study to the very high amplitude regime in which qualitatively interesting nonlinear effects occur.
3.5.7 Black hole in a box
The PITT code has also been implemented to evolve along an advanced time foliation by ingoing null cones, with data given on a worldtube at their outer boundary and on the initial ingoing null cone. The code was used to evolve a black hole in the region interior to the worldtube by implementing a horizon finder to locate the marginally trapped surface (MTS) on the ingoing cones and excising its singular interior [
105]
. The code tracks the motion of the MTS and measures its area during the evolution. It was used to simulate a distorted “black hole in a box” [
102]
. Data at the outer worldtube was induced from a Schwarzschild or Kerr spacetime but the worldtube was allowed to move relative to the stationary trajectories; i.e. with respect to the grid the worldtube is fixed but the black hole moves inside it. The initial null data consisted of a pulse of radiation which subsequently travels outward to the worldtube where it reflects back toward the black hole. The approach of the system to equilibrium was monitored by the area of the MTS, which also equals its Hawking mass. When the worldtube is stationary (static or rotating in place), the distorted black hole inside evolved to equilibrium with the boundary. A boost or other motion of the worldtube with respect to the black hole did not affect this result. The marginally trapped surface always reached equilibrium with the outer boundary, confirming that the motion of the boundary was “pure gauge”.
The code runs “forever” even when the worldtube wobbles with respect to the black hole to produce artificial periodic time dependence. An initially distorted, wobbling black hole was evolved for a time of
$60,000M$
, longer by orders of magnitude than permitted by the stability of other existing 3D black hole codes at the time. This exceptional performance opens a promising new approach to handle the inner boundary condition for Cauchy evolution of black holes by the matching methods reviewed in Section 4 .
Note that setting the pulse to zero is equivalent to prescribing shear-free data on the initial null cone. Combined with Schwarzschild boundary data on the outer worldtube, this would be complete data for a Schwarzschild spacetime. However, the evolution of such shear-free null data combined with Kerr boundary data would have an initial transient phase before settling down to a Kerr black hole. This is because the twist of the shear-free Kerr null congruence implies that Kerr data specified on a null hypersurface are not generally shear-free. The event horizon is an exception, but Kerr null data on other null hypersurfaces have not been cast in explicit analytic form. This makes the Kerr spacetime an awkward testbed for characteristic codes. (Curiously, Kerr data on a null hypersurface with a conical-type singularity do take a simple analytic form, although unsuitable for numerical evolution [
79]
.) Using some intermediate analytic results of Israel and Pretorius [
173]
, Venter and Bishop [
219]
have recently constructed a numerical algorithm for transforming the Kerr solution into Bondi coordinates and in that way provide the necessary null data numerically.
3.6 Characteristic treatment of binary black holes
An important application of characteristic evolution is the calculation of the waveform emitted by binary black holes, which is possible during the very interesting nonlinear domain from merger to ringdown [
144,
224]
. The evolution is carried out along a family of ingoing null hypersurfaces which intersect the horizon in topological spheres. It is restricted to the period following the merger, for otherwise the ingoing null hypersurfaces would intersect the horizon in disjoint pieces corresponding to the individual black holes. The evolution proceeds backward in time on an ingoing null foliation to determine the exterior spacetime in the postmerger era. It is an example of the characteristic initial value problem posed on an intersecting pair of null hypersurfaces [
185,
119]
, for which existence theorems apply in some neighborhood of the initial null hypersurfaces [
155,
84,
83]
.
Here one of the null hypersurfaces is the event horizon
${\mathscr{H}}^{+}$
of the binary black holes. The other is an ingoing null hypersurface
${J}^{+}$
which intersects
${\mathscr{H}}^{+}$
in a topologically spherical surface
${\mathcal{S}}^{+}$
approximating the equilibrium of the final Kerr black hole, so that
${J}^{+}$
approximates future null infinity
${\mathcal{I}}^{+}$
. The required data for the analytic problem consists of the degenerate conformal null metrics of
${\mathscr{H}}^{+}$
and
${J}^{+}$
and the metric and extrinsic curvature of their intersection
${\mathcal{S}}^{+}$
.
The conformal metric of
${\mathscr{H}}^{+}$
is provided by the conformal horizon model for a binary black hole horizon [
144,
133]
, which treats the horizon in stand-alone fashion as a three-dimensional manifold endowed with a degenerate metric
${\gamma}_{ab}$
and affine parameter
$t$
along its null rays. The metric is obtained from the conformal mapping
${\gamma}_{ab}={\Omega}^{2}{\hat{\gamma}}_{ab}$
of the intrinsic metric
${\hat{\gamma}}_{ab}$
of a flat space null hypersurface emanating from a convex surface
${\mathcal{S}}_{0}$
embedded at constant time in Minkowski space.
The horizon is identified with the null hypersurface formed by the inner branch of the boundary of the past of
${\mathcal{S}}_{0}$
, and its extension into the future. The flat space null hypersurface expands forever as its affine parameter
$\hat{t}$
(Minkowski time) increases, but the conformal factor is chosen to stop the expansion so that the crosssectional area of the black hole approaches a finite limit in the future.
At the same time, the Raychaudhuri equation (which governs the growth of surface area) forces a nonlinear relation between the affine parameters
$t$
and
$\hat{t}$
. This is what produces the nontrivial topology of the affine
$t$
slices of the black hole horizon. The relative distortion between the affine parameters
$t$
and
$\hat{t}$
, brought about by curved space focusing, gives rise to the trousers shape of a binary black hole horizon.
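The interplay between area growth and the two affine parameters can be summarized schematically. The following is a sketch using standard conventions for null congruences, not the detailed equations of the conformal horizon model:

```latex
% Expansion of the horizon's null rays, measured by the cross-sectional
% area element \sqrt{\gamma} of the affine t-slices:
\Theta = \frac{d}{dt} \ln \sqrt{\gamma} \, .

% Vacuum Raychaudhuri equation along the affinely parametrized,
% twist-free rays, with shear \sigma_{ab}:
\frac{d\Theta}{dt} = -\frac{1}{2}\Theta^{2} - \sigma_{ab}\sigma^{ab} \le 0 \, .

% The conformal rescaling \gamma_{ab} = \Omega^{2} \hat{\gamma}_{ab} relates
% the curved-space area element to the flat-space one,
% \sqrt{\gamma} = \Omega^{2} \sqrt{\hat{\gamma}} \, ,
% so demanding a finite limiting area while \sqrt{\hat{\gamma}} grows without
% bound forces a nonlinear relation t = t(\hat{t}) between the affine parameters.
```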
Figure 3: Trousers-shaped event horizon obtained by the conformal model.
An embedding diagram of the horizon for an axisymmetric head-on collision, obtained by choosing
${\mathcal{S}}_{0}$
to be a prolate spheroid, is shown in Figure 3 [
144]
. The black hole event horizon associated with a triaxial ellipsoid reveals new features not seen in the degenerate case of the head-on collision [
133]
, as depicted in Figure 4. If the degeneracy is slightly broken, the individual black holes form with spherical topology, but as they approach, tidal distortion produces two sharp pincers on each black hole just prior to merger. At merger, the two pincers join to form a single temporarily toroidal black hole. The inner hole of the torus subsequently closes up to produce first a peanut-shaped black hole and finally a spherical black hole. No violation of topological censorship [
82]
occurs because the hole in the torus closes up superluminally. Consequently, a causal curve passing through the torus at a given time can be slipped below the bottom of a trouser leg to yield a causal curve lying entirely outside the hole [
190]
. In the degenerate axisymmetric limit, the pincers reduce to a point so that the individual holes have teardrop shape and they merge without a toroidal phase. Animations of this merger can be viewed at [
145]
.
Figure 4: Upper left: Tidal distortion of approaching black holes. Upper right: Formation of sharp pincers just prior to merger. Middle left: Temporarily toroidal stage just after merger. Middle right: Peanut-shaped black hole after the hole in the torus closes. Lower: Approach to final equilibrium.
The conformal horizon model determines the data on ${\mathscr{H}}^{+}$ and ${\mathcal{S}}^{+}$. The remaining data necessary to evolve the exterior spacetime are given by the conformal geometry of ${\mathcal{J}}^{+}$, which constitutes the outgoing radiation waveform. The determination of the merger-ringdown waveform proceeds in two stages. In the first stage, this outgoing waveform is set to zero and the spacetime is evolved backward in time to calculate the incoming radiation entering from ${\mathcal{I}}^{-}$. (This incoming radiation is eventually absorbed by the black hole.) From a time-reversed point of view, this evolution describes the outgoing waveform emitted in the fission of a white hole, with the physically correct initial condition of no ingoing radiation. Preliminary calculations show that at late times the waveform is entirely quadrupolar (
$\ell =2$
) but that a strong octopole mode (
$\ell =4$
) exists just before fission.
In the second stage of the calculation, this waveform could be used to generate the physically correct outgoing waveform for a black hole merger. The passage from the first stage to the second is the nonlinear equivalent of first determining an inhomogeneous solution to a linear problem and then adding the appropriate homogeneous solution to satisfy the boundary conditions. In this context, the first stage supplies an advanced solution and the second stage the homogeneous retarded minus advanced solution. When the evolution is carried out in the perturbative regime of a Kerr or Schwarzschild background, as in the close approximation [
176]
, this superposition of solutions is simplified by the time reflection symmetry [
224]
. The second stage has been carried out in the perturbative regime of the close approximation using a characteristic code which solves the Teukolsky equation, as described in Section 3.7. More generally, beyond the perturbative regime, the merger-ringdown waveform must be obtained by a more complicated inverse scattering procedure, which has not yet been attempted.
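In the linearized setting the superposition logic can be written out explicitly. A schematic sketch, using $\psi$ for a generic perturbation field (the notation here is illustrative, not taken from the cited papers):

```latex
% Stage I supplies the advanced-type solution \psi_{adv}: the correct
% horizon data, but no outgoing radiation at future null infinity.
% The difference of two solutions with the same horizon data satisfies
% the homogeneous (source-free) perturbation equation:
\psi_{hom} = \psi_{ret} - \psi_{adv} \, .

% Stage II adds this homogeneous piece to recover the retarded solution,
% which has no ingoing radiation but carries the physical outgoing waveform:
\psi_{ret} = \psi_{adv} + \left( \psi_{ret} - \psi_{adv} \right) \, .

% On a time-reflection-symmetric background (Schwarzschild, as in the
% close approximation), the homogeneous piece can be generated from the
% time reflection of the Stage I solution.
```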
There is a complication in applying the PITT code to this double-null evolution because a dynamic horizon does not lie precisely on $r$ grid points. As a result, the $r$ derivative of the null data, i.e. the ingoing shear of $\mathscr{H}$, must also be provided in order to initiate the radial hypersurface integrations. The ingoing shear is part of the free data specified at ${\mathcal{S}}^{+}$. Its value on $\mathscr{H}$ can be determined by integrating (backward in time) a sequence of propagation equations involving the horizon's twist and ingoing divergence. A horizon code which carries out these integrations has been tested to give accurate data even beyond the merger [
99]
.
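Numerically, such a backward-in-time integration is simply an ODE march with a negative step. The following is a minimal sketch of this idea, applying a generic fourth-order Runge–Kutta stepper to a toy propagation equation; it is not the actual horizon code or its shear/twist equations:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate_backward(f, t_final, y_final, t_target, n_steps):
    """March from known final data at t_final back to t_target (h < 0)."""
    h = (t_target - t_final) / n_steps   # negative when t_target < t_final
    t, y = t_final, y_final
    for _ in range(n_steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Toy propagation equation dy/dt = -0.5 * y with known final value
# y(1) = exp(-0.5); backward integration recovers the earlier value y(0) = 1.
f = lambda t, y: -0.5 * y
y0 = integrate_backward(f, 1.0, math.exp(-0.5), 0.0, 100)
```

The same pattern, with the right-hand side replaced by the horizon propagation system and the "final" data posed at ${\mathcal{S}}^{+}$, is what a backward horizon integration amounts to.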
The code has revealed new global properties of the head-on collision by studying a sequence of data for a family of colliding black holes which approaches a single Schwarzschild black hole.
The resulting perturbed Schwarzschild horizon provides global insight into the close limit [
176]
, in which the individual black holes have joined in the infinite past. A marginally anti-trapped surface divides the horizon into interior and exterior regions, analogous to the division of the Schwarzschild horizon by the
$r=2M$
bifurcation sphere. In passing from the perturbative to the strongly nonlinear regime there is a rapid transition in which the individual black holes move into the exterior portion of the horizon. The data paves the way for the PITT code to calculate whether this dramatic time dependence of the horizon produces an equally dramatic waveform.
See Section
3.7.2 for first stage results.
3.7 Perturbations of Schwarzschild
The nonlinear 3D PITT code has been calibrated in the regime of small perturbations of a Schwarzschild spacetime [
228,
229]
by measuring convergence with respect to independent solutions of the Teukolsky equation [
210]
. By decomposition into spherical harmonics, the Teukolsky equation reduces the problem of a perturbation of a stationary black hole to a 1D problem in the
$(t,r)$
subspace for perturbations of a component of the Weyl tensor. Historically, the Teukolsky equation was first solved numerically by Cauchy evolution. Campanelli, Gómez, Husa, Winicour, and Zlochower [
56,
134]
have reformulated the Teukolsky formalism as a double-null characteristic evolution algorithm. The evolution proceeds on a family of outgoing null hypersurfaces with an ingoing null hypersurface as inner boundary and with the outer boundary compactified at future null infinity.
It applies to either the Weyl component
${\Psi}_{0}$
or
${\Psi}_{4}$
, as classified in the Newman–Penrose formalism.
The ${\Psi}_{0}$ component comprises constraint-free gravitational data on an outgoing null hypersurface and ${\Psi}_{4}$ comprises the corresponding data on an ingoing null hypersurface. In the study of perturbations of a Schwarzschild black hole, ${\Psi}_{0}$ is prescribed on an outgoing null hypersurface ${\mathcal{J}}^{-}$, representing an early retarded time approximating past null infinity, and ${\Psi}_{4}$ is prescribed on the inner white hole horizon ${\mathscr{H}}^{-}$.
The physical setup is described in Figure 5. The outgoing null hypersurfaces extend to future null infinity ${\mathcal{I}}^{+}$
on a compactified numerical grid. Consequently, there is no need for either an artificial outer boundary condition or an interior extraction worldtube. The outgoing radiation is computed in the coordinates of an observer in an inertial frame at infinity, thus avoiding any gauge ambiguity in the waveform.
Figure 5: The physical setup for the scattering problem. A star of mass $M$ has undergone spherically symmetric collapse to form a black hole. The ingoing null worldtube $\mathcal{N}$ lies outside the collapsing matter. Inside $\mathcal{N}$ (but outside the matter) there is a vacuum Schwarzschild metric. Outside of $\mathcal{N}$, data for an ingoing pulse is specified on the initial outgoing null hypersurface ${\mathcal{J}}^{-}$. As the pulse propagates to the black hole event horizon ${\mathscr{H}}^{+}$, part of its energy is scattered to ${\mathcal{I}}^{+}$.
The first calculations were carried out with nonzero data for ${\Psi}_{4}$ on ${\mathscr{H}}^{-}$ and zero data on ${\mathcal{J}}^{-}$ [56] (so that no ingoing radiation entered the system). The resulting simulations were highly accurate, tracking the quasinormal ringdown of a perturbation consisting of a compact pulse through 10 orders of magnitude and the final power law decay through an additional 6 orders of magnitude. The measured exponent of the power law decay varied from
$\approx 5.8$
, at the beginning of the tail, to
$\approx 5.9$
near the end, in good agreement with the predicted value of
$2\ell +2=6$
for a quadrupole wave [
175]
.
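Such tail exponents are obtained by fitting the late-time signal to a power law: on a log-log plot, a decay $t^{-(2\ell+2)}$ is a straight line of slope $-(2\ell+2)$. A minimal sketch of this kind of fit, on synthetic noise-free data rather than actual code output:

```python
import numpy as np

# Synthetic late-time tail: |psi(t)| ~ C * t^(-6), the predicted
# quadrupole (l = 2) decay rate 2*l + 2 = 6. Amplitude C is arbitrary.
t = np.linspace(100.0, 1000.0, 500)
psi = 3.0e-4 * t**-6

# Fit log|psi| = log C + p * log t; the slope p estimates the exponent.
p, log_C = np.polyfit(np.log(t), np.log(psi), 1)
exponent = -p   # measured decay exponent; close to 6 for this data
```

In practice the measured exponent drifts across the tail (here from about 5.8 to 5.9) because the fit window also contains residual ringdown and the asymptotic regime is approached only gradually.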
The accuracy of the perturbative solutions provides a virtually exact solution for carrying out convergence tests of the nonlinear PITT null code. In this way, the error in the Bondi news function computed by the PITT code was calibrated for perturbative data consisting of either an outgoing pulse on ${\mathscr{H}}^{-}$ or an ingoing pulse on ${\mathcal{J}}^{-}$
. For the outgoing pulse, clean second order convergence was confirmed until late times in the evolution, when small deviations from second order arise from accumulation of roundoff and truncation error. For the Bondi news produced by the scattering of an ingoing pulse, clean second order convergence was again confirmed until late times when the pulse approached the
$r=2M$
black hole horizon. The late time error arises from loss of resolution of the pulse (in the radial direction) resulting from the properties of the compactified radial coordinate used in the code. This type of error could be eliminated by using characteristic AMR techniques under development [
174]
.
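The resolution issue can be seen directly from the compactification. If the radial coordinate has the form $x = r/(R+r)$ on a uniform grid (this specific form is an assumption for illustration; the code's actual compactification may differ in detail), then equal steps in $x$ correspond to physical steps $\Delta r$ that grow without bound as $x \to 1$ ($r \to \infty$), so a pulse of fixed radial width occupies fewer and fewer grid points:

```python
import numpy as np

R = 2.0  # compactification scale (illustrative value)

# Uniform grid in the compactified coordinate x = r / (R + r), x in [0, 1).
x = np.linspace(0.0, 0.99, 100)
r = R * x / (1.0 - x)          # inverse map: r = R * x / (1 - x)

# Physical grid spacing between neighboring points: since
# dr/dx = R / (1 - x)^2, the spacing Delta r grows monotonically
# as the grid approaches null infinity at x = 1.
dr = np.diff(r)
```

Adaptive mesh refinement in the radial direction would counteract exactly this loss of resolution by concentrating points where the pulse is.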
3.7.1 Close approximation white hole and black hole waveforms
The characteristic Teukolsky code has been used to study radiation from axisymmetric white holes and black holes in the close approximation. The radiation from an axisymmetric fissioning white hole [
56]
was computed using the Weyl data on ${\mathscr{H}}^{-}$ supplied by the conformal horizon model described in Section 3.6, with the fission occurring along the axis of symmetry. The close approximation implies that the fission takes place far in the future, i.e. in the region of ${\mathscr{H}}^{-}$ above the black hole horizon ${\mathscr{H}}^{+}$. The data have a free parameter
$\eta $
which controls the energy yielded by the white hole fission. The radiation waveform reveals an interesting dependence on the parameter
$\eta $
. In the large
$\eta $
limit, the waveform consists of a single pulse, followed by ringdown and tail decay. The amplitude of the pulse scales quadratically with
$\eta $
and the width decreases with
$\eta $
.
As
$\eta $
is reduced, the initial pulse broadens and develops more structure. In the small
$\eta $
limit, the amplitude scales linearly with
$\eta $
and the shape is independent of
$\eta $
.
Since there was no incoming radiation, the above model gave the physically appropriate boundary conditions for a white hole fission (in the close approximation). From a time-reversed viewpoint, the system corresponds to a black hole merger with no outgoing radiation at future null infinity, i.e. the analog of an advanced solution with only ingoing but no outgoing radiation. In the axisymmetric case studied, the merger corresponds to a head-on collision between two black holes. The physically appropriate boundary conditions for a black hole merger correspond to no ingoing radiation on ${\mathcal{J}}^{-}$ and binary black hole data on ${\mathscr{H}}^{+}$. Because ${\mathcal{J}}^{-}$ and ${\mathscr{H}}^{+}$ are disjoint, the corresponding data cannot be used directly to formulate a double-null characteristic initial value problem. However, the ingoing radiation at ${\mathcal{J}}^{-}$ supplied by the advanced solution for the black hole merger could be used as Stage I of a two-stage approach to determine the corresponding retarded solution. In Stage II, this ingoing radiation is used to generate the analogue of an advanced minus retarded solution.
A pure retarded solution (with no ingoing radiation but outgoing radiation at ${\mathcal{I}}^{+}$) can then be constructed by superposition. The time reflection symmetry of the Schwarzschild background is key to carrying out this construction.
This two-stage strategy has been carried out by Husa, Zlochower, Gómez, and Winicour [
134]
.
The superposition of the Stage I and II solutions removes the ingoing radiation from