User:Wcris~ptwiki/wave equation

Source: Wikipédia, the free encyclopedia.


In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics.

In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only atomic and subatomic systems, electrons and atoms, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who discovered it in 1926.[1]

Schrödinger's equation can be mathematically transformed into Heisenberg's matrix mechanics, and into Feynman's path integral formulation. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is not as severe in Heisenberg's formulation and completely absent in the path integral.

The Schrödinger equation

The Schrödinger equation takes several different forms, depending on the physical situation. This section presents the equation for the general case and for the simple case encountered in many textbooks.

General quantum system

For a general quantum system:

i\hbar\frac{\partial}{\partial t}\Psi = \hat{H}\Psi

where

  • \Psi is the wavefunction, the quantum state of the system, and
  • \hat{H} is the Hamiltonian operator, which characterizes the total energy of the system.

Single particle in three dimensions

For a single particle in three dimensions:

i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r},t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t) + V(\mathbf{r})\,\Psi(\mathbf{r},t)

where

  • \mathbf{r} is the particle's position in three-dimensional space,
  • \Psi(\mathbf{r},t) is the wavefunction, which is the amplitude for the particle to have a given position r at any given time t,
  • m is the mass of the particle,
  • V(\mathbf{r}) is the time-independent potential energy of the particle at each position r.
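As an illustration (not part of the original article), the one-dimensional version of this equation can be integrated numerically. The sketch below, with ħ = m = 1 and an arbitrarily chosen harmonic potential and grid, uses a Crank-Nicolson step; its Cayley form keeps the evolution unitary, so the total probability stays 1.

```python
import numpy as np

# Sketch: Crank-Nicolson steps for i dpsi/dt = H psi with
# H = -1/2 d^2/dx^2 + V(x), in units hbar = m = 1.
# Grid, time step, and potential are illustrative choices.
N, L, dt = 400, 20.0, 0.002
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                        # harmonic well as an example potential
main = 1.0/dx**2 + V                  # diagonal of H (kinetic + potential)
off = -0.5/dx**2 * np.ones(N - 1)     # nearest-neighbour "hopping"
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# Cayley form: (1 + i dt H/2) psi_new = (1 - i dt H/2) psi_old (unitary)
A = np.eye(N) + 0.5j*dt*H
B = np.eye(N) - 0.5j*dt*H

psi = np.exp(-(x - 1.0)**2)           # displaced Gaussian initial condition
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)

norm = np.sum(np.abs(psi)**2) * dx
print(norm)                           # stays 1 to numerical accuracy
```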

Historical background and development

Einstein interpreted Planck's quanta as photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, a mysterious wave-particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in relativity, it followed that the momentum of a photon is proportional to its wavenumber.

De Broglie hypothesized that this is true for all particles, for electrons as well as photons: the energy and momentum of an electron are the frequency and wavenumber of a wave. Assuming that the waves travel roughly along classical paths, he showed that they form standing waves only for certain discrete frequencies, discrete energy levels which reproduced the old quantum condition.

Following up on these ideas, Schrödinger decided to find a proper wave equation for the electron. He was guided by Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system: the trajectories of light rays become sharp tracks which obey an analog of the principle of least action. Hamilton believed that mechanics was the zero-wavelength limit of wave propagation, but did not formulate an equation for those waves. This is what Schrödinger did, and a modern version of his reasoning is reproduced in the next section. The equation he found is (in natural units):

i\frac{\partial}{\partial t}\psi = -\frac{1}{2m}\nabla^2\psi + V\psi

Using this equation, Schrödinger computed the spectral lines for hydrogen by treating a hydrogen atom's single negatively charged electron as a wave, \psi, moving in a potential well, V, created by the positively charged proton. This computation reproduced the energy levels of the Bohr model.

But this was not enough, since Sommerfeld had already seemingly correctly reproduced relativistic corrections. Schrödinger used the relativistic energy-momentum relation to find what is now known as the Klein-Gordon equation in a Coulomb potential:

\left(E + \frac{e^2}{r}\right)^2\psi(r) = -\nabla^2\psi(r) + m^2\psi(r)

He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin with a lover.[citation needed]

While there, Schrödinger decided that the earlier nonrelativistic calculations were novel enough to publish, and decided to set aside the problem of relativistic corrections for the future. He put together his wave equation and the spectral analysis of hydrogen in a paper in 1926.[2] The paper was enthusiastically endorsed by Einstein, who saw the matter-waves as the visualizable antidote to what he considered to be the overly formal matrix mechanics.

The Schrödinger equation tells you the behaviour of \psi, but does not say what \psi is. Schrödinger tried unsuccessfully, in his fourth paper, to interpret it as a charge density.[3] In 1926 Max Born, just a few days after Schrödinger's fourth and final paper was published, successfully interpreted \psi as a probability amplitude.[4] Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities; like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory, Schrödinger was never reconciled to the Copenhagen interpretation.[5]

Derivation

Short heuristic derivation

Assumptions

(1) The total energy E of a particle is

E = \frac{p^2}{2m} + V

This is the classical expression for a particle with mass m, where the total energy E is the sum of the kinetic energy, p^2/2m, and the potential energy V. The momentum of the particle is p, or mass times velocity. The potential energy is assumed to vary with position, and possibly time as well.

Note that the energy E and momentum p appear in the following two relations:

(2) Einstein's light quantum hypothesis of 1905, which asserts that the energy E of a photon is proportional to the frequency f of the corresponding electromagnetic wave:

E = hf = \hbar\omega

where the frequency f of the quanta of radiation (photons) is related to the energy by Planck's constant h, and \omega = 2\pi f is the angular frequency of the wave.

(3) The de Broglie hypothesis of 1924, which states that any particle can be associated with a wave, represented mathematically by a wavefunction \Psi, and that the momentum p of the particle is related to the wavelength \lambda of the associated wave by:

p = \frac{h}{\lambda} = \hbar k

where \lambda is the wavelength and k = \frac{2\pi}{\lambda} is the wavenumber of the wave.

Expressing p and k as vectors, we have

\mathbf{p} = \hbar\mathbf{k}

Expressing the wave function as a complex plane wave

Schrödinger's great insight, late in 1925, was to express the phase of a plane wave as a complex phase factor:

\Psi(x,t) = A\,e^{i(kx - \omega t)}

and to realize that, since

\frac{\partial \Psi}{\partial t} = -i\omega\,\Psi

then

E\,\Psi = \hbar\omega\,\Psi = i\hbar\frac{\partial \Psi}{\partial t}

and similarly, since

\frac{\partial \Psi}{\partial x} = ik\,\Psi

and

\frac{\partial^2 \Psi}{\partial x^2} = -k^2\,\Psi

we find:

p\,\Psi = \hbar k\,\Psi = -i\hbar\frac{\partial \Psi}{\partial x}

so that, again for a plane wave, he obtained:

p^2\,\Psi = \hbar^2 k^2\,\Psi = -\hbar^2\frac{\partial^2 \Psi}{\partial x^2}

And by inserting these expressions for the energy and momentum into the classical formula we started with, we get Schrödinger's famed equation for a single particle in the 3-dimensional case in the presence of a potential V:

i\hbar\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi
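This chain of substitutions can be checked symbolically. The sketch below (with ħ kept explicit) verifies that a plane wave with the free-particle dispersion ω = ħk²/2m satisfies iħ ∂Ψ/∂t = -(ħ²/2m) ∂²Ψ/∂x², the V = 0 case of the equation above.

```python
import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
omega = hbar*k**2/(2*m)               # free-particle dispersion relation
Psi = sp.exp(sp.I*(k*x - omega*t))    # the complex plane wave

lhs = sp.I*hbar*sp.diff(Psi, t)             # i hbar dPsi/dt
rhs = -hbar**2/(2*m)*sp.diff(Psi, x, 2)     # -(hbar^2/2m) d^2 Psi/dx^2
print(sp.simplify(lhs - rhs))               # 0: the plane wave is a solution
```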

Longer discussion

The particle is described by a wave, and in natural units the frequency is the energy E of the particle, while the momentum p is the wavenumber k. These are not two separate assumptions, because of special relativity.

The total energy is the same function of momentum and position as in classical mechanics:

E = T(p) + V(x) = \frac{p^2}{2m} + V(x)

where the first term T(p) is the kinetic energy and the second term V(x) is the potential energy.

Schrödinger required that a wavepacket at position x with wavenumber k will move along the trajectory determined by Newton's laws in the limit that the wavelength is small.

Consider first the case without a potential, V = 0, so that a plane wave with the right energy/frequency relationship obeys the free Schrödinger equation:

i\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\frac{\partial^2 \psi}{\partial x^2}

and by adding together plane waves, you can make an arbitrary wave.

When there is no potential, a wavepacket should travel in a straight line at the classical velocity. The velocity v of a wavepacket is:

v = \frac{\partial \omega}{\partial k} = \frac{\partial}{\partial k}\frac{k^2}{2m} = \frac{k}{m}

which is the momentum over the mass, as it should be. This is one of Hamilton's equations from mechanics:

\frac{dx}{dt} = \frac{\partial H}{\partial p}

after identifying the energy and momentum of a wavepacket as the frequency and wavenumber.

To include a potential energy, consider that as a particle moves the energy is conserved, so that for a wavepacket with approximate wavenumber k at approximate position x the quantity

\frac{k(x)^2}{2m} + V(x)

must be constant. The frequency doesn't change as a wave moves, but the wavenumber does. So where there is a potential energy, it must add in the same way:

i\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi

This is the time-dependent Schrödinger equation. It is the equation for the energy in classical mechanics, turned into a differential equation by substituting:

E \rightarrow i\frac{\partial}{\partial t}, \qquad p \rightarrow -i\frac{\partial}{\partial x}

Schrödinger studied the standing-wave solutions, since these were the energy levels. Standing waves have a complicated dependence on space, but vary in time in a simple way:

\psi(x,t) = \psi_E(x)\, e^{-iEt}

Substituting, the time-dependent equation becomes the standing-wave equation:

E\,\psi_E(x) = -\frac{1}{2m}\frac{\partial^2 \psi_E}{\partial x^2}(x) + V(x)\,\psi_E(x)

which is the original time-independent Schrödinger equation.

In a potential gradient, the k-vector of a short-wavelength wave must vary from point to point, to keep the total energy constant. Sheets perpendicular to the k-vector are the wavefronts, and they gradually change direction, because the wavelength is not everywhere the same. A wavepacket follows the shifting wavefronts with the classical velocity, with the acceleration equal to the force divided by the mass.

An easy modern way to verify that Newton's second law holds for wavepackets is to take the Fourier transform of the time-dependent Schrödinger equation. For an arbitrary polynomial potential this is called the Schrödinger equation in the momentum representation:

i\frac{\partial \tilde\psi(p,t)}{\partial t} = \frac{p^2}{2m}\,\tilde\psi(p,t) + V\!\left(i\frac{\partial}{\partial p}\right)\tilde\psi(p,t)

The group-velocity relation for the Fourier-transformed wavepacket gives the second of Hamilton's equations.

Versions

There are several equations which go by Schrödinger's name:

Time dependent equation

This is the equation of motion for the quantum state. In the most general form, it is written:

i\hbar\frac{\partial \psi}{\partial t} = H\psi

where H is a linear operator acting on the wavefunction \psi. H takes as input one \psi and produces another in a linear way, a function-space version of a matrix multiplying a vector. For the specific case of a single particle in one dimension moving under the influence of a potential V (adopting natural units where \hbar = 1):

i\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi

and the operator H can be read off:

H\psi(x) = -\frac{1}{2m}\frac{\partial^2 \psi}{\partial x^2}(x) + V(x)\,\psi(x)

It is a combination of the operator which takes the second derivative and the operator which pointwise multiplies by V(x); acting on \psi, it reproduces the right-hand side.

For a particle in three dimensions, the only difference is more derivatives:

i\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\nabla^2\psi + V(\mathbf{r})\,\psi

and for N particles, the difference is that the wavefunction is in 3N-dimensional configuration space, the space of all possible particle positions:

i\frac{\partial \psi}{\partial t} = -\sum_{i=1}^{N}\frac{1}{2m_i}\nabla_i^2\psi + V(\mathbf{r}_1,\ldots,\mathbf{r}_N)\,\psi

This last equation is in a very high dimension, so the solutions are not easy to visualize.

Time independent equation

This is the equation for the standing waves, the eigenvalue equation for H. In abstract form, for a general quantum system, it is written:

H\psi = E\psi

For a particle in one dimension,

E\,\psi(x) = -\frac{1}{2m}\frac{\partial^2 \psi}{\partial x^2}(x) + V(x)\,\psi(x)

But there is a further restriction: the solution must not grow at infinity, so that it has a finite L^2-norm:

\|\psi\|^2 = \int |\psi(x)|^2\, dx < \infty

For example, when there is no potential, the equation reads:

E\,\psi = -\frac{1}{2m}\frac{\partial^2 \psi}{\partial x^2}

which has oscillatory solutions for E > 0 (the C's are arbitrary constants):

\psi(x) = C_1\, e^{i\sqrt{2mE}\,x} + C_2\, e^{-i\sqrt{2mE}\,x}

and exponential solutions for E < 0:

\psi(x) = C_1\, e^{\sqrt{2m|E|}\,x} + C_2\, e^{-\sqrt{2m|E|}\,x}

The exponentially growing solutions have an infinite norm, and are not physical. They are not allowed in a finite volume with periodic or fixed boundary conditions.

For a constant potential V the solution is oscillatory for E > V and exponential for E < V, corresponding to energies which are allowed or disallowed in classical mechanics. Oscillatory solutions have a classically allowed energy and correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small amount of quantum bleeding into the classically disallowed region, related to quantum tunneling. If the potential V grows at infinity, the motion is classically confined to a finite region, which means that in quantum mechanics every solution becomes an exponential far enough away. The condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.
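The discrete allowed energies can be seen numerically. A minimal sketch (ħ = m = 1, with the harmonic potential V = x²/2 chosen for illustration, where the exact levels are n + 1/2): diagonalize a finite-difference approximation of H and inspect the lowest eigenvalues.

```python
import numpy as np

# Finite-difference H = -1/2 d^2/dx^2 + x^2/2 on a grid (hbar = m = 1).
# Grid size and box length are illustrative choices.
N, L = 1000, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = 0.5 * x**2
H = (np.diag(1.0/dx**2 + V)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)
print(E[:4])   # close to the exact harmonic levels 0.5, 1.5, 2.5, 3.5
```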

Energy eigenstates

A solution \psi_E(x) of the time-independent equation is called an energy eigenstate with energy E:

H\psi_E = E\,\psi_E

To find the time dependence of the state, consider starting the time-dependent equation with the initial condition \psi(x,0) = \psi_E(x). The time derivative at t = 0 is everywhere proportional to the value:

i\frac{\partial \psi}{\partial t}(x,0) = H\psi_E(x) = E\,\psi_E(x)

So at first the whole function just gets rescaled, and it maintains the property that its time derivative is proportional to itself. So for all times,

\psi(x,t) = A(t)\,\psi_E(x)

Substituting,

i\frac{dA}{dt} = E\,A(t)

So the solution of the time-dependent equation with this initial condition is:

\psi(x,t) = \psi_E(x)\, e^{-iEt}

This is a restatement of the fact that solutions of the time-independent equation are the standing-wave solutions of the time-dependent equation. They only get multiplied by a phase as time goes by, and are otherwise unchanged.

Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels.

Nonlinear equation

The nonlinear Schrödinger equation is the partial differential equation

i\frac{\partial \psi}{\partial t} = -\frac{1}{2}\frac{\partial^2 \psi}{\partial x^2} + \kappa|\psi|^2\psi

for the complex field \psi(x,t).

This equation arises from the Hamiltonian

H = \int \left[\frac{1}{2}\left|\frac{\partial \psi}{\partial x}\right|^2 + \frac{\kappa}{2}|\psi|^4\right] dx

with the Poisson brackets

\{\psi(x), \psi^*(y)\} = i\,\delta(x - y), \qquad \{\psi(x), \psi(y)\} = \{\psi^*(x), \psi^*(y)\} = 0

Note that this is a classical field equation. Unlike its linear counterpart, it never describes the time evolution of a quantum state.

Properties

First order in time

The Schrödinger equation describes the time evolution of a quantum state, and must determine the future value from the present value. A classical field equation can be second order in time derivatives, because the classical state can include the time derivative of the field. But a quantum state is a full description of a system, so the Schrödinger equation is always first order in time.

Linear

The Schrödinger equation is linear in the wavefunction: if \psi_1 and \psi_2 are solutions to the time-dependent equation, then so is a\psi_1 + b\psi_2, where a and b are any complex numbers.

In quantum mechanics, the time evolution of a quantum state is always linear, for fundamental reasons. Although there are nonlinear versions of the Schrödinger equation, these are not equations which describe the evolution of a quantum state, but classical field equations like Maxwell's equations or the Klein-Gordon equation.

The Schrödinger equation itself can be thought of as the equation of motion for a classical field, not for a wavefunction. Taking this point of view, it describes a coherent wave of nonrelativistic matter: a wave of a Bose condensate or a superfluid with a large indefinite number of particles and a definite phase and amplitude.

Real eigenstates

The time-independent equation is also linear, but here linearity has a slightly different meaning. If two wavefunctions \psi_1 and \psi_2 are solutions to the time-independent equation with the same energy E, then any linear combination of the two is a solution with energy E. Two different solutions with the same energy are called degenerate.

In an arbitrary potential, there is one obvious degeneracy: if a wavefunction \psi solves the time-independent equation, so does its complex conjugate \psi^*. By taking linear combinations, the real and imaginary parts of \psi are each solutions. So restricting attention to real-valued wavefunctions does not affect the time-independent eigenvalue problem.

In the time-dependent equation, complex conjugate waves move in opposite directions. Given a solution \psi(x,t) to the time-dependent equation, the replacement:

\psi(x,t) \rightarrow \psi^*(x,-t)

produces another solution, and is the extension of the complex conjugation symmetry to the time-dependent case. The symmetry of complex conjugation is called time-reversal.

Unitary time evolution

The Schrödinger equation is unitary, which means that the total norm of the wavefunction, the sum of the squares of the value at all points:

\int \psi^*(x)\,\psi(x)\, dx

has zero time derivative.

The derivative of \psi^* is, according to the complex conjugate equation,

-i\frac{\partial \psi^*}{\partial t} = \psi^* H^\dagger

where the operator H^\dagger is defined as the continuous analog of the Hermitian conjugate,

H^\dagger(x,y) = H(y,x)^*

For a discrete basis, this just means that the matrix elements of the linear operator H obey:

H^\dagger_{mn} = H_{nm}^*

The derivative of the inner product is:

\frac{d}{dt}\int \psi^*\psi\, dx = i\int \psi^*\,(H^\dagger - H)\,\psi\, dx

and is proportional to the imaginary part of H. If H has no imaginary part, if it is self-adjoint, then the probability is conserved. This is true not just for the Schrödinger equation as written, but for the Schrödinger equation with nonlocal hopping:

i\frac{\partial \psi}{\partial t}(x) = \int H(x,y)\,\psi(y)\, dy

so long as:

H(x,y) = H(y,x)^*

The particular choice:

H(x,y) = -\frac{1}{2m}\,\delta''(x - y) + V(x)\,\delta(x - y)

reproduces the local hopping in the ordinary Schrödinger equation. On a discrete lattice approximation to a continuous space, with lattice spacing \epsilon, H(x,y) has a simple form:

H(x,y) = -\frac{1}{2m\epsilon^2}

whenever x and y are nearest neighbors. On the diagonal,

H(x,x) = \frac{n}{2m\epsilon^2} + V(x)

where n is the number of nearest neighbors.
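The lattice form lends itself to a direct check. A sketch with illustrative values (m = 1, lattice spacing 1, a periodic chain so every site has n = 2 neighbours, and an arbitrary random on-site potential): the hopping matrix is self-adjoint, so the exactly exponentiated evolution preserves the norm.

```python
import numpy as np
from scipy.linalg import expm

N, m, eps = 50, 1.0, 1.0
rng = np.random.default_rng(0)
V = rng.uniform(0.0, 1.0, N)              # arbitrary on-site potential

H = np.diag(2/(2*m*eps**2) + V)           # diagonal: n/(2m eps^2) + V, n = 2
for i in range(N):
    H[i, (i+1) % N] = -1/(2*m*eps**2)     # nearest-neighbour hopping
    H[i, (i-1) % N] = -1/(2*m*eps**2)

assert np.allclose(H, H.conj().T)         # H is self-adjoint

psi = rng.normal(size=N) + 1j*rng.normal(size=N)
psi /= np.linalg.norm(psi)
U = expm(-1j*0.7*H)                       # exact propagator for t = 0.7
print(np.linalg.norm(U @ psi))            # norm is conserved (1 up to roundoff)
```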

Positivity of energy

If the potential is bounded from below, the eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be seen most easily by using the variational principle, as follows. (See also below.)

For any linear operator H bounded from below, the eigenvector with the smallest eigenvalue is the vector \psi that minimizes the quantity

\langle \psi | H | \psi \rangle = \int \psi^*(x)\, H\psi(x)\, dx

over all \psi which are normalized:

\int \psi^*(x)\,\psi(x)\, dx = 1

In this way, the smallest eigenvalue is expressed through the variational principle.

For the Schrödinger Hamiltonian H bounded from below, the smallest eigenvalue is called the ground state energy. That energy is the minimum value of

\langle \psi | H | \psi \rangle = \int \left[\frac{1}{2m}\left|\nabla\psi\right|^2 + V(x)\,|\psi(x)|^2\right] dx

(we used an integration by parts). The right-hand side is never smaller than the smallest value of V(x); in particular, the ground state energy is positive when V(x) is everywhere positive.
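A numerical sketch of these bounds (ħ = m = 1, with an arbitrary double-well potential and grid): any normalized trial state gives a Rayleigh quotient at least as large as the smallest eigenvalue, and the ground-state energy is never below the minimum of V.

```python
import numpy as np

# Discretized H = -1/2 d^2/dx^2 + V(x); all grid values are illustrative.
N, L = 500, 10.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = (x**2 - 1)**2                       # a double well, bounded below by 0
H = (np.diag(1.0/dx**2 + V)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)

rng = np.random.default_rng(0)
psi = rng.normal(size=N)
psi /= np.linalg.norm(psi)
rayleigh = psi @ H @ psi                # <psi|H|psi> for a random trial state

print(E[0] <= rayleigh, E[0] >= V.min())   # True True
```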

Positive definite nondegenerate ground state

For potentials that are bounded below and are not infinite in such a way as to divide space into regions which are inaccessible by quantum tunneling, there is a ground state which minimizes the integral above. The lowest-energy wavefunction is real and nondegenerate, and has the same sign everywhere.

To prove this, let the ground state wavefunction be \psi. The real and imaginary parts are separately ground states, so it is no loss of generality to assume that \psi is real. Suppose now, for contradiction, that \psi changes sign. Define \psi' to be the absolute value of \psi.

The potential and kinetic energy integrals for \psi' are equal to those for \psi, except that \psi' has a kink wherever \psi changes sign. The integrated-by-parts expression for the kinetic energy is the sum of the squared magnitude of the gradient, and it is always possible to round out the kink in such a way that the gradient gets smaller at every point, so that the kinetic energy is reduced.

This also proves that the ground state is nondegenerate. If there were two ground states \psi_1 and \psi_2, not proportional to each other and both everywhere nonnegative, then a linear combination of the two is still a ground state, but it can be made to have a sign change.

For one-dimensional potentials, every eigenstate is nondegenerate, because the number of sign changes is equal to the level number.

Already in two dimensions, it is easy to get a degeneracy: for example, if a particle is moving in a separable potential V(x,y) = U(x) + W(y), then the energy levels are the sums of the energies of the one-dimensional problems. It is easy to see, by adjusting the overall scale of U and W, that the levels can be made to collide.

For standard examples, the three-dimensional harmonic oscillator and the central potential, the degeneracies are a consequence of symmetry.

Completeness

The energy eigenstates form a basis--- any wavefunction may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.

Local conservation of probability

The probability density of a particle is \rho = \psi^*\psi. The probability flux is defined as:

\mathbf{j} = \frac{\hbar}{2mi}\left(\psi^*\nabla\psi - \psi\nabla\psi^*\right)

in units of (probability)/(area × time).

The probability flux satisfies the continuity equation:

\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j} = 0

where \rho = |\psi|^2 is the probability density, measured in units of (probability)/(volume). This equation is the mathematical equivalent of the probability conservation law.

For a plane wave \psi = A\,e^{i(kx - \omega t)}:

\mathbf{j} = |A|^2\,\frac{\hbar k}{m}

So not only is the probability of finding the particle the same everywhere, but the probability flux is as expected for an object moving at the classical velocity \hbar k/m.

The reason that the Schrödinger equation admits a probability flux is that all the hopping is local and forward in time.
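A quick numerical check of the plane-wave flux (ħ = m = 1; the amplitude and wavenumber below are arbitrary): evaluating j on a grid reproduces |A|²k/m.

```python
import numpy as np

# j = (1/2mi)(psi* dpsi/dx - psi dpsi*/dx) for psi = A e^{ikx}, hbar = 1.
A, k, m = 0.7, 3.0, 1.0
x = np.linspace(0, 2*np.pi, 2001)
psi = A * np.exp(1j*k*x)
dpsi = np.gradient(psi, x)                         # numerical d(psi)/dx
j = (psi.conj()*dpsi - psi*dpsi.conj()) / (2j*m)
print(j.real[1000])                                # approx |A|^2 k/m = 1.47
```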

Heisenberg observables

There are many linear operators which act on the wavefunction; each one defines a Heisenberg matrix when the energy eigenstates are discrete. For a single particle, the operator which takes the derivative of the wavefunction in a certain direction:

p\,\psi(x) = -i\hbar\frac{\partial \psi}{\partial x}(x)

is called the momentum operator. Multiplying operators is just like multiplying matrices: the product of A and B acting on \psi is A acting on the output of B acting on \psi.

An eigenstate of p obeys the equation:

p\,\psi = \hbar k\,\psi

for a number k, and for a normalizable wavefunction this restricts k to be real; the momentum eigenstate is a wave e^{ikx} with spatial frequency k.

The position operator x multiplies each value of the wavefunction at the position x by x:

(x\,\psi)(x) = x\,\psi(x)

So in order to be an eigenstate of x, a wavefunction must be entirely concentrated at one point x_0:

\psi(x) = \delta(x - x_0)

In terms of p, the Hamiltonian is:

H = \frac{p^2}{2m} + V(x)

It is easy to verify that p acting on x acting on \psi:

p\,(x\,\psi) = -i\hbar\, x\frac{\partial \psi}{\partial x} - i\hbar\,\psi

while x acting on p acting on \psi reproduces only the first term:

x\,(p\,\psi) = -i\hbar\, x\frac{\partial \psi}{\partial x}

so that the difference of the two is not zero:

(p\,x - x\,p)\,\psi = -i\hbar\,\psi

or in terms of operators:

p\,x - x\,p = -i\hbar

Since the time derivative of a state is:

i\hbar\frac{\partial \psi}{\partial t} = H\psi

while the complex conjugate is

-i\hbar\frac{\partial \psi^*}{\partial t} = (H\psi)^*

the time derivative of a matrix element

\frac{d}{dt}\langle \psi_1 | A | \psi_2 \rangle = \frac{i}{\hbar}\,\langle \psi_1 | [H, A] | \psi_2 \rangle

obeys the Heisenberg equation of motion. This establishes the equivalence of the Schrödinger and Heisenberg formalisms, ignoring the mathematical fine points of the limiting procedure for continuous space.
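The operator algebra above can be verified symbolically. A short sketch applying x and p to an arbitrary function:

```python
import sympy as sp

x, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(x)

p = lambda f: -sp.I*hbar*sp.diff(f, x)   # momentum operator: -i hbar d/dx
X = lambda f: x*f                         # position operator: multiply by x

commutator = X(p(psi)) - p(X(psi))        # (x p - p x) acting on psi
print(sp.simplify(commutator))            # i*hbar*psi(x), i.e. xp - px = i hbar
```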

Correspondence principle

Main article: Ehrenfest theorem

The Schrödinger equation satisfies the correspondence principle: in the limit of small-wavelength wavepackets, it reproduces Newton's laws. This is easy to see from the equivalence to matrix mechanics.

All operators in Heisenberg's formalism obey the quantum analog of Hamilton's equations:

\frac{dA}{dt} = \frac{i}{\hbar}\,[H, A]

So, in particular, the equations of motion for the X and P operators are:

\frac{dX}{dt} = \frac{P}{m}, \qquad \frac{dP}{dt} = -V'(X)

In the Schrödinger picture, the interpretation of this equation is that it gives the time rate of change of the matrix element between two states when the states change with time. Taking the expectation value in any state shows that Newton's laws hold not only on average, but exactly, for the quantities:

\frac{d\langle X \rangle}{dt} = \frac{\langle P \rangle}{m}, \qquad \frac{d\langle P \rangle}{dt} = -\,\langle V'(X) \rangle
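A numerical illustration of these expectation-value equations (ħ = m = 1, with a harmonic potential, for which the expectation values follow the classical trajectory exactly; grid and step sizes are arbitrary): evolve a displaced Gaussian by split-step FFT and compare ⟨X⟩ with the classical x₀ cos(t).

```python
import numpy as np

# Split-step (FFT) evolution in V = x^2/2 with hbar = m = 1.
N, L, dt, steps = 1024, 40.0, 0.001, 1000
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L/N
k = 2*np.pi*np.fft.fftfreq(N, dx)
V = 0.5 * x**2

x0 = 2.0
psi = np.exp(-(x - x0)**2 / 2).astype(complex)   # coherent state at x0, at rest
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(steps):
    psi *= np.exp(-0.5j*dt*V)                               # half potential step
    psi = np.fft.ifft(np.exp(-0.5j*dt*k**2)*np.fft.fft(psi))  # full kinetic step
    psi *= np.exp(-0.5j*dt*V)                               # half potential step

t = steps * dt
mean_x = np.sum(x * np.abs(psi)**2) * dx
print(mean_x, x0*np.cos(t))   # <X>(t) matches the classical trajectory
```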

Relativity

Main article: Relativistic wave equations

The Schrödinger equation does not take into account relativistic effects; as a wave equation, it is invariant under a Galilean transformation, but not under a Lorentz transformation. But in order to include relativity, the physical picture must be altered in a radical way.

The Klein–Gordon equation uses the relativistic mass-energy relation (in natural units):

E^2 = p^2 + m^2

to produce the differential equation:

\frac{\partial^2 \psi}{\partial t^2} = \nabla^2\psi - m^2\psi

which is relativistically invariant, but second order in time, and so cannot be an equation for the quantum state. This equation also has the property that there are solutions with both positive and negative frequency; a plane wave solution obeys:

\omega^2 = k^2 + m^2

which has two solutions, one with positive frequency the other with negative frequency. This is a disaster for quantum mechanics, because it means that the energy is unbounded below.

A more sophisticated attempt to solve this problem uses a first order wave equation, the Dirac equation, but again there are negative energy solutions. In order to solve this problem, it is essential to go to a multiparticle picture, and to consider the wave equations as equations of motion for a quantum field, not for a wavefunction.

The reason is that relativity is incompatible with a single-particle picture. A relativistic particle cannot be localized to a small region without the particle number becoming indefinite. When a particle is localized in a box of length L, the momentum is uncertain by an amount roughly proportional to h/L, by the uncertainty principle. This leads to an energy uncertainty of hc/L, when |p| is large enough that the mass of the particle can be neglected. This uncertainty in energy is equal to the mass-energy of the particle when

\frac{hc}{L} = mc^2, \qquad L = \frac{h}{mc}

and this is called the Compton wavelength. Below this length, it is impossible to localize a particle and be sure that it stays a single particle, since the energy uncertainty is large enough to produce more particles from the vacuum by the same mechanism that localizes the original particle.

But there is another approach to relativistic quantum mechanics which does allow you to follow single particle paths, and it was discovered within the path-integral formulation. If the integration paths in the path integral include paths which move both backwards and forwards in time as a function of their own proper time, it is possible to construct a purely positive frequency wavefunction for a relativistic particle. This construction is appealing, because the equation of motion for the wavefunction is exactly the relativistic wave equation, but with a nonlocal constraint that separates the positive and negative frequency solutions. The positive frequency solutions travel forward in time, the negative frequency solutions travel backwards in time. In this way, they both analytically continue to a statistical field correlation function, which is also represented by a sum over paths. But in real space, they are the probability amplitudes for a particle to travel between two points, and can be used to generate the interaction of particles in a point-splitting and joining framework. The relativistic particle point of view is due to Richard Feynman.

Feynman's method also constructs the theory of quantized fields, but from a particle point of view. In this theory, the equations of motion for the field can be interpreted as the equations of motion for a wavefunction only with caution: the wavefunction is only defined globally, and is in some way related to the particle's proper time. The notion of a localized particle is also delicate: a localized particle in the relativistic particle path integral corresponds to the state produced when a local field operator acts on the vacuum, and exactly which state is produced depends on the choice of field variables.

Solutions

Some general techniques are:

In some special cases, special methods can be used:

Free Schrödinger equation

When the potential is zero, the Schrödinger equation is linear with constant coefficients (in natural units \hbar = 1):

i\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\nabla^2\psi

The solution for any initial condition can be found by Fourier transforms. Because the coefficients are constant, an initial plane wave stays a plane wave; only the coefficient changes:

\psi(x,t) = A(t)\, e^{ikx}

Substituting:

i\frac{dA}{dt} = \frac{k^2}{2m}\,A

so A is also oscillating in time:

A(t) = A(0)\, e^{-i\frac{k^2}{2m} t}

and the solution is:

\psi(x,t) = A(0)\, e^{i(kx - \omega t)}

where \omega = \frac{k^2}{2m}, a restatement of de Broglie's relations.

To find the general solution, write the initial condition as a sum of plane waves by taking its Fourier transform:

\psi(x,0) = \int \tilde\psi(k)\, e^{ikx}\, \frac{dk}{2\pi}

The equation is linear, so each plane wave evolves independently:

\psi(x,t) = \int \tilde\psi(k)\, e^{ikx - i\frac{k^2}{2m}t}\, \frac{dk}{2\pi}

which is the general solution. When complemented by an effective method for taking Fourier transforms, it becomes an efficient algorithm for finding the wavefunction at any future time: Fourier transform the initial condition, multiply by a phase, and transform back.
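The recipe just described, transform, multiply each mode by the phase e^{-ik²t/2m}, and transform back, is a few lines of numerics (ħ = m = 1; the grid and initial Gaussian are arbitrary choices):

```python
import numpy as np

# FFT propagation of the free Schrödinger equation, hbar = m = 1.
N, L, t = 1024, 50.0, 0.5
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L/N
k = 2*np.pi*np.fft.fftfreq(N, dx)

psi0 = np.exp(-x**2) * np.exp(2j*x)     # Gaussian with mean momentum ~2
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# transform, multiply by the phase, transform back
psi_t = np.fft.ifft(np.exp(-0.5j*k**2*t) * np.fft.fft(psi0))

norm = np.sum(np.abs(psi_t)**2) * dx
print(norm)                             # unitary evolution: still 1
```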

Gaussian wavepacket

An easy and instructive example is the Gaussian wavepacket:

\psi(x,0) = e^{-x^2/2a}

where a is a positive real number, the square of the width of the wavepacket. The total normalization of this wavefunction is:

\int \psi^*\psi\, dx = \sqrt{\pi a}

The Fourier transform is a Gaussian again, in terms of the wavenumber k:

\tilde\psi(k) = \sqrt{2\pi a}\; e^{-a k^2/2}

with the physics convention which puts the factors of 2\pi in Fourier transforms in the k-measure.

Each separate wave only phase-rotates in time, so that the time-dependent Fourier-transformed solution is:

\tilde\psi(k,t) = \sqrt{2\pi a}\; e^{-\frac{a k^2}{2} - i\frac{k^2}{2m}t} = \sqrt{2\pi a}\; e^{-\left(a + \frac{it}{m}\right)\frac{k^2}{2}}

The inverse Fourier transform is still a Gaussian, but the parameter a has become complex, and there is an overall normalization factor:

\psi(x,t) = \sqrt{\frac{a}{a + it/m}}\; e^{-\frac{x^2}{2(a + it/m)}}

The branch of the square root is determined by continuity in time: it is the value which is nearest to the positive square root of a. It is convenient to rescale time to absorb m, replacing t/m by t.

The integral of \psi over all space is invariant, because it is the inner product of \psi with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy state, with wavefunction \eta(x), the inner product

\langle \eta | \psi \rangle = \int \eta^*(x)\,\psi(x)\, dx

only changes in time in a simple way: its phase rotates with a frequency determined by the energy of \eta. When \eta has zero energy, like the infinite-wavelength wave, it doesn't change at all.

The sum of the absolute square of \psi is also invariant, which is a statement of the conservation of probability. Explicitly in one dimension:

|\psi(x,t)|^2 = \frac{a}{\sqrt{a^2 + t^2}}\; e^{-\frac{a x^2}{a^2 + t^2}}

which gives the norm:

\int |\psi|^2\, dx = \sqrt{\pi a}

which has preserved its value, as it must.

The width of the Gaussian is the interesting quantity, and it can be read off from the form of |\psi|^2:

\sqrt{\frac{a^2 + t^2}{a}}

The width eventually grows linearly in time, as t/\sqrt{a}. This is wave-packet spreading: no matter how narrow the initial wavefunction, a Schrödinger wave eventually fills all of space. The linear growth is a reflection of the momentum uncertainty: the wavepacket is confined to a narrow width \sqrt{a}, and so has a momentum which is uncertain by the reciprocal amount 1/\sqrt{a}, a spread in velocity of 1/m\sqrt{a}, and therefore in the future position by t/m\sqrt{a}, where the factor of m has been restored by undoing the earlier rescaling of time.
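The spreading law can be confirmed numerically (ħ = m = 1, so t/m = t; the values of a and t below are arbitrary): evolve ψ₀ = e^{-x²/2a} with the exact free propagator in Fourier space, and compare the measured variance of |ψ|² with (a² + t²)/2a, half the squared width read off above.

```python
import numpy as np

# Exact free evolution in Fourier space, hbar = m = 1; a, t are illustrative.
a, t = 1.0, 2.0
N, L = 2048, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = L/N
k = 2*np.pi*np.fft.fftfreq(N, dx)

psi = np.exp(-x**2/(2*a)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

psi = np.fft.ifft(np.exp(-0.5j*k**2*t) * np.fft.fft(psi))
rho = np.abs(psi)**2
var = np.sum(x**2 * rho) * dx            # <x^2>, since <x> = 0 by symmetry

print(var, (a**2 + t**2)/(2*a))          # the two agree
```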

Galilean invariance

Galilean boosts are transformations which look at the system from the point of view of an observer moving with a steady velocity -v. A boost must change the physical properties of a wavepacket in the same way as in classical mechanics:

p' = p + mv, \qquad x' = x + vt

So the phase factor of a free Schrödinger plane wave:

e^{i\left(px - \frac{p^2}{2m}t\right)}

is only different in the boosted coordinates by a phase which depends on x and t, but not on p.

An arbitrary superposition of plane wave solutions with different values of p is the same superposition of boosted plane waves, up to an overall x,t-dependent phase factor. So any solution to the free Schrödinger equation, \psi(x,t), can be boosted into other solutions:

\psi_v(x,t) = e^{i\left(mvx - \frac{mv^2}{2}t\right)}\, \psi(x - vt,\, t)

Boosting a constant wavefunction produces a plane wave. More generally, boosting a plane wave:

\psi(x,t) = e^{i\left(kx - \frac{k^2}{2m}t\right)}

produces a boosted wave:

\psi_v(x,t) = e^{i\left((k + mv)x - \frac{(k + mv)^2}{2m}t\right)}

Boosting the spreading Gaussian wavepacket:

\psi(x,t) = \sqrt{\frac{a}{a + it/m}}\; e^{-\frac{x^2}{2(a + it/m)}}

produces the moving Gaussian:

\psi_v(x,t) = \sqrt{\frac{a}{a + it/m}}\; e^{i\left(mvx - \frac{mv^2}{2}t\right)}\; e^{-\frac{(x - vt)^2}{2(a + it/m)}}

which spreads in the same way.

Free propagator

The narrow-width limit of the Gaussian wavepacket solution is the propagator K. For other differential equations, this is sometimes called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K. When a is the infinitesimal quantity \epsilon, the Gaussian initial condition, rescaled so that its integral is one:

\psi_0(x) = \frac{1}{\sqrt{2\pi\epsilon}}\; e^{-\frac{x^2}{2\epsilon}}

becomes a delta function, so that its time evolution:

K_t(x) = \frac{1}{\sqrt{2\pi(\epsilon + it)}}\; e^{-\frac{x^2}{2(\epsilon + it)}}

gives the propagator.

Note that a very narrow initial wavepacket instantly becomes infinitely wide, with a phase which is more rapidly oscillatory at large values of x. This might seem strange--- the solution goes from being concentrated at one point to being everywhere at all later times, but it is a reflection of the momentum uncertainty of a localized particle. Also note that the norm of the wavefunction is infinite, but this is also correct since the square of a delta function is divergent in the same way.

The factor of ε is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that ε becomes zero, K becomes purely oscillatory and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit is only to be taken after the final state is calculated.

The propagator is the amplitude for reaching point x at time t, when starting at the origin, x=0. By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only translated:

In the limit when t is small, the propagator converges to a delta function:

but only in the sense of distributions. The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of K is equal to 1 at all times:

since this integral is the inner-product of K with the uniform wavefunction. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit is taken after everything else.

So the propagation kernel is the future time evolution of a delta function, and it is continuous in a sense: it converges to the initial delta function at small times. If the initial wavefunction is an infinitely narrow spike at position :

it becomes the oscillatory wave:

Since every function can be written as a sum of narrow spikes:

the time evolution of every function is determined by the propagation kernel:

And this is an alternate way to express the general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at y, times the amplitude that it went from y to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the initial condition.
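A minimal numerical sketch of this convolution, assuming ħ = m = 1 and an illustrative grid: the free propagator K(x − y; t) = exp(i(x − y)²/2t)/√(2πit) is applied to a normalized Gaussian, and the result is checked for unit norm and the expected spread.

```python
import numpy as np

# Propagate an initial Gaussian by direct convolution with the free kernel
# K(x - y; t) = exp(i (x-y)^2 / (2t)) / sqrt(2 pi i t), hbar = m = 1.
# Grid parameters are illustrative.
N, Lx = 1600, 64.0
dx = Lx / N
x = (np.arange(N) - N // 2) * dx

a, t = 1.0, 5.0
psi0 = np.exp(-x**2 / (2 * a**2)).astype(complex)
psi0 /= np.sqrt((np.abs(psi0)**2).sum() * dx)

# Kernel as a matrix in (x, y); complex arithmetic handles the i in the sqrt.
X, Y = np.meshgrid(x, x, indexing="ij")
K = np.exp(1j * (X - Y)**2 / (2 * t)) / np.sqrt(2j * np.pi * t)
psi_t = K @ psi0 * dx                 # psi(x,t) = int K(x-y;t) psi0(y) dy

rho = np.abs(psi_t)**2
norm = rho.sum() * dx                 # evolution is unitary: stays 1
sigma = np.sqrt((x**2 * rho).sum() * dx / norm)
print(norm, sigma)                    # sigma^2 should be (a^2 + t^2/a^2)/2
```

The oscillatory integral converges on the grid because the smooth Gaussian cuts off the rapid phase at large y.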

Since the amplitude to travel from x to y after a time t + t' can be considered in two steps, the propagator obeys the identity:

which can be interpreted as follows: the amplitude to travel from x to z in time t+t' is the sum of the amplitude to travel from x to y in time t multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.

Analytic continuation to diffusion[editar | editar código-fonte]

The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is random walking, the probability density function at any point satisfies the diffusion equation:

where the factor of 2, which can be removed by rescaling either time or space, is only for convenience.

A solution of this equation is the spreading gaussian:

and since its integral is constant while its width becomes narrow at small times, this function approaches a delta function at t=0:

again, only in the sense of distributions, so that

for any smooth test function f.

The spreading Gaussian is the propagation kernel for the diffusion equation and it obeys the convolution identity:

which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H:

which is the infinitesimal diffusion operator.
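The convolution identity for the diffusion kernel is easy to verify directly. A sketch using the text's convention ∂ρ/∂t = ½ ∂²ρ/∂x², whose kernel is K(x; t) = exp(−x²/2t)/√(2πt):

```python
import numpy as np

# Semigroup/convolution identity for the heat kernel of
# d rho/dt = (1/2) d^2 rho/dx^2 (the factor-of-2 convention in the text):
# K_t * K_t' = K_{t+t'}.
x = np.linspace(-30, 30, 6001)
dx = x[1] - x[0]

def K(t):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

t1, t2 = 1.0, 2.0
conv = np.convolve(K(t1), K(t2), mode="same") * dx
err = np.max(np.abs(conv - K(t1 + t2)))
print(err)   # small discretization error
```

The same identity, continued to imaginary time, is the Schrödinger convolution identity of the previous section.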

A matrix has two indices, which in continuous space makes it a function of x and x'. In this case, because of translation invariance, the matrix elements of K depend only on the difference of the positions, and a convenient abuse of notation is to refer to the operator, the matrix elements, and the function of the difference by the same name:

Translation invariance means that continuous matrix multiplication:

is really convolution:
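A small numerical illustration of this point (illustrative sizes; the periodic case, where the translation-invariant matrix is a circulant): acting with the matrix agrees with circular convolution, computed independently via the discrete Fourier transform.

```python
import numpy as np

# A translation-invariant operator has matrix elements M[i, j] = kernel[i - j]
# (indices mod n on a periodic grid), so acting with M is convolution.
n = 64
rng = np.random.default_rng(0)
kernel = rng.standard_normal(n)
v = rng.standard_normal(n)

# Matrix whose entries depend only on the index difference.
M = np.array([[kernel[(i - j) % n] for j in range(n)] for i in range(n)])

# Circular convolution computed independently with the Fourier transform.
conv = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(v)))

print(np.allclose(M @ v, conv))   # True
```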

The exponential can be defined over a range of t's which include complex values, so long as integrals over the propagation kernel stay convergent.

As long as the real part of z is positive, for large values of x K is exponentially decreasing and integrals over K are absolutely convergent.

The limit of this expression for z coming close to the pure imaginary axis is the Schrödinger propagator:

and this gives a more conceptual explanation for the time evolution of Gaussians. From the fundamental identity of exponentiation, or path integration:

holds for all complex z values where the integrals are absolutely convergent so that the operators are well defined.

So that quantum evolution starting from a Gaussian, which is the diffusion kernel K:

gives the time evolved state:

This explains the diffusive form of the Gaussian solutions:

Variational principle[editar | editar código-fonte]

The variational principle asserts that for any Hermitian matrix A, the eigenvector corresponding to the lowest eigenvalue minimizes the quantity:

on the unit sphere . This follows from the method of Lagrange multipliers: at the minimum, the gradient of the function is parallel to the gradient of the constraint:

which is the eigenvalue condition

so that the extreme values of a quadratic form A are the eigenvalues of A, and the value of the function at the extreme values is just the corresponding eigenvalue:

When the hermitian matrix is the Hamiltonian, the minimum value is the lowest energy level.
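A quick numerical illustration with a random Hermitian matrix (sizes are illustrative): the Rayleigh quotient at the lowest eigenvector equals the lowest eigenvalue, and random trial vectors never fall below it.

```python
import numpy as np

# Variational principle: for Hermitian A, the minimum of <v|A|v> over
# unit vectors is the lowest eigenvalue.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
A = (B + B.conj().T) / 2                      # random Hermitian matrix

evals, evecs = np.linalg.eigh(A)              # eigenvalues in ascending order
ground = evecs[:, 0]

def rq(v):
    """Rayleigh quotient <v|A|v> / <v|v>."""
    return np.real(v.conj() @ A @ v) / np.real(v.conj() @ v)

trials = [rq(rng.standard_normal(6) + 1j * rng.standard_normal(6))
          for _ in range(1000)]
print(rq(ground), evals[0], min(trials))
```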

In the space of all wavefunctions, the unit sphere is the space of all normalized wavefunctions , and the ground state minimizes

or, after an integration by parts,

The stationary points come in complex conjugate pairs, since the integrand is real. Since a stationary point and its complex conjugate share the same eigenvalue, any linear combination of the two is also a stationary point, and in particular the real and imaginary parts are each stationary points.

Potential and ground state[editar | editar código-fonte]

For a particle in a positive definite potential, the ground state wavefunction is real and positive, and has a dual interpretation as the probability density for a diffusion process. The analogy between diffusion and nonrelativistic quantum motion, originally discovered and exploited by Schrödinger, has led to many exact solutions.

A positive definite wavefunction:

is a solution to the time-independent Schrödinger equation with m=1 and potential:

with zero total energy. Here W is the logarithm of the ground state wavefunction. The second derivative term is higher order in ħ, and ignoring it gives the semi-classical approximation.

The form of the ground state wavefunction is motivated by the observation that the ground state wavefunction is the Boltzmann probability for a different problem, the probability for finding a particle diffusing in space with the free-energy at different points given by W. If the diffusion obeys detailed balance and the diffusion constant is everywhere the same, the Fokker-Planck equation for this diffusion is the Schrödinger equation when the time parameter is allowed to be imaginary. This analytic continuation gives the eigenstates a dual interpretation--- either as the energy levels of a quantum system, or the relaxation times for a stochastic equation.
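The imaginary-time picture is also useful computationally: relaxing an arbitrary initial state under exp(−Hτ) projects onto the ground state. A sketch for the harmonic oscillator H = −½ d²/dx² + ½ x² (assuming ħ = m = ω = 1; grid and step sizes are illustrative):

```python
import numpy as np

# Imaginary-time (diffusion-like) relaxation to the ground state of
# H = -(1/2) d^2/dx^2 + (1/2) x^2, with hbar = m = omega = 1.
# Split-step approximation of exp(-H dt); parameters are illustrative.
N, L = 512, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2
dt = 0.01

psi = np.exp(-((x - 1.0) ** 2))               # arbitrary starting guess
for _ in range(2000):
    psi = np.exp(-0.5 * V * dt) * psi          # half step of the potential
    psi = np.fft.ifft(np.exp(-0.5 * k**2 * dt) * np.fft.fft(psi)).real
    psi = np.exp(-0.5 * V * dt) * psi          # second potential half step
    psi /= np.sqrt((psi**2).sum() * dx)        # renormalize each step

# Energy expectation value; the excited components have decayed away.
Hpsi = np.fft.ifft(0.5 * k**2 * np.fft.fft(psi)).real + V * psi
E0 = (psi * Hpsi).sum() * dx
print(E0)   # approaches 1/2
```

The slowest-decaying mode under exp(−Hτ) is the lowest eigenstate, which is exactly the "longest relaxation time" interpretation described above.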

Harmonic oscillator[editar | editar código-fonte]

Main article: Quantum harmonic oscillator

W should grow at infinity, so that the wavefunction has a finite integral. The simplest analytic form is:

with an arbitrary constant , which gives the potential:

This potential describes a harmonic oscillator, with the ground state wavefunction:

The total energy is zero, but the potential is shifted by a constant. The ground state energy of the usual unshifted harmonic oscillator potential:

is then the additive constant:

which is the zero point energy of the oscillator.
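The zero point energy can be checked by direct diagonalization. A finite-difference sketch (assuming ħ = m = ω = 1; grid sizes are illustrative) reproduces the spectrum E_n = n + ½:

```python
import numpy as np

# Finite-difference Hamiltonian for H = -(1/2) d^2/dx^2 + (1/2) x^2
# (hbar = m = omega = 1); the low eigenvalues should be n + 1/2.
N, L = 1000, 20.0
dx = L / N
x = (np.arange(N) - N // 2) * dx

# Second-difference kinetic term: diagonal 1/dx^2, off-diagonal -1/(2 dx^2).
H = np.diag(1.0 / dx**2 + 0.5 * x**2)
H += np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
H += np.diag(-0.5 / dx**2 * np.ones(N - 1), -1)

E = np.linalg.eigvalsh(H)[:4]
print(E)   # close to [0.5, 1.5, 2.5, 3.5]
```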

Coulomb potential[editar | editar código-fonte]

Another simple but useful form is

where W is proportional to the radial coordinate. This is the ground state for two different potentials, depending on the dimension. In one dimension, the corresponding potential is singular at the origin, where it has some nonzero density:

and, up to some rescaling of variables, this is the lowest energy state for a delta function potential, with the bound state energy added on.

with the ground state energy:

and the ground state wavefunction:

In higher dimensions, the same form gives the potential:

which can be identified as the attractive Coulomb law, up to an additive constant which is the ground state energy. This is the superpotential that describes the lowest energy level of the Hydrogen atom, once the mass is restored by dimensional analysis:

where is the Bohr radius, with energy

The ansatz

modifies the Coulomb potential to include a quadratic term proportional to , which is useful for nonzero angular momentum.

Operator formalism[editar | editar código-fonte]

Bra-ket notation[editar | editar código-fonte]

Main article: Bra-ket notation

In the mathematical formulation of quantum mechanics, a physical system is fully described by a vector in a complex Hilbert space, the collection of all possible normalizable wavefunctions. The wavefunction is just an alternate name for the vector of complex amplitudes, and only in the case of a single particle in the position representation is it a wave in the usual sense, a wave in space time. For more complex systems, it is a wave in an enormous space of all possible worlds. Two nonzero vectors which are multiples of each other, two wavefunctions which are the same up to rescaling, represent the same physical state.

The wavefunction vector can be written in several ways:

1. as an abstract ket vector:
2. as a list of complex numbers, the components relative to a discrete list of normalizable basis vectors :
3. as a continuous superposition of non-normalizable basis vectors, like position states :

The divide between the continuous basis and the discrete basis can be bridged by limiting arguments. The two can be formally unified by thinking of each as a measure on the real number line.

In the most abstract notation, the Schrödinger equation is written:

which only says that the wavefunction evolves linearly in time, and names the linear operator which gives the time derivative the Hamiltonian H. In terms of the discrete list of coefficients:

which just reaffirms that time evolution is linear, since the Hamiltonian acts by matrix multiplication.

In a continuous representation, the Hamiltonian is a linear operator, which acts by the continuous version of matrix multiplication:

Taking the complex conjugate:

In order for the time-evolution to be unitary, to preserve the inner products, the time derivative of the inner product must be zero:

for an arbitrary state , which requires that H is Hermitian. In a discrete representation this means that . When H is continuous, it should be self-adjoint, which adds the technical requirement that H does not mix up normalizable states with states which violate boundary conditions or which are grossly unnormalizable.

The formal solution of the equation is the matrix exponential (natural units):

For every time-independent Hamiltonian operator, , there exists a set of quantum states, , known as energy eigenstates, and corresponding real numbers satisfying the eigenvalue equation.

This is the time-independent Schrödinger equation.

For the case of a single particle, the Hamiltonian is the following linear operator (natural units):

which is a self-adjoint operator when V is not too singular and does not grow too fast. Self-adjoint operators have real eigenvalues, and their eigenvectors form a complete set, either discrete or continuous.

Expressed in a basis of eigenvectors of H, the Schrödinger equation becomes trivial:

which means that each energy eigenstate is only multiplied by a complex phase:

which is what matrix exponentiation means--- the time evolution acts to rotate the eigenfunctions of H.
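A numerical sketch of this statement (sizes and times are illustrative): evolving in the eigenbasis, where each eigenvector only picks up the phase exp(−iE_n t), agrees with brute-force integration of i dψ/dt = Hψ.

```python
import numpy as np

# Evolution in the energy eigenbasis (hbar = 1):
# psi(t) = V diag(exp(-i E t)) V_dagger psi(0),
# checked against small-step midpoint integration of i dpsi/dt = H psi.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (B + B.conj().T) / 2                      # random Hermitian Hamiltonian

E, V = np.linalg.eigh(H)
t = 1.3
U_exact = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi0 = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi0 /= np.linalg.norm(psi0)

# Independent check: many small midpoint (second-order) steps.
n = 20000
dt = t / n
psi = psi0.copy()
for _ in range(n):
    k1 = -1j * (H @ psi)
    k2 = -1j * (H @ (psi + 0.5 * dt * k1))
    psi = psi + dt * k2

print(np.linalg.norm(psi - U_exact @ psi0))   # small
```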

When H is expressed as a matrix for wavefunctions in a discrete energy basis:

so that:

The physical properties of the C's are extracted by acting with operators, which in this representation are matrices. By redefining the basis so that it rotates with time, the matrices become time dependent; this is the Heisenberg picture.

Galilean invariance[editar | editar código-fonte]

Galilean symmetry requires that H(p) is quadratic in p in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a p-independent phase factor, px - Ht must have a very special form--- translations in p need to be compensated by a shift in H. This is only true when H is quadratic.

The infinitesimal generator of boosts in both the classical and quantum case is:

where the sum is over the different particles, and B,x,p are vectors.

The Poisson bracket/commutator of B with x and p generates infinitesimal boosts, with v the infinitesimal boost velocity vector:

Iterating these relations is simple, since they add a constant amount at each step. By iterating, the dv's incrementally sum up to the finite quantity V:

B divided by the total mass is the current center of mass position minus the time times the center of mass velocity:

In other words, B/M is the current guess for the position that the center of mass had at time zero.

The statement that B doesn't change with time is the center of mass theorem. For a Galilean invariant system, the center of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the center of mass.

Since B is explicitly time dependent, H does not commute with B, rather:

this gives the transformation law for H under infinitesimal boosts:

the interpretation of this formula is that the change in H under an infinitesimal boost is entirely given by the change of the center of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity.

The two quantities (H,P) form a representation of the Galilean group with central charge M, where H and P are classical functions on phase space or quantum mechanical operators, while M is only a parameter. The transformation law for infinitesimal v:

can be iterated as before--- P goes from P to P+MV in infinitesimal increments of v, while H changes at each step by an amount proportional to P, which changes linearly. The final value of H is then changed by the value of P halfway between the starting value and the ending value:

The factors proportional to the central charge M are the extra wavefunction phases.

Boosts give too much information in the single-particle case, since Galilean symmetry completely determines the motion of a single particle. Given a multi-particle time dependent solution:

with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:

For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the center of mass motion to be ignored.

See also[editar | editar código-fonte]

Notes[editar | editar código-fonte]

  1. Schrödinger, Erwin (1926). «An Undulatory Theory of the Mechanics of Atoms and Molecules» (PDF). Phys. Rev. 28 (6): 1049–1070. doi:10.1103/PhysRev.28.1049 
  2. Erwin Schrödinger, Annalen der Physik, (Leipzig) (1926), Main paper
  3. Schrödinger: Life and Thought by Walter John Moore, Cambridge University Press 1992 ISBN 0-521-43767-9, page 219 (hardback version)
  4. Schrödinger: Life and Thought by Walter John Moore, Cambridge University Press 1992 ISBN 0-521-43767-9, page 220
  5. Schrödinger: Life and Thought by Walter John Moore, Cambridge University Press 1992 ISBN 0-521-43767-9, page 479 (hardback version) makes it clear that even in his last year of life, in a letter to Max Born, he never accepted the Copenhagen Interpretation. cf pg 220

References[editar | editar código-fonte]

  • Paul Adrien Maurice Dirac (1958). The Principles of Quantum Mechanics 4th ed. [S.l.]: Oxford University Press 
  • David J. Griffiths (2004). Introduction to Quantum Mechanics 2nd ed. [S.l.]: Benjamin Cummings 
  • Richard Liboff (2002). Introductory Quantum Mechanics 4th ed. [S.l.]: Addison Wesley 
  • David Halliday (2007). Fundamentals of Physics 8th ed. [S.l.]: Wiley 
  • Serway, Moses, and Moyer (2004). Modern Physics 3rd ed. [S.l.]: Brooks Cole 
  • Walter John Moore (1992). Schrödinger: Life and Thought. [S.l.]: Cambridge University Press 
  • Schrödinger, Erwin (1926). «An Undulatory Theory of the Mechanics of Atoms and Molecules». Phys. Rev. 28 (6). 28: 1049–1070. doi:10.1103/PhysRev.28.1049 

External links[editar | editar código-fonte]




Dirac Equation[editar | editar código-fonte]

Predefinição:Quantum field theory In physics, the Dirac equation is a relativistic quantum mechanical wave equation formulated by British physicist Paul Dirac in 1928. It provides a description of elementary spin-½ particles, such as electrons, consistent with both the principles of quantum mechanics and the theory of special relativity. The equation demands the existence of antiparticles and actually predated their experimental discovery, making the discovery of the positron, the antiparticle of the electron, one of the greatest triumphs of modern theoretical physics.

Mathematical formulation[editar | editar código-fonte]

The Dirac equation in the form originally proposed by Dirac is:

where
m is the rest mass of the electron,
c is the speed of light,
p is the momentum operator,
is the reduced Planck's constant,
x and t are the space and time coordinates.

The new elements in this equation are the 4×4 matrices α and β, and the four-component wavefunction ψ. The matrices are all Hermitian and have squares equal to the identity matrix:

and they all mutually anticommute: and . Explicitly,

where i and j are distinct and range from 1 to 3. These matrices, and the form of the wavefunction, have a deep mathematical significance. The algebraic structure represented by the Dirac matrices had been created some 50 years earlier by the English mathematician W. K. Clifford, which in turn had been based on the mid-19th century work of the German mathematician Hermann Grassmann in his "Lineare Ausdehnungslehre" (Theory of Linear Extensions). The latter had been regarded as well-nigh incomprehensible by most of his contemporaries. The appearance of something so seemingly abstract, at such a late date, in such a direct physical manner, amounts to one of the most remarkable chapters in the history of physics.

Comparison with the Schrödinger equation[editar | editar código-fonte]

The Dirac equation is superficially similar to the Schrödinger equation for a free particle:

The left side represents the square of the momentum operator divided by twice the mass, which is the nonrelativistic kinetic energy. If one wants to get a relativistic generalization of this equation, then the space and time derivatives must enter symmetrically, as they do in the relativistic Maxwell equations—the derivatives must be of the same order in space and time. In relativity, the momentum and the energy are the space and time parts of a geometrical space-time vector, the 4-momentum, and they are related by the relativistically invariant relation

which says that the length of this vector is the rest mass m. Replacing E and p by the operators iħ∂/∂t and −iħ∇, as Schrödinger theory requires, we get a relativistic equation:

and the wave function is a relativistic scalar: a complex number which has the same numerical value in all frames. Because the equation is second order in the time derivative, one must specify the initial values of both ψ and its time derivative, not just ψ. This is normal for classical waves, where the initial conditions are the position and velocity. However, in quantum mechanics, the wavefunction is supposed to be the complete description; just knowing the wavefunction should determine the future.

In the Schrödinger theory, the probability density is given by the positive definite expression

and its current by

and the conservation of probability density has a local form:

In a relativistic theory, the probability density and the current must together form a four-vector, so the form of the probability density can be found from the current just by replacing the spatial derivative with a time derivative:

Everything is relativistic now, but the probability density is not positive definite, because the initial values of both ψ and its time derivative can be freely chosen. This expression reduces to Schrödinger's density and current for superpositions of positive frequency waves whose wavelength is long compared to the Compton wavelength, that is, for nonrelativistic motions. It reduces to a negative definite quantity for superpositions of negative frequency waves only. It mixes up both signs when forces which have an appreciable amplitude to produce relativistic motions are involved, at which point scattering can produce particles and antiparticles.

Although it was not a successful description of a single particle, this equation is resurrected in quantum field theory, where it is known as the Klein–Gordon equation, and describes a relativistic spin-0 complex field. The non-positive probability density and current are the charge-density and current, while the particles are described by a mode-expansion.

In order to give the Klein–Gordon equation an interpretation as an equation for the probability amplitude for a single particle to have a given position, the negative frequency solutions need to be interpreted as describing the particle travelling backwards in time, so that they propagate into the past. The equation with this interpretation does not predict the future from the present except in the nonrelativistic limit, rather it places a global constraint on the amplitudes. This can be used to construct a perturbation expansion with particles zipping backwards and forwards in time, the Feynman diagrams, but it does not allow a straightforward wavefunction description, since each particle has its own separate proper time.

Dirac's coup[editar | editar código-fonte]

What is needed, then, is an equation that is first-order in both space and time. One could formally take the relativistic expression for the energy , replace p by its operator equivalent, expand the square root in an infinite series of derivative operators, set up an eigenvalue problem, then solve the equation formally by iterations. Most physicists had little faith in such a process, even if it were technically possible.

As the story goes, Dirac was staring into the fireplace at Cambridge, pondering this problem, when he hit upon the idea of taking the square root of the wave operator thus:

On multiplying out the right side, we see that in order to get all the cross-terms such as to vanish, we must assume

with

Dirac, who had just then been intensely involved with working out the foundations of Heisenberg's matrix mechanics, immediately understood that these conditions could be met if A, B... are matrices, with the implication that the wave function has multiple components. This immediately explained the appearance of two-component wave functions in Pauli's phenomenological theory of spin, something that up until then had been regarded as mysterious, even to Pauli himself. However, one needs at least 4×4 matrices to set up a system with the properties desired—so the wave function had four components, not two, as in the Pauli theory.

Given the factorization in terms of these matrices, one can now write down immediately an equation

with to be determined. Applying again the matrix operator on either side yields

On taking we find that all the components of the wave function individually satisfy the relativistic energy–momentum relation. Thus the sought-for equation that is first-order in both space and time is

With and , we get the Dirac equation.

Comparison with the Pauli theory[editar | editar código-fonte]

The necessity of introducing half-integral spin goes back experimentally to the results of the Stern–Gerlach experiment. A beam of atoms is run through a strong inhomogeneous magnetic field, and the beam splits into N parts depending on the intrinsic angular momentum of the atoms. It was found that for silver atoms, the beam was split in two—the intrinsic angular momentum of the ground state therefore could not be integral, because even if it were as small as possible, 1, the beam would be split into 3 parts, corresponding to atoms with Lz = −1, 0, and +1. The conclusion is that silver atoms have net intrinsic angular momentum of ½. Pauli set up a theory which explained this splitting by introducing a two-component wave function and a corresponding correction term in the Hamiltonian, representing a semi-classical coupling of this wave function to an applied magnetic field, as so:

Here is the applied electromagnetic field, and the three sigmas are Pauli matrices. is the charge of the particle, e.g. for the electron. On squaring out the first term, a residual interaction with the magnetic field is found, along with the usual Hamiltonian of a charged particle interacting with an applied field:

This Hamiltonian is now a 2 × 2 matrix, so the Schrödinger equation based on it,

must use a two-component wave function. Pauli had introduced the sigma matrices

as pure phenomenology. Dirac now had a theoretical argument that implied that spin was somehow the consequence of the marriage of quantum theory to relativity.
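The "squaring out" step above rests on the Pauli-matrix identity (σ·a)(σ·b) = (a·b)1 + iσ·(a×b), which can be checked numerically:

```python
import numpy as np

# Pauli identity behind the squaring:
# (sigma.a)(sigma.b) = (a.b) I + i sigma.(a x b) for any vectors a, b.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(3)
a, b = rng.standard_normal(3), rng.standard_normal(3)

sa = sum(ai * s for ai, s in zip(a, sigma))
sb = sum(bi * s for bi, s in zip(b, sigma))
rhs = np.dot(a, b) * np.eye(2) \
    + 1j * sum(ci * s for ci, s in zip(np.cross(a, b), sigma))
print(np.allclose(sa @ sb, rhs))   # True
```

With a = b = p − eA, the cross term is what produces the residual magnetic interaction mentioned above.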

The Pauli matrices share the same properties as the Dirac matrices—they are all Hermitian, square to 1, and anticommute. This allows one to immediately find a representation of the Dirac matrices in terms of the Pauli matrices:
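A common such representation (an assumption here, since the displayed formula is not reproduced above) takes α_i = [[0, σ_i], [σ_i, 0]] and β = diag(1, 1, −1, −1); the sketch below checks that these are all Hermitian, square to the identity, and mutually anticommute.

```python
import numpy as np

# Dirac-representation matrices built from Pauli matrices (an assumed,
# standard choice): alpha_i = [[0, s_i], [s_i, 0]], beta = diag(I, -I).
I2, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

alphas = [np.block([[Z, s], [s, Z]]) for s in sigma]
beta = np.block([[I2, Z], [Z, -I2]])
mats = alphas + [beta]

ok = all(np.allclose(M, M.conj().T) and np.allclose(M @ M, np.eye(4))
         for M in mats)
ok = ok and all(np.allclose(A @ B + B @ A, 0)
                for i, A in enumerate(mats) for B in mats[i + 1:])
print(ok)   # True
```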

The Dirac equation now may be written as an equation coupling two-component spinors:

Notice that on the diagonal we find the rest energy of the particle. If we set the momentum to zero—that is, bring the particle to rest—then we have

The equations for the individual two-spinors are now decoupled, and we see that the "top" and "bottom" two-spinors are individually eigenfunctions of the energy with eigenvalues equal to plus and minus the rest energy, respectively. The appearance of this negative energy eigenvalue is completely consistent with relativity.

It should be strongly emphasized that this separation in the rest frame is not an invariant statement—the "bottom" two-spinor does not represent antimatter as such in general. The entire four-component spinor represents an irreducible whole—in general, states will have an admixture of positive and negative energy components. If we couple the Dirac equation to an electromagnetic field, as in the Pauli theory, then the positive and negative energy parts will be mixed together, even if they are originally decoupled. Dirac's main problem was to find a consistent interpretation of this mixing. As we shall see below, it brings a new phenomenon into physics—matter/antimatter creation and annihilation.

Covariant form and relativistic invariance[editar | editar código-fonte]

The explicitly covariant form of the Dirac equation is (employing the Einstein summation convention):

In the above, the γ^μ are the Dirac matrices. γ^0 is Hermitian, and the γ^i are anti-Hermitian, with the definition

This may be summarized using the Minkowski metric on spacetime in the form

where the bracket expression means the anticommutator. These are the defining relations of a Clifford algebra over a pseudo-orthogonal 4-d space with metric signature (+,−,−,−). Note that one may also employ the metric form (−,+,+,+) by multiplying all the gammas by a factor of i. At an elementary level, the choice may be regarded as conventional, but there are specific reasons[clarification needed] for preferring the former, both mathematically and for convenience in calculation and physical interpretation. In the literature, one almost always finds the (+,−,−,−) convention in use. The specific Clifford algebra employed in the Dirac equation is known as the Dirac algebra.
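The Clifford relations can be verified numerically in the Dirac representation (taking γ^0 = β and γ^i = βα_i, an assumed standard choice consistent with the signature (+,−,−,−)):

```python
import numpy as np

# Clifford relations {gamma^mu, gamma^nu} = 2 eta^{mu nu} I for the
# Dirac-representation gammas: gamma^0 = beta, gamma^i = beta @ alpha_i.
I2, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
beta = np.block([[I2, Z], [Z, -I2]])
gammas = [beta] + [beta @ np.block([[Z, s], [s, Z]]) for s in sigma]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric signature (+,-,-,-)
ok = all(np.allclose(gammas[m] @ gammas[n] + gammas[n] @ gammas[m],
                     2 * eta[m, n] * np.eye(4))
         for m in range(4) for n in range(4))
print(ok)   # True
```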

The Dirac equation may be interpreted as an eigenvalue expression, where the rest mass is proportional to an eigenvalue of the 4-momentum operator, the proportion being the speed of light in vacuo:

In practice, physicists often use units of measure such that and c are equal to 1, known as "natural" units. The equation then takes the simple form

or, if Feynman slash notation is employed,

A fundamental theorem states that if two distinct sets of matrices are given that both satisfy the Clifford relations, then they are connected to each other by a similarity transformation:

If in addition the matrices are all unitary, as are the Dirac set, then U itself is unitary;

The transformation U is unique up to a multiplicative factor of absolute value 1. Let us now imagine a Lorentz transformation to have been performed on the derivative operators, which form a covariant vector. In order that the operator remain invariant, the gammas must transform among themselves as a contravariant vector with respect to their spacetime index. These new gammas will themselves satisfy the Clifford relations, because of the orthogonality of the Lorentz transformation. By the fundamental theorem, we may replace the new set by the old set subject to a unitary transformation. In the new frame, remembering that the rest mass is a relativistic scalar, the Dirac equation will then take the form

If we now define the transformed spinor

then we have the transformed Dirac equation

Thus, once we settle on a unitary representation of the gammas, it is final, provided we transform the spinor according to the unitary transformation that corresponds to the given Lorentz transformation.

These considerations reveal the origin of the gammas in geometry, hearkening back to Grassmann's original motivation - they represent a fixed basis of unit vectors in spacetime. Similarly, products of the gammas such as represent oriented surface elements, and so on. With this in mind, we can find the form of the unit volume element on spacetime in terms of the gammas as follows. By definition, it is

In order that this be an invariant, the epsilon symbol must be a tensor, and so must contain a factor of √g, where g is the determinant of the metric tensor. Since this is negative, that factor is imaginary. Thus

This matrix is given the special symbol γ⁵, owing to its importance when one is considering improper transformations of spacetime, that is, those that change the orientation of the basis vectors. In the representation we are using for the gammas, it is

Also note that we could as easily have taken the negative square root of the determinant of g - the choice amounts to an initial handedness convention.

Lorentz Invariance of the Dirac equation[editar | editar código-fonte]

The Lorentz invariance of the Dirac equation follows from its covariant nature.

Comparison with the Klein-Gordon equation[editar | editar código-fonte]

In Feynman slash notation the Klein-Gordon equation:

can be factorised as:

The last factor is simply the Dirac equation. Hence any solution to the Dirac equation is automatically a solution to the Klein-Gordon equation, but the converse is not true: that is, not all solutions to the Klein–Gordon equation solve the Dirac equation.
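The factorisation can be checked at the level of plane waves, where i∂ becomes a 4-momentum p: then p̸p̸ = (p·p)1, so (p̸ − m)(p̸ + m) = (p² − m²)1. A numerical sketch in the Dirac representation (an assumed standard choice of gammas):

```python
import numpy as np

# For any 4-momentum p, pslash @ pslash = (p.p) I, hence
# (pslash - m)(pslash + m) = (p.p - m^2) I: Dirac solutions solve Klein-Gordon.
I2, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
beta = np.block([[I2, Z], [Z, -I2]])
gammas = [beta] + [beta @ np.block([[Z, s], [s, Z]]) for s in sigma]

rng = np.random.default_rng(4)
p = rng.standard_normal(4)               # random contravariant 4-momentum
eta = np.diag([1.0, -1.0, -1.0, -1.0])
p_lower = eta @ p                        # covariant components p_mu
pslash = sum(pm * g for pm, g in zip(p_lower, gammas))

m = 1.7
lhs = (pslash - m * np.eye(4)) @ (pslash + m * np.eye(4))
rhs = (p @ eta @ p - m**2) * np.eye(4)
print(np.allclose(lhs, rhs))   # True
```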

Adjoint equation and Dirac current[editar | editar código-fonte]

By defining the adjoint spinor

where ψ† denotes the complex conjugate and transpose of ψ, and noticing that

,

we obtain, by taking the Hermitian conjugate of the Dirac equation and multiplying from the right by , the adjoint equation:

where is understood to act to the left. Multiplying the Dirac equation by from the left, and the adjoint equation by from the right, and adding, produces the law of conservation of the Dirac current in covariant form:

Now we see the great advantage of the first-order equation over the one Schrödinger had tried - this is the conserved probability current density required by relativistic invariance, only now its 0-component is positive definite:

The Dirac equation and its adjoint are the Euler–Lagrange equations of the 4-d invariant action integral

S = \int \mathcal{L}\,d^4x

where the scalar \mathcal{L} is the Dirac Lagrangian

\mathcal{L} = \bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi

and for the purposes of variation, \psi and \bar\psi are regarded as independent fields. The relativistic invariance also follows immediately from the variational principle.

Coupling to an electromagnetic field[editar | editar código-fonte]

To consider problems in which an applied electromagnetic field interacts with the particles described by the Dirac equation, one uses the correspondence principle, and takes over into the theory the corresponding expression from classical mechanics, whereby the total momentum of a charged particle in an external field is modified as so:

p^\mu \to p^\mu - qA^\mu

(where q is the charge of the particle; for example, q = -e for an electron). In natural units, the Dirac equation then takes the form

\gamma^\mu\left(i\partial_\mu - qA_\mu\right)\psi = m\psi
The validity of this prescription is confirmed experimentally with great precision. It is known as minimal coupling, and is found throughout particle physics. Indeed, while the introduction of the electromagnetic field in this way is essentially phenomenological in this context, it rises to a fundamental principle in quantum field theory.

Now as stated above, the transformation U is defined only up to a phase factor e^{i\alpha}. Also, the fundamental observable of the Dirac theory, the current, is unchanged if we multiply the wave function by an arbitrary phase. We may exploit this to get the form of the mutual interaction of a Dirac particle and the electromagnetic field, as opposed to simply considering a Dirac particle in an applied field, by assuming this arbitrary phase factor to depend continuously on position:

\psi(x) \to e^{iq\alpha(x)}\,\psi(x)

Notice now that

\partial_\mu\left(e^{iq\alpha(x)}\psi\right) = e^{iq\alpha(x)}\left(\partial_\mu + iq\,\partial_\mu\alpha\right)\psi

In order to preserve minimal coupling, we must add to the potential a term proportional to the gradient of the phase, A_\mu \to A_\mu + \partial_\mu\alpha. But we know from electrodynamics that this leaves the electromagnetic field itself invariant. The value of the phase is arbitrary, but not how it changes from place to place. This is the starting point of gauge theory, which is the main principle on which quantum field theory is based. The simplest such theory, and the one most thoroughly understood, is known as quantum electrodynamics. The equations of field theory thus have invariance under both Lorentz transformations and gauge transformations.

Curved spacetime Dirac equation[editar | editar código-fonte]

The Dirac equation can be written in curved spacetime using vierbein fields. Vierbeins describe a local frame that makes it possible to define Dirac matrices at every point. Contracting these matrices with vierbeins gives the right transformation properties. This way Dirac's equation takes the following form in curved spacetime [1]:

i\gamma^a e_a{}^\mu D_\mu\Psi - m\Psi = 0

Here e_a{}^\mu is the vierbein and D_\mu is the covariant derivative for fermion fields, defined as follows

D_\mu = \partial_\mu - \frac{i}{4}\,\omega_\mu{}^{ab}\,\sigma_{ab}

where \eta_{ab} is the Lorentzian metric, \sigma_{ab} is the commutator of Dirac matrices:

\sigma_{ab} = \frac{i}{2}\left[\gamma_a, \gamma_b\right]

and \omega_\mu{}^{ab} is the spin connection:

\omega_\mu{}^{ab} = e_\nu{}^a\left(\partial_\mu e^{\nu b} + \Gamma^\nu_{\mu\sigma}\,e^{\sigma b}\right)

where \Gamma^\nu_{\mu\sigma} is the Christoffel symbol. Note that here, Latin letters denote the "Lorentzian" indices and Greek ones denote "Riemannian" indices.

Physical interpretation[editar | editar código-fonte]

The Dirac theory, while providing a wealth of information that is accurately confirmed by experiments, nevertheless introduces a new physical paradigm that appears at first difficult to interpret and even paradoxical. Some of these issues of interpretation must be regarded as open questions. Here we will see how the Dirac theory brilliantly answered some of the outstanding issues in physics at the time it was put forward, while posing others that are still the subject of debate.

Identification of observables[editar | editar código-fonte]

The critical physical question in a quantum theory is - what are the physically observable quantities defined by the theory? According to general principles, such quantities are defined by Hermitian operators that act on the Hilbert space of possible states of a system. The eigenvalues of these operators are then the possible results of measuring the corresponding physical quantity. In the Schrödinger theory, the simplest such object is the overall Hamiltonian, which represents the total energy of the system. If we wish to maintain this interpretation on passing to the Dirac theory, we must take the Hamiltonian to be

H = \gamma^0\left[m + \boldsymbol{\gamma}\cdot\left(\mathbf{p} - q\mathbf{A}\right)\right] + qA^0

This looks promising, because we see by inspection the rest energy of the particle and, in the case \mathbf{A} = 0, the energy of a charge placed in an electric potential qA^0. What about the term involving the vector potential? In classical electrodynamics, the energy of a charge moving in an applied potential is

E = \sqrt{\left(\mathbf{p} - q\mathbf{A}\right)^2 + m^2} + qA^0

Thus the Dirac Hamiltonian is fundamentally distinguished from its classical counterpart, and we must take great care to correctly identify what is an observable in this theory. Much of the apparent paradoxical behavior implied by the Dirac equation amounts to a misidentification of these observables. Let us now describe one such effect.

Energy per Oscillator[editar | editar código-fonte]

Predefinição:Cleanup-section In quantum mechanics, the average energy E per oscillator is given by:

E = \frac{hF}{e^{hF/(RT)} - 1}

Where

T is the temperature
h is Planck's constant
R is the universal gas constant
F is the oscillator frequency
is the number of oscillators at the lowest energy

If we let x = hF/(RT), then

E = RT\,\frac{x}{e^x - 1}

and then

E \to RT as x \to 0

If hF \ll RT, so that x \ll 1, then E satisfies:

E \approx RT

and, in the opposite limit hF \gg RT,

E \approx hF\,e^{-hF/(RT)}
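The two limiting behaviours can be checked numerically. This is a minimal sketch; avg_energy is a hypothetical helper, and since only the ratio of the two energy scales matters, both are passed as plain numbers in the same units:

```python
import math

def avg_energy(hF, RT):
    """Average energy per oscillator, E = hF / (exp(hF/RT) - 1)."""
    return hF / math.expm1(hF / RT)

# Classical (equipartition) limit hF << RT: E approaches RT
assert abs(avg_energy(1e-6, 1.0) - 1.0) < 1e-3

# Quantum limit hF >> RT: E is exponentially suppressed
assert avg_energy(50.0, 1.0) < 1e-15
```

Using math.expm1 rather than exp(x) - 1 keeps the small-x limit numerically accurate.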

History[editar | editar código-fonte]

Since the Dirac equation was originally invented to describe the electron, we will generally speak of "electrons" in this article. The equation also applies to quarks, which are also elementary spin-½ particles. A modified Dirac equation can be used to approximately describe protons and neutrons, which are not elementary particles (they are made up of quarks), but have a net spin of ½. Another modification of the Dirac equation, called the Majorana equation, is thought to describe neutrinos — also spin-½ particles.

The Dirac equation describes the probability amplitudes for a single electron. This is a single-particle theory; in other words, it does not account for the creation and destruction of the particles. It gives a good prediction of the magnetic moment of the electron and explains much of the fine structure observed in atomic spectral lines. It also explains the spin of the electron. Two of the four solutions of the equation correspond to the two spin states of the electron. The other two solutions make the peculiar prediction that there exist an infinite set of quantum states in which the electron possesses negative energy. This strange result led Dirac to predict, via a remarkable hypothesis known as "hole theory," the existence of particles behaving like positively-charged electrons. Dirac thought at first these particles might be protons. He was chagrined when the strict prediction of his equation (which actually specifies particles of the same mass as the electron) was verified by the discovery of the positron in 1932. When asked later why he hadn't actually boldly predicted the yet unfound positron with its correct mass, Dirac answered "Pure cowardice!" He shared the Nobel Prize anyway, in 1933.

Despite these successes, Dirac's theory is flawed by its neglect of the possibility of creating and destroying particles, one of the basic consequences of relativity. This difficulty is resolved by reformulating it as a quantum field theory. Adding a quantized electromagnetic field to this theory leads to the theory of quantum electrodynamics (QED). Moreover, as a single-particle theory the equation cannot fully account for the negative-energy solutions, but is restricted to positive-energy particles.

A similar equation for spin 3/2 particles is called the Rarita-Schwinger equation.

Hole theory[editar | editar código-fonte]

The negative E solutions found in the preceding section are problematic, for it was assumed that the particle has a positive energy. Mathematically speaking, however, there seems to be no reason for us to reject the negative-energy solutions. Since they exist, we cannot simply ignore them, for once we include the interaction between the electron and the electromagnetic field, any electron placed in a positive-energy eigenstate would decay into negative-energy eigenstates of successively lower energy by emitting excess energy in the form of photons. Real electrons obviously do not behave in this way.

To cope with this problem, Dirac introduced the hypothesis, known as hole theory, that the vacuum is the many-body quantum state in which all the negative-energy electron eigenstates are occupied. This description of the vacuum as a "sea" of electrons is called the Dirac sea. Since the Pauli exclusion principle forbids electrons from occupying the same state, any additional electron would be forced to occupy a positive-energy eigenstate, and positive-energy electrons would be forbidden from decaying into negative-energy eigenstates.

Dirac further reasoned that if the negative-energy eigenstates are incompletely filled, each unoccupied eigenstate – called a hole – would behave like a positively charged particle. The hole possesses a positive energy, since energy is required to create a particle–hole pair from the vacuum. As noted above, Dirac initially thought that the hole might be the proton, but Hermann Weyl pointed out that the hole should behave as if it had the same mass as an electron, whereas the proton is over 1800 times heavier. The hole was eventually identified as the positron, experimentally discovered by Carl Anderson in 1932.

It is not entirely satisfactory to describe the "vacuum" using an infinite sea of negative-energy electrons. The infinitely negative contributions from the sea of negative-energy electrons have to be canceled by an infinite positive "bare" energy, and the contribution to the charge density and current coming from the sea of negative-energy electrons is exactly canceled by an infinite positive "jellium" background, so that the net electric charge density of the vacuum is zero. In quantum field theory, a Bogoliubov transformation on the creation and annihilation operators (turning an occupied negative-energy electron state into an unoccupied positive-energy positron state and an unoccupied negative-energy electron state into an occupied positive-energy positron state) allows us to bypass the Dirac sea formalism even though, formally, it is equivalent to it.

In certain applications of condensed matter physics, however, the underlying concepts of "hole theory" are valid. The sea of conduction electrons in an electrical conductor, called a Fermi sea, contains electrons with energies up to the chemical potential of the system. An unfilled state in the Fermi sea behaves like a positively-charged electron, though it is referred to as a "hole" rather than a "positron". The negative charge of the Fermi sea is balanced by the positively-charged ionic lattice of the material.

Dirac bilinears[editar | editar código-fonte]

There are five different (neutral) Dirac bilinear terms not involving any derivatives:

  • (S)calar: \bar\psi\psi (scalar, P-even)
  • (P)seudoscalar: \bar\psi\gamma^5\psi (scalar, P-odd)
  • (V)ector: \bar\psi\gamma^\mu\psi (vector, P-even)
  • (A)xial: \bar\psi\gamma^5\gamma^\mu\psi (vector, P-odd)
  • (T)ensor: \bar\psi\sigma^{\mu\nu}\psi (antisymmetric tensor, P-even),

where \sigma^{\mu\nu} = \frac{i}{2}\left[\gamma^\mu, \gamma^\nu\right] and \gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3.

A Dirac mass term is an S coupling. A Yukawa coupling may be S or P. The electromagnetic coupling is V. The weak interactions are V-A.

See also[editar | editar código-fonte]

References[editar | editar código-fonte]

  1. Lawrie, Ian D. A Unified Grand Tour of Theoretical Physics. [S.l.: s.n.] 

Selected papers[editar | editar código-fonte]

Textbooks[editar | editar código-fonte]

  • Halzen, Francis; Martin, Alan (1984). Quarks & Leptons: An Introductory Course in Modern Particle Physics. [S.l.]: John Wiley & Sons. ISBN 
  • Dirac, P.A.M., Principles of Quantum Mechanics, 4th edition (Clarendon, 1982)
  • Shankar, R., Principles of Quantum Mechanics, 2nd edition (Plenum, 1994)
  • Bjorken, J D & Drell, S, Relativistic Quantum mechanics
  • Thaller, B., The Dirac Equation, Texts and Monographs in Physics (Springer, 1992)
  • Schiff, L.I., Quantum Mechanics, 3rd edition (McGraw-Hill, 1955)
  • Griffiths, D.J., Introduction to Elementary Particles (Wiley, John & Sons, Inc., 1987) ISBN 0-471-60386-4.

External links[editar | editar código-fonte]

Category:Fundamental physics concepts Category:Quantum field theory Category:Spinors Category:Partial differential equations Category:Fermions pt:Equação de Dirac


Wavelets[editar | editar código-fonte]

A non-technical definition of a wavelet is that it is a wave with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "shift, multiply and sum" technique called convolution, with portions of an unknown signal to extract information from the unknown signal. For example, a wavelet could be created to have a frequency of "Middle C" and a short duration of roughly a 32nd note. If this wavelet were to be convolved at periodic intervals with a signal created from the recording of a song, then the results of these convolutions would be useful for determining when the "Middle C" note was being played in the song. Mathematically the wavelet will resonate if the unknown signal contains information of similar frequency - just as a tuning fork physically resonates with sound waves of its specific tuning frequency. This concept of resonance is at the core of many practical applications of wavelet theory.
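The "shift, multiply and sum" idea described above can be sketched in a few lines of NumPy. The sample rate, note placement, and wavelet width below are illustrative assumptions, not values from the text:

```python
import numpy as np

fs = 8000                          # sample rate in Hz (assumed for this sketch)
t = np.arange(0, 1.0, 1 / fs)

# Signal: "Middle C" (~261.63 Hz) playing only in the middle third of a second
sig = np.zeros_like(t)
mid = (t > 0.33) & (t < 0.66)
sig[mid] = np.sin(2 * np.pi * 261.63 * t[mid])

# A short Morlet-like wavelet tuned to Middle C: an oscillation whose
# amplitude starts at zero, swells, and decays back to zero
tw = np.arange(-0.05, 0.05, 1 / fs)
wavelet = np.exp(-(tw / 0.02) ** 2) * np.cos(2 * np.pi * 261.63 * tw)

# "Shift, multiply and sum": convolve the wavelet across the signal
resp = np.abs(np.convolve(sig, wavelet, mode='same'))

# The response is large where the note is playing and near zero elsewhere
assert resp[(t > 0.45) & (t < 0.55)].mean() > 10 * resp[t < 0.25].mean()
```

The peak of the response locates the note in time, which is exactly the resonance behaviour the paragraph describes.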

As wavelets are a mathematical tool they can be used to extract information from many different kinds of data, including - but certainly not limited to - audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will deconstruct data without gaps or overlap so that the deconstruction process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet based compression/decompression algorithms where it is desirable to recover the original information with minimal loss.

A broader and more rigorous definition of a wavelet is that it is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.

In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space (such a frame is also known as a Riesz basis), for the Hilbert space of square integrable functions.

Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation whereas DWTs use a specific subset of scale and translation values or representation grid.


The word wavelet is due to Morlet and Grossmann in the early 1980s. They used the French word ondelette, meaning "small wave". Soon it was transferred to English by translating "onde" into "wave", giving "wavelet".

Wavelet theory[editar | editar código-fonte]

Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filterbanks. In wavelet nomenclature, these filters are called the wavelet and scaling filters. These filterbanks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The wavelets forming a CWT are subject to the uncertainty principle of Fourier analysis and, correspondingly, of sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and frequency response scale to that event. The product of the uncertainties of time and frequency response scale has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle.

Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based.

Continuous wavelet transforms (Continuous Shift & Scale Parameters)[editar | editar código-fonte]

In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the function space L^2(\mathbb{R})). For instance the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components.

The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function \psi \in L^2(\mathbb{R}), the mother wavelet. For the example of the scale one frequency band [1, 2] this function is

\psi(t) = 2\,\operatorname{sinc}(2t) - \operatorname{sinc}(t) = \frac{\sin(2\pi t) - \sin(\pi t)}{\pi t}

with the (normalized) sinc function \operatorname{sinc}(t) = \sin(\pi t)/(\pi t). Other example mother wavelets are:

Meyer
Morlet
Mexican Hat

The subspace of scale a or frequency band [1/a, 2/a] is generated by the functions (sometimes called child wavelets)

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t - b}{a}\right),

where a is positive and defines the scale and b is any real number and defines the shift. The pair (a, b) defines a point in the right halfplane \mathbb{R}_+ \times \mathbb{R}.

The projection of a function x onto the subspace of scale a then has the form

x_a(t) = \int_{\mathbb{R}} WT_\psi\{x\}(a,b)\cdot\psi_{a,b}(t)\,db

with wavelet coefficients

WT_\psi\{x\}(a,b) = \langle x, \psi_{a,b}\rangle = \int_{\mathbb{R}} x(t)\,\overline{\psi_{a,b}(t)}\,dt.

See a list of some Continuous wavelets.

For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal.
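As a sketch of how such a scaleogram might be assembled, the coefficients can be computed by direct numerical integration of the signal against shifted and scaled copies of a mother wavelet. Here the Mexican hat wavelet is used, and the test signal and the scale/shift grids are assumptions chosen for illustration:

```python
import numpy as np

def mexican_hat(t):
    # Mexican hat (Ricker) mother wavelet, up to normalisation
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt_coeff(x, t, a, b):
    """Wavelet coefficient <x, psi_{a,b}> by direct numerical integration."""
    dt = t[1] - t[0]
    psi_ab = mexican_hat((t - b) / a) / np.sqrt(a)
    return np.sum(x * psi_ab) * dt

t = np.linspace(-10, 10, 4001)
x = np.exp(-t**2)                 # a bump centred at t = 0 (assumed test signal)

# Assemble a small scaleogram over a grid of scales a and shifts b
scales = [0.5, 1.0, 2.0, 4.0]
shifts = np.linspace(-5, 5, 21)
scaleogram = np.array([[cwt_coeff(x, t, a, b) for b in shifts] for a in scales])

# The bump sits at t = 0, so every scale row peaks at the b = 0 column
assert all(np.argmax(np.abs(row)) == 10 for row in scaleogram)
```

Production code would use an FFT-based implementation; the direct integral above just mirrors the defining formula.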

Discrete wavelet transforms (Discrete Shift & Scale parameters)[editar | editar código-fonte]

It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper halfplane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters a > 1, b > 0. The corresponding discrete subset of the halfplane consists of all the points (a^m, n\,b\,a^m) with integers m, n \in \mathbb{Z}. The corresponding baby wavelets are now given as

\psi_{m,n}(t) = a^{-m/2}\,\psi\!\left(a^{-m}t - nb\right).

A sufficient condition for the reconstruction of any signal x of finite energy by the formula

x(t) = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \langle x, \psi_{m,n}\rangle \cdot \psi_{m,n}(t)

is that the functions \{\psi_{m,n} : m, n \in \mathbb{Z}\} form a tight frame of L^2(\mathbb{R}).

Multiresolution discrete wavelet transforms[editar | editar código-fonte]

D4 wavelet

In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper halfplane. Still, each coefficient requires the evaluation of an integral. To avoid this numerical complexity, one needs one auxiliary function, the father wavelet \varphi. Further, one has to restrict a to be an integer. A typical choice is a = 2 and b = 1. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet.

From the mother and father wavelets one constructs the subspaces

V_m = \operatorname{span}\left(\varphi_{m,n} : n \in \mathbb{Z}\right), where \varphi_{m,n}(t) = 2^{-m/2}\,\varphi\!\left(2^{-m}t - n\right)

and

W_m = \operatorname{span}\left(\psi_{m,n} : n \in \mathbb{Z}\right), where \psi_{m,n}(t) = 2^{-m/2}\,\psi\!\left(2^{-m}t - n\right).

From these one requires that the sequence

\{0\} \subset \dots \subset V_1 \subset V_0 \subset V_{-1} \subset \dots \subset L^2(\mathbb{R})

forms a multiresolution analysis of L^2(\mathbb{R}) and that the subspaces \dots, W_1, W_0, W_{-1}, \dots are the orthogonal "differences" of the above sequence, that is, W_m is the orthogonal complement of V_m inside the subspace V_{m-1}. In analogy to the sampling theorem one may conclude that the space V_m with sampling distance 2^m more or less covers the frequency baseband from 0 to 2^{-m-1}. As orthogonal complement, W_m roughly covers the band \left[2^{-m-1}, 2^{-m}\right].

From those inclusions and orthogonality relations follows the existence of sequences h = \{h_n\}_{n\in\mathbb{Z}} and g = \{g_n\}_{n\in\mathbb{Z}} that satisfy the identities

h_n = \langle \varphi_{0,0}, \varphi_{-1,n}\rangle and \varphi(t) = \sqrt{2}\sum_{n\in\mathbb{Z}} h_n\,\varphi(2t - n)

and

g_n = \langle \psi_{0,0}, \varphi_{-1,n}\rangle and \psi(t) = \sqrt{2}\sum_{n\in\mathbb{Z}} g_n\,\varphi(2t - n).

The second identity of the first pair is a refinement equation for the father wavelet \varphi. Both pairs of identities form the basis for the algorithm of the fast wavelet transform.
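For the Haar wavelet, the filter sequences are short enough to write the resulting fast wavelet transform explicitly. The sketch below performs one analysis/synthesis level and checks perfect reconstruction; the input vector is an arbitrary example:

```python
import numpy as np

# One level of the fast (Haar) wavelet transform: analysis splits a signal
# into scaling (average) and wavelet (detail) coefficients; synthesis
# recombines them exactly.
def analyze(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass filter h, downsample by 2
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass filter g, downsample by 2
    return a, d

def synthesize(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = analyze(x)

assert np.allclose(synthesize(a, d), x)                         # perfect reconstruction
assert np.allclose(np.sum(a**2) + np.sum(d**2), np.sum(x**2))   # energy preserved (orthogonality)
```

Applying analyze recursively to the coarse coefficients a yields the full multilevel transform in O(N) operations.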

Mother wavelet[editar | editar código-fonte]

For practical applications, and for efficiency reasons, one prefers continuously differentiable functions with compact support as mother (prototype) wavelet (functions). However, to satisfy analytical requirements (in the continuous WT) and in general for theoretical reasons, one chooses the wavelet functions from a subspace of the space L^1(\mathbb{R}) \cap L^2(\mathbb{R}). This is the space of measurable functions that are absolutely and square integrable:

\int_{-\infty}^{\infty} |\psi(t)|\,dt < \infty

and

\int_{-\infty}^{\infty} |\psi(t)|^2\,dt < \infty

Being in this space ensures that one can formulate the conditions of zero mean and square norm one:

\int_{-\infty}^{\infty} \psi(t)\,dt = 0 is the condition for zero mean, and
\int_{-\infty}^{\infty} |\psi(t)|^2\,dt = 1 is the condition for square norm one.
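Both conditions can be verified numerically for a concrete mother wavelet. Here is a sketch for the normalised Mexican hat; the normalisation constant used is the standard one for this wavelet:

```python
import numpy as np

# Normalised Mexican hat wavelet psi(t) = c * (1 - t^2) * exp(-t^2 / 2)
t = np.linspace(-20, 20, 200001)
dt = t[1] - t[0]
c = 2.0 / (np.sqrt(3.0) * np.pi**0.25)
psi = c * (1 - t**2) * np.exp(-t**2 / 2)

assert abs(np.sum(psi) * dt) < 1e-8            # zero mean
assert abs(np.sum(psi**2) * dt - 1.0) < 1e-6   # square norm one
```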

For to be a wavelet for the continuous wavelet transform (see there for exact statement), the mother wavelet must satisfy an admissibility criterion (loosely speaking, a kind of half-differentiability) in order to get a stably invertible transform.

For the discrete wavelet transform, one needs at least the condition that the wavelet series is a representation of the identity in the space L^2(\mathbb{R}). Most constructions of discrete WT make use of the multiresolution analysis, which defines the wavelet by a scaling function. This scaling function is itself a solution to a functional equation.

In most situations it is useful to restrict \psi to be a continuous function with a higher number M of vanishing moments, i.e. for all integer m < M

\int_{-\infty}^{\infty} t^m\,\psi(t)\,dt = 0
Some example mother wavelets are:

Meyer
Morlet
Mexican Hat

The mother wavelet is scaled (or dilated) by a factor of a and translated (or shifted) by a factor of b to give (under Morlet's original formulation):

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t - b}{a}\right)

For the continuous WT, the pair (a, b) varies over the full half-plane \mathbb{R}_+ \times \mathbb{R}; for the discrete WT this pair varies over a discrete subset of it, which is also called the affine group.

These functions are often incorrectly referred to as the basis functions of the (continuous) transform. In fact, as in the continuous Fourier transform, there is no basis in the continuous wavelet transform. Time-frequency interpretation uses a subtly different formulation (after Delprat).

Comparisons with Fourier Transform (Continuous-Time)[editar | editar código-fonte]

The wavelet transform is often compared with the Fourier transform, in which signals are represented as a sum of sinusoids. The main difference is that wavelets are localized in both time and frequency whereas the standard Fourier transform is only localized in frequency. The Short-time Fourier transform (STFT) is also time and frequency localized, but there are issues with the frequency/time resolution trade-off, and wavelets often give a better signal representation using multiresolution analysis.

The discrete wavelet transform is also less computationally complex, taking O(N) time as compared to O(N log N) for the fast Fourier transform. This computational advantage is not inherent to the transform, but reflects the choice of a logarithmic division of frequency, in contrast to the equally spaced frequency divisions of the FFT.

Definition of a wavelet[editar | editar código-fonte]

There are a number of ways of defining a wavelet (or a wavelet family).

Scaling filter[editar | editar código-fonte]

The wavelet is entirely defined by the scaling filter - a low-pass finite impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets, separate decomposition and reconstruction filters are defined.

For analysis, the high-pass filter is calculated as the quadrature mirror filter of the low-pass filter, and the reconstruction filters are the time reverse of the decomposition filters.
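A sketch of this relation for the Daubechies-4 scaling filter, using the quadrature mirror rule g[n] = (-1)^n h[L-1-n]. The sqrt(2) normalisation below is one common convention; the "sum 1" convention mentioned above differs by a factor of sqrt(2):

```python
import numpy as np

# Daubechies-4 (db2) scaling filter in the sqrt(2)-normalised convention
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))

# Quadrature mirror relation: alternate signs and reverse the order
L = len(h)
g = np.array([(-1)**n * h[L - 1 - n] for n in range(L)])

assert np.isclose(h.sum(), np.sqrt(2))   # low-pass: non-zero DC response
assert np.isclose(g.sum(), 0.0)          # high-pass: zero response at DC
assert np.isclose(np.dot(h, g), 0.0)     # the pair is orthogonal, as a QMF pair must be
```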

Daubechies and Symlet wavelets can be defined by the scaling filter.

Scaling function[editar | editar código-fonte]

Wavelets are defined by the wavelet function (i.e. the mother wavelet) and scaling function (also called father wavelet) in the time domain.

The wavelet function is in effect a band-pass filter, and scaling it for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures all the spectrum is covered.

For a wavelet with compact support, \varphi can be considered finite in length and is equivalent to the scaling filter g.

Meyer wavelets can be defined by scaling functions

Wavelet function[editar | editar código-fonte]

The wavelet only has a time domain representation as the wavelet function \psi.

For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few Continuous wavelets.

Applications of Discrete Wavelet Transform[editar | editar código-fonte]

Generally, an approximation to the DWT is used for data compression if the signal is already sampled, and the CWT for signal analysis. Thus, DWT approximation is commonly used in engineering and computer science, and the CWT in scientific research.

Wavelet transforms are now being adopted for a vast number of applications, often replacing the conventional Fourier Transform. Many areas of physics have seen this paradigm shift, including molecular dynamics, ab initio calculations, astrophysics, density-matrix localisation, seismic geophysics, optics, turbulence and quantum mechanics. This change has also occurred in image processing, blood-pressure, heart-rate and ECG analyses, DNA analysis, protein analysis, climatology, general signal processing, speech recognition, computer graphics and multifractal analysis. In computer vision and image processing, the notion of scale-space representation and Gaussian derivative operators is regarded as a canonical multi-scale representation.

One use of wavelet approximation is in data compression. Like some other transforms, wavelet transforms can be used to transform data, then encode the transformed data, resulting in effective compression. For example, JPEG 2000 is an image compression standard that uses biorthogonal wavelets. This means that although the frame is overcomplete, it is a tight frame (see types of Frame of a vector space), and the same frame functions (except for conjugation in the case of complex wavelets) are used for both analysis and synthesis, i.e., in both the forward and inverse transform. For details see wavelet compression.

A related use is that of smoothing/denoising data based on wavelet coefficient thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet coefficients that correspond to undesired frequency components smoothing and/or denoising operations can be performed.
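A minimal sketch of wavelet shrinkage, using a one-level Haar transform and soft thresholding; the test signal, noise level, and threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy piecewise-constant signal (assumed test data for this sketch)
n = 256
clean = np.repeat([0.0, 4.0, -2.0, 1.0], n // 4)
noisy = clean + rng.normal(0, 0.5, n)

def haar_analyze(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_synthesize(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Wavelet shrinkage: soft-threshold the detail coefficients, which for this
# signal carry mostly noise, then transform back
a, d = haar_analyze(noisy)
thr = 0.7
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
denoised = haar_synthesize(a, d_shrunk)

# Shrinkage should bring the signal closer to the clean one
assert np.mean((denoised - clean)**2) < np.mean((noisy - clean)**2)
```

Practical denoisers use several decomposition levels and data-driven thresholds (for example the universal threshold of Donoho and Johnstone), but the mechanism is the one shown.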

Wavelet transforms are also starting to be used for communication applications. Wavelet OFDM is the basic modulation scheme used in HD-PLC (a powerline communications technology developed by Panasonic), and in one of the optional modes included in the IEEE P1901 draft standard. The advantage of Wavelet OFDM over traditional FFT OFDM systems is that Wavelet can achieve deeper notches and that it does not require a Guard Interval (which usually represents significant overhead in FFT OFDM systems)[1].

History[editar | editar código-fonte]

The development of wavelets can be linked to several separate trains of thought, starting with Haar's work in the early 20th century. Notable contributions to wavelet theory can be attributed to Zweig’s discovery of the continuous wavelet transform in 1975 (originally called the cochlear transform and discovered while studying the reaction of the ear to sound)[2], Pierre Goupillaud, Grossmann and Morlet's formulation of what is now known as the CWT (1982), Jan-Olov Strömberg's early work on discrete wavelets (1983), Daubechies' orthogonal wavelets with compact support (1988), Mallat's multiresolution framework (1989), Nathalie Delprat's time-frequency interpretation of the CWT (1991), Newland's Harmonic wavelet transform (1993) and many others since.

Timeline[editar | editar código-fonte]

Wavelet Transforms[editar | editar código-fonte]

There are a large number of wavelet transforms, each suitable for different applications. For a full list see list of wavelet-related transforms, but the common ones are listed below:

Generalized Transforms[editar | editar código-fonte]

There are a number of generalized transforms of which the wavelet transform is a special case. For example, Joseph Segman introduced scale into the Heisenberg group, giving rise to a continuous transform space that is a function of time, scale, and frequency. The CWT is a two-dimensional slice through the resulting 3d time-scale-frequency volume.

Another example of a generalized transform is the chirplet transform in which the CWT is also a two dimensional slice through the chirplet transform.

An important application area for generalized transforms involves systems in which high frequency resolution is crucial. For example, darkfield electron optical transforms intermediate between direct and reciprocal space have been widely used in the harmonic analysis of atom clustering, i.e. in the study of crystals and crystal defects[3]. Now that transmission electron microscopes are capable of providing digital images with picometer-scale information on atomic periodicity in nanostructure of all sorts, the range of pattern recognition[4] and strain[5]/metrology[6] applications for intermediate transforms with high frequency resolution (like brushlets[7] and ridgelets[8]) is growing rapidly.

List of wavelets[editar | editar código-fonte]

Discrete wavelets[editar | editar código-fonte]

Continuous wavelets[editar | editar código-fonte]

Real valued[editar | editar código-fonte]

Complex valued[editar | editar código-fonte]

See also[editar | editar código-fonte]

Notes[editar | editar código-fonte]

  1. Recent Developments in the Standardization of Power Line Communications within the IEEE, (Galli, S. and Logvinov, O - IEEE Communications Magazine, July 2008)
  2. http://scienceworld.wolfram.com/biography/Zweig.html Zweig, George Biography on Scienceworld.wolfram.com
  3. P. Hirsch, A. Howie, R. Nicholson, D. W. Pashley and M. J. Whelan (1965/1977) Electron microscopy of thin crystals (Butterworths, London/Krieger, Malabar FLA) ISBN 0-88275-376-2
  4. P. Fraundorf, J. Wang, E. Mandell and M. Rose (2006) Digital darkfield tableaus, Microscopy and Microanalysis 12:S2, 1010-1011 (cf. arXiv:cond-mat/0403017)
  5. M. J. Hÿtch, E. Snoeck and R. Kilaas (1998) Quantitative measurement of displacement and strain fields from HRTEM micrographs, Ultramicroscopy 74:131-146.
  6. Martin Rose (2006) Spacing measurements of lattice fringes in HRTEM image using digital darkfield decomposition (M.S. Thesis in Physics, U. Missouri - St. Louis)
  7. F. G. Meyer and R. R. Coifman (1997) Applied and Computational Harmonic Analysis 4:147.
  8. A. G. Flesia, H. Hel-Or, A. Averbuch, E. J. Candes, R. R. Coifman and D. L. Donoho (2001) Digital implementation of ridgelet packets (Academic Press, New York).

References[editar | editar código-fonte]

  • Paul S. Addison, The Illustrated Wavelet Transform Handbook, Instituto de Física, 2002, ISBN 0-7503-0692-0
  • Ingrid Daubechies, Ten Lectures on Wavelets, Society for Industrial and Applied Mathematics, 1992, ISBN 0-89871-274-2
  • A. N. Akansu and R. A. Haddad, Multiresolution Signal Decomposition: Transforms, Subbands, Wavelets, Academic Press, 1992, ISBN 0-12-047140-X
  • P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall, 1993, ISBN 0-13-605718-7
  • Mladen Victor Wickerhauser, Adapted Wavelet Analysis From Theory to Software, A K Peters Ltd, 1994, ISBN 1-56881-041-5
  • Gerald Kaiser, A Friendly Guide to Wavelets, Birkhauser, 1994, ISBN 0-8176-3711-7
  • Haar A., Zur Theorie der orthogonalen Funktionensysteme, Mathematische Annalen, 69, pp 331-371, 1910.
  • Ramazan Gençay, Faruk Selçuk and Brandon Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2001, ISBN 0-12-279670-5
  • Donald B. Percival and Andrew T. Walden, Wavelet Methods for Time Series Analysis, Cambridge University Press, 2000, ISBN 0-5216-8508-7
  • Tony F. Chan and Jackie (Jianhong) Shen, Image Processing and Analysis - Variational, PDE, Wavelet, and Stochastic Methods, Society of Applied Mathematics, ISBN 089871589X (2005)
  • Stéphane Mallat, "A wavelet tour of signal processing" 2nd Edition, Academic Press, 1999, ISBN 0-12-466606-x
  • Barbara Burke Hubbard, "The World According to Wavelets: The Story of a Mathematical Technique in the Making", AK Peters Ltd, 1998, ISBN 1568810725, ISBN 978-1568810720

External links[editar | editar código-fonte]


pt:Wavelet