Second Workshop on
Algorithms in Quantum Information Processing
Monday
9:05am
Quantum Algorithms Tutorial
Michele Mosca
Centre for Quantum Computation
Clarendon Laboratory, Parks Road
Oxford OX2 6UD
United Kingdom
e-mail: mmosca@cacr.math.uwaterloo.ca
I will review the main quantum algorithms using new insights obtained over the past few
years. This will include the quantum Fourier transform, Shor's algorithms, the eigenvalue
estimation approach of Kitaev, relations to other algorithms and connections between them.
Quantum amplitude estimation (a generalisation of quantum counting) will also be
discussed.
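As an illustrative sketch of the central primitive (Python with numpy; the register size and period below are arbitrary choices, not taken from the tutorial), the quantum Fourier transform maps a state supported on multiples of a period r to one supported on multiples of N/r, which is the structure that Shor's algorithm and eigenvalue estimation both exploit:

    import numpy as np

    def qft_matrix(N):
        """Quantum Fourier transform matrix F[j, k] = omega^(j*k)/sqrt(N), omega = exp(2*pi*i/N)."""
        omega = np.exp(2j * np.pi / N)
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return omega ** (j * k) / np.sqrt(N)

    N, r = 64, 8                        # register size and a period dividing it (arbitrary example)
    state = np.zeros(N, dtype=complex)
    state[::r] = 1 / np.sqrt(N // r)    # uniform superposition over multiples of r

    probs = np.abs(qft_matrix(N) @ state) ** 2
    print(np.nonzero(probs > 1e-9)[0])  # peaks at multiples of N/r = 8, which sampling then reveals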
10:30am
Resource Relations in
Quantum Information Processing
Charles Bennett
IBM T. J. Watson Research Center
PO Box 218
Yorktown Heights, NY 10598
e-mail: bennetc@watson.ibm.com
Quantum information processing (including classical processing as a subset) utilizes
resources such as entanglement, interaction, communication via noiseless or noisy
channels, and thermodynamic irreversibility to perform desired transformations of
classical information and intact quantum states. I survey the emerging theory of
reducibilities and equivalences among these resources.
11:30am
Correlation and Nonlocality
N. David Mermin
Cornell University
Physics Department
Clark Hall, Ithaca, NY 14853-2501
e-mail: ndm4@cornell.edu
The quantum correlations between spatially separated systems that were formerly in close
contact but no longer interact can have a mysterious quality, captured by the phrase
"quantum nonlocality". Attitudes toward this "nonlocality" vary
from the view that nothing is nonlocal except the misguided attempt to explain the
correlations in terms of nonexistent "hidden variables", to the view that
quantum mechanics has proved that what you decide to do over here can instantaneously
alter the physical character of something over there. Such conceptually mysterious
correlations have assumed a central role in quantum information theory. While the
philosophical puzzles do not hinder the gedanken-practical application of the
correlations, quantum information scientists might bear the puzzles in mind, on the chance
that some of the exquisitely clever things they manage to do with entangled states might
illuminate them.
2:00pm
Bell Inequalities Revisited
Asher Peres
Technion - Israel Institute of Technology
Department of Physics
Technion City
32 000 Haifa, Israel
e-mail: peres@photon.technion.ac.il
Bell inequalities are upper bounds on the correlations of distant events. They involve
only classical (Boolean) logic and the principle of local causes, namely: events that
occur in a given spacetime region are independent of external parameters that may be
controlled, at the same moment, by agents located in distant spacetime regions. Bell's
momentous discovery was that these bounds might be exceeded by the correlations predicted
by quantum mechanics. This review will include the following topics:
- Derivation of generalized inequalities from Farkas's Lemma in convex analysis.
- Sequential tests with adaptive strategies.
- Collective tests of quantum nonlocality.
- Quantum Bell inequalities, which are obeyed by separable systems, and violated by
entangled systems.
- Interesting open questions under current investigation.
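As a minimal worked illustration of the opening statement above (Python with numpy; the singlet state and measurement settings are the textbook CHSH choices, not material from the talk), the quantum correlations exceed the classical CHSH bound of 2, reaching 2\sqrt{2}:

    import numpy as np

    # Singlet state |psi-> = (|01> - |10>)/sqrt(2) and +/-1-valued spin observables
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def E(A, B):
        """Quantum correlation <psi| A tensor B |psi>."""
        return np.real(singlet.conj() @ np.kron(A, B) @ singlet)

    # Textbook CHSH settings: Alice measures Z or X, Bob measures rotated combinations
    A1, A2 = Z, X
    B1 = -(Z + X) / np.sqrt(2)
    B2 = (X - Z) / np.sqrt(2)

    S = E(A1, B1) + E(A1, B2) + E(A2, B1) - E(A2, B2)
    print(S)   # about 2.828 = 2*sqrt(2), above the local-hidden-variable bound of 2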
2:45pm
Quantum Nonlocality
without Entanglement
David DiVincenzo
IBM T. J. Watson Research Center
PO Box 218
Yorktown Heights, NY 10598
e-mail: divince@watson.ibm.com
We have discovered various ensembles of orthogonal bipartite (and multipartite) quantum
states which show a variety of manifestations of quantum nonlocality. When their parts are
shared between several parties, the identity of these states cannot be determined
reliably, even if these parties are allowed any local quantum operations and any amount of
classical communication. We discuss the mutual information attainable in these
measurements as well as other measures of this nonlocality, such as the entropy produced
when the parties are instructed to prepare one of the ensemble of states locally, and the
amount of "advice" (classical data or quantum entanglement) needed to make the
measurements doable. The difference between remote dephasing of states and remote
measurement is clarified.
Tuesday
9:00am
Decoherence -- An Algorithmic
Point of View
Wojciech Zurek
e-mail: whz@lanl.gov
It is by now widely accepted that decoherence can single out a preferred pointer basis
through the process known as environment-induced superselection (or einselection), and
that its undoing is difficult as the monitoring by the environment is (effectively, if not
in principle) irreversible. I shall analyse the nature of decoherence -- in particular,
the reasons why it is so difficult to undo -- from an algorithmic point of view. The
consequences of decoherence shall be investigated from the same vantage point, which will
lead in particular to an operational definition of probabilities.
Some of the results to be discussed are described in: W. H. Zurek, "Decoherence, Einselection
and the Existential Interpretation (The Rough Guide)", Phil. Trans. Roy. Soc. London A,
vol. 356, pp. 1793-1821 (1998).
9:45am
Bound entanglement
Pawel Horodecki
Technical University of Gdansk
Faculty of Applied Physics and Mathematics
ul. Narutowicza 11/12
Gdansk 80-952
Poland
e-mail: pawel@mifgate.pg.gda.pl
A mixed state is called entangled (non-separable) if it is not a mixture of product
states. It is called bound entangled if it is entangled but cannot be distilled, i.e.,
brought to the maximally entangled form by means of local quantum operations and classical
communication. A characterization of bound entangled states is given. It is proved that
bound entangled states do not allow better than classical fidelity of teleportation. It is
also shown that the set of all nondistillable states is compact. Two methods of seeking
bound entangled states are presented: (i) one involving the theory of positive maps, (ii)
one based on investigation of the range of the density matrix. New bound entangled
states obtained with the first method are presented. It is also shown that bound
entanglement can be useful for quantum communication within a process analogous to the
phenomenon of chemical activation. This supports the idea of an entanglement-energy analogy.
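As a small companion to method (i) (Python with numpy; a standard textbook check, not code from the talk), transposition is the simplest positive but not completely positive map, and the resulting partial-transpose (Peres-Horodecki) test already separates a product mixture from a maximally entangled state; bound entangled states are precisely the entangled states that nevertheless pass it:

    import numpy as np

    def partial_transpose(rho, dA, dB):
        """Transpose the second (B) factor of a (dA*dB x dA*dB) density matrix."""
        r = rho.reshape(dA, dB, dA, dB)                       # indices a, b, a', b'
        return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

    def is_ppt(rho, dA, dB, tol=1e-12):
        """Peres-Horodecki test: is the partial transpose positive semidefinite?"""
        return bool(np.min(np.linalg.eigvalsh(partial_transpose(rho, dA, dB))) > -tol)

    # Separable two-qubit state: an equal mixture of |00><00| and |11><11|
    sep = np.diag([0.5, 0.0, 0.0, 0.5]).astype(complex)

    # Maximally entangled two-qubit state (|00> + |11>)/sqrt(2)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    ent = np.outer(bell, bell.conj())

    print(is_ppt(sep, 2, 2), is_ppt(ent, 2, 2))   # True False
    # Entangled states that remain PPT exist only beyond 2x2 and 2x3, and they are bound entangled.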
11:00am
Unextendible Product Bases and
Bound Entanglement
John Smolin
IBM T. J. Watson Research Center
PO Box 218
Yorktown Heights, NY 10598
e-mail: smolin@watson.ibm.com
We obtain easy examples of "bound" entanglement--entangled mixed states from
which no pure entanglement is distillable--using the notion of an unextendible product
basis, of which we give examples. We exhibit a bound-entangled state of a tripartite 2x2x2
system that has no bipartite entanglement, in the sense that all three corresponding 2x4
bipartite states are unentangled. We show that if a set of orthogonal product states is
distinguishable by local von Neumann measurements and classical communication, then this
set can be completed to a full basis. We give a set of states on 3x4 that is
distinguishable by generalized measurements and can be completed in an extended Hilbert
space.
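For concreteness, here is a sketch of the construction on the well-known 3x3 "Tiles" unextendible product basis (Python with numpy; this example comes from the related literature on UPBs and is not necessarily one used in the talk): the normalized projector onto the complement of the UPB is a state that is entangled, because no product state fits in that complement, yet has positive partial transpose and is therefore bound:

    import numpy as np

    e0, e1, e2 = np.eye(3, dtype=complex)

    # The 3x3 "Tiles" unextendible product basis: five mutually orthogonal product states
    # whose orthogonal complement contains no product state at all.
    tiles = [
        np.kron(e0, (e0 - e1) / np.sqrt(2)),
        np.kron(e2, (e1 - e2) / np.sqrt(2)),
        np.kron((e0 - e1) / np.sqrt(2), e2),
        np.kron((e1 - e2) / np.sqrt(2), e0),
        np.kron((e0 + e1 + e2) / np.sqrt(3), (e0 + e1 + e2) / np.sqrt(3)),
    ]

    P = sum(np.outer(v, v.conj()) for v in tiles)
    rho = (np.eye(9) - P) / 4     # normalized projector onto the complement: bound entangled

    def partial_transpose(rho, dA, dB):
        return rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

    gram = np.array([[abs(u.conj() @ v) for v in tiles] for u in tiles])
    print(np.allclose(gram, np.eye(5)))                          # the five states are orthonormal
    print(np.isclose(np.trace(rho).real, 1.0))                   # rho is a valid state
    print(np.min(np.linalg.eigvalsh(partial_transpose(rho, 3, 3))) >= -1e-12)  # PPT, so nondistillable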
2:00pm
Limits on the Computational
Power of Noisy Quantum Computers
Dorit Aharonov
Institute for Advanced Study
47 Einstein Drive, Princeton, NJ 08540
e-mail: doria@math.ias.edu
I will describe a classical randomized simulation of noisy quantum Turing machines and
noisy quantum circuits. It is shown that for a quantum Turing machine with any non-zero
amount of noise, the simulation is efficient. Hence noisy quantum Turing machines have no
computational advantage over classical computers.
The case of quantum circuits is qualitatively different. I will show that the classical
simulation is efficient given that the noise is larger than a certain threshold.
Interestingly, the cost of the simulation exhibits an abrupt transition from exponential
to polynomial at a critical amount of noise. I will discuss some physical and
computational implications of this abrupt transition.
The proofs involve percolation and branching processes techniques, and are presented in
the density matrix framework.
Most of the results are joint work with Michael Ben-Or, and were presented at FOCS '96 in
the paper "Polynomial Simulations of Decohered Quantum Computers". An on-line
version can be found in: http://xxx.lanl.gov/find/quant-ph/1/Dorit+Aharonov/0/1/0/all/1/1
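As a toy illustration only (Python with numpy; this is not the percolation argument of the paper), repeated depolarizing noise drives a qubit toward the maximally mixed state at a fixed geometric rate, which is the basic intuition for why heavily decohered computations lose their quantum advantage:

    import numpy as np

    def depolarize(rho, p):
        """Depolarizing channel: with probability p, replace the state by the maximally mixed state."""
        d = rho.shape[0]
        return (1 - p) * rho + p * np.eye(d) / d

    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure state |0><0|
    p = 0.2                                           # noise rate per step (arbitrary)
    for step in range(1, 11):
        rho = depolarize(rho, p)
        # the distance to the maximally mixed state shrinks by a factor (1 - p) each step
        print(step, np.linalg.norm(rho - np.eye(2) / 2))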
2:45pm
On the problem of
equilibration and the computation of correlation functions on a quantum computer
Barbara Terhal
IBM T. J. Watson Research Center
PO Box 218
Yorktown Heights, NY 10598
e-mail: terhal@watson.ibm.com
I address the question of how a quantum computer can be used to simulate experiments on
quantum systems in thermal equilibrium. I present two approaches for the preparation of
the equilibrium state on a quantum computer. For both approaches, one can show that the
output state of the algorithm, after a long enough time, is the desired equilibrium state. I
discuss these approaches with the help of a numerical analysis for small systems. Finally,
I show how equilibrium time-correlation functions can be efficiently estimated on a
quantum computer, given a preparation of the equilibrium state.
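To fix notation, here is a brute-force (exponential-cost) classical computation of the two objects in question for a tiny, arbitrarily chosen two-qubit Hamiltonian (Python with numpy; not the algorithms of the talk, only the quantities they are meant to estimate): the Gibbs state exp(-beta H)/Z and an equilibrium time-correlation function.

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    # A small two-qubit Hamiltonian, chosen arbitrarily for illustration
    H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

    def expm_hermitian(M, s):
        """exp(s * M) for Hermitian M, via its eigendecomposition."""
        w, V = np.linalg.eigh(M)
        return V @ np.diag(np.exp(s * w)) @ V.conj().T

    beta = 1.0
    rho_eq = expm_hermitian(H, -beta)
    rho_eq /= np.trace(rho_eq)                   # Gibbs state exp(-beta H) / Z

    def time_correlation(A, B, t):
        """Equilibrium correlation <A(t) B> = Tr[rho_eq U^dag A U B] with U = exp(-i H t)."""
        U = expm_hermitian(H, -1j * t)
        return np.trace(rho_eq @ U.conj().T @ A @ U @ B)

    A, B = np.kron(Z, I2), np.kron(I2, Z)
    print(time_correlation(A, B, 0.0).real)      # static <Z_1 Z_2> at inverse temperature beta
    print(time_correlation(A, B, 1.5))           # dynamic correlation, complex in general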
Wednesday
9:00am
Tutorial: Quantum error
correction
Peter Shor
AT&T Labs
Room C237
180 Park Ave.
Florham Park, NJ 07932
e-mail: shor@research.att.com
I will explain how quantum error correcting codes
work, using a few simple examples, and introduce some of the general theory of quantum
error correction.
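The simplest standard example is the three-qubit bit-flip code; the sketch below (Python with numpy, not necessarily one of the examples used in the tutorial) encodes a qubit, applies a bit-flip error, reads the two parity-check (syndrome) values, and undoes the error without learning the encoded amplitudes:

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def on(op, qubit, n=3):
        """Embed a single-qubit operator on one qubit of an n-qubit register."""
        out = np.array([[1]], dtype=complex)
        for q in range(n):
            out = np.kron(out, op if q == qubit else I2)
        return out

    # Encode a|0> + b|1>  ->  a|000> + b|111>  (three-qubit bit-flip code)
    a, b = 0.6, 0.8
    encoded = np.zeros(8, dtype=complex)
    encoded[0b000], encoded[0b111] = a, b

    corrupted = on(X, 1) @ encoded    # a bit-flip error on the middle qubit

    # Syndrome: eigenvalues of the parity checks Z0 Z1 and Z1 Z2 (each +1 or -1 here)
    s1 = int(round(np.real(corrupted.conj() @ on(Z, 0) @ on(Z, 1) @ corrupted)))
    s2 = int(round(np.real(corrupted.conj() @ on(Z, 1) @ on(Z, 2) @ corrupted)))
    error_qubit = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]

    recovered = on(X, error_qubit) @ corrupted if error_qubit is not None else corrupted
    print(np.allclose(recovered, encoded))   # True: the error is located and undone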
10:30am
Quantum Error Correction
Andrew Steane
Oxford University
Atomic and Laser Physics
Clarendon Laboratory, Parks Road
Oxford
OX2 8AZ England
e-mail: a.steane@physics.ox.ac.uk
I will present some of the general theory of quantum error correction, while discussing
recent work in the area of fault tolerant computing. There is a fundamental discrepancy
between the conditions for efficient memory storage and those for simple operations to
evolve a quantum computation. This causes a trade-off between the required scale-up in
computer size and the achieved noise tolerance. This trade-off will be shown to be largely
avoidable, by adapting the recovery operation so that it simultaneously corrects errors
and evolves the computation by performing a useful measurement. Combining this with recent
work on finding good quantum error correcting codes, and some improvements in the way
error information is extracted, it is found that, under the standard model of ubiquitous
stochastic, uncorrelated noise, a quantum computer need be only an order of magnitude
larger than the logical machine contained within it in order to be reliable.
11:15am
Universal Gates and
Fault-Tolerant Quantum Computing
Vwani P. Roychowdhury
UCLA
Electrical Engineering Department
Los Angeles, CA 90095
e-mail: vwani@ee.ucla.edu
In this talk, we will present a set of recent results on universal gates and
fault-tolerant (FT) quantum computing. Several sets of gates have been proved to be
universal for quantum computing; however, sets of gates that are universal for
fault-tolerant quantum computing have received only limited attention. The only proposed
scheme for FT computation is based on the fault-tolerant realization of the following set
of gates: the 3-qubit Toffoli gate, and the 1-qubit Hadamard and \sigma_z^{1/2} gates. A
proof of universality of this set of gates, however, is not available in the literature.
We present a new set (comprising the 2-qubit controlled-NOT gate, and the 1-qubit Hadamard
and \sigma_z^{1/4} gates) and prove it to be universal for FT computation. This is the
first known FT universal set. Furthermore, it involves at most 2-qubit operations and can
potentially enable FT computation in recently proposed schemes for realizing quantum
computers that rely only on near-neighbor interactions. We shall also outline our recent
results on the issue of whether measurements can be delayed in FT quantum computers. All
the existing schemes use measurement during the process of fault tolerant computation, and
methods for delaying the measurements have been ignored. However, the bulk computation
model, which currently provides the best implementation for quantum algorithms, cannot
operate unless measurements are delayed until the end. We present the first-known scheme
for delaying measurements in a fault-tolerant manner. This is joint work with P. Oscar
Boykin, Tal Mor, Matthew Pulver, and Farrokh Vatan.
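For reference, the proposed universal set written out as matrices (Python with numpy; only the gate definitions, not the universality or fault-tolerance arguments of the talk):

    import numpy as np

    # The set discussed in the abstract: controlled-NOT, Hadamard, and sigma_z^{1/4} (the "T" gate)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)
    T = np.diag([1, np.exp(1j * np.pi / 4)])          # sigma_z^{1/4}

    for U in (CNOT, H, T):                            # sanity check: all three are unitary
        print(np.allclose(U.conj().T @ U, np.eye(U.shape[0])))
    print(np.allclose(T @ T, np.diag([1, 1j])))       # T^2 = sigma_z^{1/2}, the phase gate of the earlier set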
2:00pm
Quantum Communication
Complexity
Richard Cleve
University of Calgary
Department of Computer Science
2500 University Drive, N.W.
Calgary, Alberta T2N 1N4
Canada
e-mail: cleve@cpsc.ucalgary.ca
Recent advances in communication complexity in a setting where quantum information is
available are reviewed. Holevo's Theorem implies that n qubits cannot convey more
classical information than n bits. Nevertheless, there are information processing
tasks that require communication and for which using qubits instead of bits results in
substantial savings. For example, a certain two-party scheduling problem can be solved
with significantly less qubit communication than bit communication (this is joint work
with Harry Buhrman and Avi Wigderson). We also investigate the communication complexity
implicit in entangled quantum states, where a bipartite measurement is given from a set of
possibilities. Bell's Theorem and its variants essentially imply that the correlations
that arise between the two components cannot be simulated by using only classical
correlations (a.k.a. "local hidden variables") in place of true entanglement.
This is under the assumption that no communication can occur between the components. If
sufficient communication is permitted then the correlations can be simulated by classical
means. We quantify the amount of communication necessary and sufficient for this
simulation (this is joint work with Gilles Brassard and Alain Tapp).
2:45pm
Pseudo-Telepathy: Power
and Limitation of Entanglement
Alain Tapp
Université de Montréal
D.I.R.O
C.P. 6128 Succ. Centre-ville
Montréal, Québec H3C 3J7
Canada
e-mail: tappa@iro.umontreal.ca
Although Communication Complexity is not a recent topic in computer science, its relation
to quantum information theory is much more recent. Following the result of R. Cleve and H.
Buhrman, we now know that entanglement can significantly reduce the communication required
for a set of players to compute some specific function of their private inputs. The most
striking result is certainly the one of H. Buhrman, R. Cleve and A. Wigderson, which
showed that the amount of reduction in communication can be exponential. In this talk I
will show a specific problem involving two players that can be solved with NO
communication if the players share n EPR pairs, but would require
2^{\Omega(n)} bits of classical communication without it. This form of
"Pseudo-Telepathy" can also be used to obtain a lower bound on the amount of
communication required to simulate n EPR pairs: how much communication need we
allow a local model of the world to perfectly simulate entanglement? This work has been
done in collaboration with G. Brassard and R. Cleve. Clearly, the fact that entanglement
can be substituted for communication is intriguing, especially since it is known that,
strictly speaking, entanglement cannot be used to send messages or even to compress them.
Why does it reduce communication complexity? In fact entanglement is not always useful to
solve communication problems. I will also show that for the specific two-player problem of
computing the inner product of their private inputs, entanglement is of no help. This
work has been done in collaboration with R. Cleve, W. van Dam and M. Nielsen.
4:00pm
The Communication Complexity
of Sampling
Amnon Ta-Shma
International Computer Science Institute
1947 Center St.
Berkeley, CA 94704
e-mail: amnon@icsi.berkeley.edu
Alice and Bob want to play cards over the phone. To start the game each of them should
choose a set of ten random cards, and of course the two sets have to be disjoint. How much
communication between Alice and Bob is needed to achieve this task? In the talk I will
define the sampling communication complexity model, which generalizes the above example.
We will then study this problem in the classical and quantum communication complexity
models. We will discover:
- An exponential gap between the quantum and classical sampling complexity. This is the
first provable exponential gap known between quantum and classical computations.
- A new interpretation for the Log-Rank Conjecture which is a long standing open problem
in classical communication complexity (details in the talk).
- A tight characterization of the complexity of approximating a superposition, which is a
central question in quantum information theory.
Joint work with Andris Ambainis, Leonard Schulman, Umesh Vazirani and Avi Wigderson.
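As a baseline for the card example above (Python; an obvious classical protocol of my own choosing, not a result from the talk), Alice can simply transmit her hand, which costs 10 * ceil(log2 52) = 60 bits and lets Bob sample from the remaining cards:

    import math
    import random

    DECK, HAND = 52, 10

    def naive_disjoint_sampling(seed=0):
        """Trivial classical protocol: Alice sends her whole hand, Bob samples from what is left."""
        rng = random.Random(seed)
        alice = rng.sample(range(DECK), HAND)              # Alice's ten random cards
        bits_sent = HAND * math.ceil(math.log2(DECK))      # she transmits them explicitly
        remaining = [c for c in range(DECK) if c not in alice]
        bob = rng.sample(remaining, HAND)                  # Bob's ten cards, disjoint by construction
        return alice, bob, bits_sent

    alice, bob, bits = naive_disjoint_sampling()
    print(set(alice).isdisjoint(bob), bits)                # True 60
    # The question of the talk is how far below this trivial bound classical and
    # quantum protocols can go.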
Thursday
9:00am
Quantum NP
Alexei Kitaev
California Institute of Technology
Department of Physics
Mail code 12-33
Pasadena, CA 91125
e-mail: kitaev@cco.caltech.edu
A computational problem is said to be in BQNP if it can be defined in terms of a quantum
witness verifiable on a quantum computer in polynomial time. (The abbreviation stands for
"bounded error probability, quantum nondeterministic polynomial"). The following
problem is shown to be BQNP-complete: decide whether a "local Hamiltonian" (a
sum of several Hermitian operators, each involving a constant number of qubits) has an
eigenvalue smaller than a, or all the eigenvalues are larger than b,
where 1/(b-a) is polynomial.
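A toy instance makes the promise problem concrete (Python with numpy; the three-qubit Hamiltonian and thresholds are arbitrary illustrations, and the brute-force diagonalization below is of course exponential in general):

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # A "local Hamiltonian" on 3 qubits: a sum of Hermitian terms, each acting on at most 2 qubits
    H = (np.kron(np.kron(Z, Z), I2)            # term on qubits 0 and 1
         + np.kron(I2, np.kron(Z, Z))          # term on qubits 1 and 2
         + 0.7 * np.kron(np.kron(X, I2), I2))  # term on qubit 0

    a, b = -2.0, -1.5                          # promise thresholds, with 1/(b - a) polynomial
    ground_energy = float(np.min(np.linalg.eigvalsh(H)))
    print(ground_energy)
    print("YES instance" if ground_energy <= a else
          "NO instance" if ground_energy >= b else "outside the promise")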
9:45am
One complexity theorist's
view of quantum computing
Lance Fortnow
University of Chicago
Department of Computer Science
1100 E. 58th St.
Chicago, IL 60637
email: fortnow@cs.uchicago.edu
One can view the class BQP of problems efficiently computable on quantum computers in one
of two ways:
- A class of problems captured by a new paradigm of computation built on deep principles
of quantum mechanics.
- Yet another complexity class.
As a complexity theorist I wish to explore the latter view. BQP is usually described in
a mysterious way using messy definitions with awful notation. In actuality, BQP makes for
a rather nice complexity class:
- BQP is robust--small changes to the definition do not affect the set of languages in the
class.
- BQP contains interesting problems, such as factoring, not known to be contained in
smaller classes.
- BQP does have rather simple characterizations that are understandable to computer
scientists with no knowledge of the underlying physics.
- BQP has interesting relationships with other well-studied complexity classes.
We will discuss these issues and talk about why understanding them leads to a much
clearer picture of the power of quantum computation. We will discuss other quantum related
classes and mention several interesting open questions.
11:00am
Amplitude Amplification
Peter Høyer
BRICS
Department of Computer Science, University of Aarhus
Ny Munkegade, Bldg. 540
Aarhus C 8000
Denmark
e-mail: u2pi@imada.ou.dk
The success probability of a classical randomized algorithm can be boosted by repetition:
If the algorithm succeeds with probability a, then by repeating the algorithm k
times, we can increase the probability of success to roughly ka, assuming ka
\ll 1. Intuitively, we can think of this basic strategy as each additional run of the
given algorithm boosting the probability of success by an additive amount of roughly a.
On a quantum computer, we can of course also apply this classical method to boosting the
probability of success. In each of the k runs, we apply the algorithm on the
given classical input to create a superposition which we then measure. Our probability of
measuring the correct answer is a. In other words, the amplitude by which we
measure the correct answer is of norm \sqrt{a}. Now, suppose that in each additional run
of the given algorithm we could add the amplitudes, so that after just \sqrt{k} repetitions,
our amplitude of success would be \sqrt{ka}. Then our probability of success would still be ka
as on the classical computer, but our total work would be much less. Unfortunately, such an
improved method cannot exist because of the unitary restriction, but fortunately, a
slightly modified version of it does. I call this amplitude amplification. In this talk, I
will describe the concept of amplitude amplification, when it applies, and when it does
not. I will give several examples, many of which go beyond boosting success
probabilities, and many of which have no classical analogues.
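A minimal simulation of the modified method on a toy search problem (Python with numpy; one marked item out of N = 256, my own illustration rather than an example from the talk) shows the success amplitude growing by roughly \sqrt{a} per iteration, so that about (\pi/4)\sqrt{N} iterations suffice:

    import numpy as np

    N, marked = 256, 37                       # toy search space with a single marked item
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition: success probability a = 1/N

    def amplify(state):
        """One amplitude-amplification (Grover) iteration: phase-flip the good amplitude, then invert about the mean."""
        state = state.copy()
        state[marked] *= -1
        return 2 * state.mean() - state

    for k in range(1, 13):
        state = amplify(state)
        print(k, state[marked] ** 2)          # success probability after k iterations
    # After about (pi/4) * sqrt(N) ~ 12 iterations the probability is close to 1,
    # versus on the order of N repetitions for classical boosting.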
2:00pm
Limitations of Quantum
Computing: Lower Bounds via Polynomials
Harry Buhrman
CWI
INS4
Kruislaan 413, Amsterdam 1098
The Netherlands
email: Harry.Buhrman@cwi.nl
Most algorithms in Quantum Computation are developed in the black box setting. This is the
setting where we are interested in determining the property of some function f :
{0,1}^n \mapsto {0,1}. The algorithms are geared to determine this property with
as few applications---black-box calls---of f as possible. For example,
Grover's algorithm determines whether there exists an x \in {0,1}^n
such that f(x) = 1 using only about \sqrt{2^n} calls of f. We will
discuss the limitations of this black-box approach. As a tool we will show how to
translate any quantum algorithm into a multivariate polynomial. It now follows that a
lower bound on the degree of the polynomial that corresponds to the property of f is
a lower bound on the number of applications of f that any quantum algorithm needs to
make to determine this property. We will review some of the recent results obtained using
this method. In particular we will show that for any property the advantage that a quantum
algorithm can have over a classical deterministic algorithm is at most a polynomial.
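A small sketch of the tool itself (Python with numpy; the quantum part of the argument is the subject of the talk and is not reproduced here): every Boolean property has a unique multilinear polynomial representing it, computed below by interpolation for OR on three bits, and the degree of that polynomial (or of a polynomial approximating it) is what lower-bounds the number of black-box calls.

    import numpy as np
    from itertools import combinations, product

    def multilinear_coeffs(f, n):
        """Coefficients c_S of the unique multilinear polynomial agreeing with f on {0,1}^n."""
        subsets = [s for k in range(n + 1) for s in combinations(range(n), k)]
        inputs = list(product([0, 1], repeat=n))
        M = np.array([[np.prod([x[i] for i in S]) if S else 1.0 for S in subsets]
                      for x in inputs], dtype=float)
        coeffs = np.linalg.solve(M, np.array([f(x) for x in inputs], dtype=float))
        return {S: round(c) for S, c in zip(subsets, coeffs) if abs(c) > 1e-9}

    def OR(x):
        return int(any(x))

    print(multilinear_coeffs(OR, 3))
    # OR(x1,x2,x3) = x1 + x2 + x3 - x1x2 - x1x3 - x2x3 + x1x2x3, so its degree is n = 3;
    # a quantum algorithm computing a property exactly needs at least deg/2 black-box calls,
    # and a bounded-error one needs at least half the degree of an approximating polynomial.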
2:45pm
A Simple Example of
Definitions of Truth, Validity, Consistency, and Completeness in Quantum Mechanics
Paul Benioff
Argonne National Laboratory
Physics Division
Argonne, IL 60439
e-mail: BENIOFF@anph09.phy.anl.gov
Besides their use for efficient computation, quantum computers are a base for studying
quantum systems that create valid physical theories using mathematics and physics. An
essential part of the validation process for quantum mechanics is the development of a
coherent theory of mathematics and quantum mechanics together. Such a theory should
combine mathematical logical concepts with quantum mechanics. That this might be possible
is shown here by defining truth, validity, consistency, and completeness for a quantum
mechanical version of a simple classical expression enumeration machine described by
Smullyan. It is seen that for an interpretation based on a Feynman path sum over
expression path states, truth, consistency, and completeness have different properties
than for the classical system. For instance the truth of a sentence S is defined only on
the paths containing S. It is undefined elsewhere. Also S and its negation can both be
true provided they appear on separate paths. This satisfies the definition of consistency.
It is seen that validity and completeness connect the system dynamics to the truth of the
sentences. It is proved that validity implies consistency. The requirements of validity
and maximal completeness strongly restrict the possible dynamics of the system. Aspects of
the existence of a valid, maximally complete dynamics are discussed. It is shown that
there exists a dynamics describing the evolution of a quantum computer that is also valid
and complete for the set of sentences considered here.
Friday
9:00am
Enhancing Classical
Cryptography with Quantum Communication
Louis Salvail
Aarhus University
Dept. of Computer Science
Ny Munkegade
8000 Aarhus C
Denmark
e-mail: salvail@daimi.daimi.aau.dk
An important problem in classical cryptography consists in finding the weakest assumption
for the implementation of some fundamental primitives. One such primitive is called
Zero-Knowledge Arguments, which allow a polynomial-time prover to convince a
polynomial-time verifier of the validity of some statement without revealing any
additional information. Naor, Ostrovsky, Venkatesan and Yung have shown that
Zero-Knowledge Arguments for NP can be based upon any one-way permutation. The result
follows from the construction of an unconditionally concealing bit commitment from any
one-way permutation. The bit commitment scheme is interactive (requires n-1
rounds where n is a security parameter) and works only if the one-way function is
almost k-regular. In addition, if an adversary succeeds in opening the
bit of his choice with probability S(n), then the adversary can invert
the function with success probability I(n) in O(S(n)/n^8).
This kind of reduction is called polynomial preserving. It is unknown if a similar
construction exists requiring less interaction or a weaker assumption on the structure of
the one-way function. It is also not known how to diminish the gap between S(n)
and I(n). In this talk, we look at what happens if one-way functions are
replaced by quantum one-way functions and communication between the players can take place
over a quantum channel. As shown by Mayers and independently by Lo and Chau,
an unconditionally concealing quantum bit commitment scheme is impossible. Any such quantum
bit commitment can be cheated by a sender applying a unitary transform defined from the
protocol specifications. We show that there exists a scheme that can be cheated by the
sender only if he can invert a quantum one-way permutation. The protocol is
non-interactive, meaning that it requires only one message during both the committing and
the revealing phases. The reduction of the quantum algorithm for inverting the one-way
permutation to an adversary of the bit commitment scheme is linear preserving whereas it
is polynomial preserving in the classical case. We also argue that a similar construction
can tolerate one-way functions that are not known to provide bit commitment using the
interactive hashing technique. Our results suggest that perhaps even in a computational
setting, quantum communication allows cryptography to be based on weaker assumptions.
9:45am
The Security of Quantum Bit
Commitment Schemes
Gilles Brassard
Université de Montréal
D.I.R.O.
C.P. 6128, Succ Centre Ville
Montréal, Québec H3C 3J7
Canada
e-mail: brassard@iro.umontreal.ca
Can quantum mechanics be harnessed to provide unconditionally secure bit commitment
schemes and other cryptographic primitives beyond key distribution? This was believed
possible for several years until Mayers came along with his proof that the
"provably'' secure BCJL bit commitment scheme was fatally flawed. (This attack was
found independently by Lo and Chau.) This discovery prompted several attempts at fixing
the protocol, but each time successful ad hoc attacks soon followed. This
cat-and-mouse game went on until Mayers proved that all these attempts had been in vain:
unconditionally secure quantum bit commitment schemes are impossible. Despite all odds,
various kinds of ideas continued to be proposed by some of us as well as others in the
hope they would not fall prey to Mayers' result. Perhaps the most interesting approach
consisted in using various types of short-lived classical bit commitment schemes
to force the cheater to perform measurements at specified points in the execution of the
protocol, which indeed would fix the problem. Alas, this strategy was doomed as
measurements can always be postponed until cheating becomes possible. Despite their
failure, these attempts contributed to enhancing our understanding of what is going on with
quantum bit commitment schemes and other quantum cryptographic protocols. The purpose of
this talk is to review Mayers' result, describe some of the most promising suggested ways
to bypass it, and show why those approaches are vulnerable. Assume, for example, that you
are given an unknown quantum bit |\Psi> = \alpha|0> + \beta|1> and you are
expected to measure it either in the computational basis, |0> versus |1>, or the
diagonal basis, (|0> + |1>)/\sqrt{2} versus (|0> - |1>)/\sqrt{2}. You could
cheat the protocol if only you could postpone the choice of basis (and therefore the
measurement) until after subsequent classical information becomes available. To prevent
this, you are asked to use a classical (perhaps multi-party) commitment scheme to commit
to your choice of basis (computational or diagonal) as well as to the result of your
measurement. Then, either you are immediately challenged to open those commitments to
"prove'' that you had measured, in which case this quantum bit is not used any
further, or you are allowed to keep secret your choice of basis and measurement result for
later use. The hope was that this approach could turn a classical bit commitment scheme
that is reasonably secure for a very short time into a quantum bit commitment scheme that
is permanently secure. Unfortunately, it turns out that you can always postpone measuring
the quantum bit and offer fake commitments even if you are unable to break the underlying
commitment scheme in the time that elapses between the commitment and the challenge: if a
challenge comes, you can open your "commitments'' in a way consistent with what you
would have committed to had you measured the quantum bit, and if the challenge does not
come, you can keep the quantum bit intact for later measurement in the basis of your
choice.
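A small numerical companion to the example above (Python with numpy; arbitrary amplitudes, nothing beyond what the text already states): the outcome probabilities of the unknown qubit in the two bases, which is all that the cheater postpones choosing between.

    import numpy as np

    alpha, beta = 0.6, 0.8j                       # some unknown qubit alpha|0> + beta|1>
    psi = np.array([alpha, beta], dtype=complex)
    psi /= np.linalg.norm(psi)

    computational = [np.array([1, 0], dtype=complex),           # |0> and |1>
                     np.array([0, 1], dtype=complex)]
    diagonal = [np.array([1, 1], dtype=complex) / np.sqrt(2),   # (|0> + |1>)/sqrt(2)
                np.array([1, -1], dtype=complex) / np.sqrt(2)]  # (|0> - |1>)/sqrt(2)

    for name, basis in (("computational", computational), ("diagonal", diagonal)):
        probs = [abs(b.conj() @ psi) ** 2 for b in basis]
        print(name, [round(p, 3) for p in probs])
    # The two measurements are incompatible, which is why an honest party must choose one
    # of them up front, and why a cheater gains by keeping psi intact until after the challenge.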
11:00am
Security of Quantum Key
Distribution
Eli Biham
Technion
Computer Science Department
Technion city, Haifa 32000
Israel
e-mail: biham@cs.technion.ac.il
Quantum cryptography potentially allows keys to be distributed in an
information-theoretically secure way, which is beyond the abilities of classical cryptography. Security
of quantum key distribution against sophisticated attacks is among the most important
issues in quantum information theory. In this talk we present the basics of quantum key
distribution and discuss its security against collective and joint attacks.
11:45am
Qubits: from theory to
implementation (a brief survey + a case study)
Tal Mor
University of California-Los Angeles
Electrical Engineering Department
Box 951594
Los Angeles, CA 90095-1594
e-mail: talmo@EE.UCLA.EDU
Quantum physics tells us that, in principle, one can manipulate quantum bits (two-level
systems, now called simply qubits). One can store them and send them, initialize
and measure their state, transform their state, each one alone, or a few together.
Unfortunately, quantum physics does not tell us how to do these basic operations, and as
yet, there is no suggestion of a system where all these steps can be done. Recent research
shows that if qubits can be suitably manipulated, then a new form of computer and
information sciences is created, where teleportation, ultra-fast quantum algorithms, and
secure quantum key distribution are the main promises. While performing all the basic
steps is still impossible, a number of specific proposals, where particular simple tasks
can be implemented, have emerged. Many of these caused considerable controversy since
the advantages of using qubits are easily lost when the qubits are not implemented
carefully, as in one of the examples I show here. I shall give a detailed description of
implementing a quantum key distribution scheme. The implemented "qubits'' are usually
not true two-level systems, and as a result, the existing experimental schemes (based on
weak pulses) become totally insecure in the presence of a high channel loss rate! I
will present another potential implementation, qubits generated using a process of
parametric downconversion, and show that it is much more promising.
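A back-of-envelope illustration of the weak-pulse problem (Python; my own numbers, and the photon-number-splitting reading of the insecurity claim is an inference from the related literature rather than a statement taken from the abstract): a weak coherent pulse has Poissonian photon number, so a fixed fraction of the non-empty pulses carries more than one photon, and once the channel loss is high enough all of the detected light could in principle come from those multi-photon pulses, on which an eavesdropper can keep a copy of the encoded state.

    import math

    def photon_number_stats(mu):
        """Poisson photon-number statistics of a weak coherent pulse with mean photon number mu."""
        p0 = math.exp(-mu)            # empty pulse
        p1 = mu * math.exp(-mu)       # exactly one photon
        p_multi = 1 - p0 - p1         # two or more photons
        return p0, p1, p_multi

    for mu in (0.1, 0.2, 0.5):
        p0, p1, p_multi = photon_number_stats(mu)
        # fraction of the non-empty pulses that contain more than one photon
        print(mu, round(p_multi / (1 - p0), 3))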