The essence of the second law of thermodynamics is the statement that all adiabatic processes (slow or violent, reversible or not) can be quantified by a unique entropy function, S, on the equilibrium states of all macroscopic systems, whose increase is a necessary and sufficient condition for such a process to occur. It is one of the few really fundamental physical laws in the sense that no deviation, however tiny, is permitted and its consequences are far reaching. Since the entropy principle is independent of any statistical mechanical model, it ought to be derivable from a few logical principles without recourse to Carnot cycles, ideal gases and other assumptions about such things as 'heat', 'hot' and 'cold', 'temperature', 'reversible processes', etc. Indeed, temperature is a consequence of entropy rather than the other way around. In this lecture on joint work with Jakob Yngvason, the foundations of the subject and the construction of entropy from a few simple, physical principles will be presented. (For background, see: Notices of the Amer. Math. Soc. 45, p.571 (1998), Physics Today 53, p.32 (April 2000) and Physics Reports 310, p.1 (1999).)
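In symbols, the entropy principle described above can be sketched as follows (a schematic statement in the notation of the Lieb-Yngvason papers cited below, where X ≺ Y denotes that the state Y is adiabatically accessible from the state X):

\[
  X \prec Y \quad \Longleftrightarrow \quad S(X) \le S(Y),
\]

that is, there exists a single function S on the equilibrium states of all systems, additive and extensive, whose non-decrease is necessary and sufficient for an adiabatic process from X to Y; S is then unique up to an affine change of scale.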
Public-key infrastructures (PKI) today secure many computer applications. Important examples are email communication, web access, and virtual private networks. The security of PKIs rests essentially on the difficulty of solving certain mathematical computational problems, in particular on the difficulty of finding the prime factorization of natural numbers that are the product of two large primes. It is not known, however, whether such computational problems are really hard. On the contrary, Peter Shor has shown that quantum computers can quickly solve all number-theoretic problems relevant to PKIs.
In this talk I describe ways of constructing alternative cryptographic schemes, for example on the basis of hard problems from algebraic number theory or the geometry of numbers. I discuss to what extent such schemes are resistant to quantum-computer attacks. I also explain how PKIs can be designed so that new cryptographic schemes can easily be integrated into them.
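To illustrate why PKI security hinges on factoring, here is a toy RSA-style sketch (not one of the schemes discussed in the talk; primes and exponents are illustrative, real keys use primes of roughly 1024 bits): an attacker who can factor the modulus n recovers the private key immediately.

```python
# Toy illustration: RSA-style keys whose security rests on factoring n = p * q.
p, q = 1009, 1013                  # toy primes; real keys use ~1024-bit primes
n = p * q
phi = (p - 1) * (q - 1)
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent (Python 3.8+ modular inverse)

m = 123456                         # message, m < n
c = pow(m, e, n)                   # encrypt with the public key (n, e)
assert pow(c, d, n) == m           # decrypt with the private key d

# An attacker who factors n rebuilds the private key at once
# (trial division is feasible only at toy sizes; Shor's algorithm
# would make this step fast on a quantum computer for real sizes):
f = next(i for i in range(2, n) if n % i == 0)
d_attacker = pow(e, -1, (f - 1) * (n // f - 1))
```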
Control of spatiotemporal chaos is one of the central problems of nonlinear dynamics. Recently, we have reported [1] suppression of chemical turbulence by global delayed feedback in the catalytic reaction of CO oxidation on platinum single-crystal surfaces. When the feedback intensity was increased, spiral-wave turbulence was transformed into new intermittent chaotic regimes with cascades of reproducing and annihilating local structures on a background of uniform oscillations. The global feedback further led to the development of cluster patterns and standing waves and to the stabilization of uniform oscillations. These findings are theoretically reproduced in our simulations of the complex Ginzburg-Landau equation with global feedback [2,3] and of the realistic model of the CO oxidation reaction [4].

[1] M. Kim, M. Bertram, M. Pollmann, A. von Oertzen, A.S. Mikhailov, H.H. Rotermund, G. Ertl, Science 292 (2001) 1357
[2] D. Battogtokh, A.S. Mikhailov, Physica D 90 (1996) 84
[3] D. Battogtokh, A. Preusser, A.S. Mikhailov, Physica D 106 (1997) 327
[4] M. Bertram, A.S. Mikhailov, Phys. Rev. E 63 (2001) 066102
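The complex Ginzburg-Landau equation with global delayed feedback mentioned above can be integrated numerically; the following is a minimal 1D explicit-Euler sketch, with all coefficients and parameter values illustrative assumptions rather than the values used in [2,3].

```python
import numpy as np

# Sketch of dA/dt = A + (1+i*c1) A_xx - (1+i*c2)|A|^2 A + mu*exp(i*chi)*<A>(t-tau),
# where <A> is the spatial mean and tau the feedback delay (parameters assumed).
c1, c2 = 2.0, -1.0                 # dispersion coefficients (turbulent regime)
mu, chi, tau = 0.3, 0.0, 0.5       # feedback intensity, phase, delay
L, N, dt, steps = 100.0, 256, 0.01, 2000
dx = L / N

rng = np.random.default_rng(0)
A = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
delay = int(tau / dt)
history = [A.mean()] * (delay + 1)         # past spatial means for the delay term

for _ in range(steps):
    lap = (np.roll(A, 1) + np.roll(A, -1) - 2 * A) / dx**2   # periodic Laplacian
    feedback = mu * np.exp(1j * chi) * history[0]
    A = A + dt * (A + (1 + 1j * c1) * lap
                  - (1 + 1j * c2) * np.abs(A)**2 * A + feedback)
    history.pop(0)
    history.append(A.mean())
```

Varying mu then moves the system through the regimes described in the abstract, from turbulence toward clusters, standing waves, and uniform oscillations.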
In this talk I will describe some analytical problems in Quantum Field Theory (QFT) and some of the recent results and approaches. I will not assume any prior knowledge of the subject, and I will try to show how it arises from Classical Field Theory, i.e. partial differential equations. In other words, I will view QFT as Quantum Mechanics of infinitely many degrees of freedom or of extended objects (strings, surfaces, etc.).
Since any biological cell more advanced in evolution than a bacterium depends in its internal structure and organization on the cytoskeleton, a highly dynamic polymer network within the cell interior, our group strives to understand the physics of the cytoskeleton. We have developed novel laser-based nanomanipulation tools to look into cells and to investigate the cytoskeleton. We examine in particular to what extent changes in the cytoskeleton characterize the progression of cancer from precancer to metastasis, and how the cytoskeleton can be used to control neuronal growth. Our ultimate goal is the development of a tabletop device for quick cancer diagnosis that accomplishes both the earliest possible detection of cancer and the precise determination of its stage; existing techniques fail in both respects. Moreover, our research group plans to build well-controlled circuits of genuine neurons and to develop novel therapies in neuroprosthetics.
Unravelling the mechanisms of energy transfer on a molecular level is one of the central problems of chemical reaction kinetics. Most intriguing from the chemist's point of view is the connection between dynamical and structural properties. Although empirically well established, this relationship leaves many open questions. What are its microscopic foundations? Are there transferable properties of functional groups, and how do they determine the course of chemical reactions?
Modern spectroscopy opens a unique approach to these problems. The key is provided by the interpretation of molecular spectra in terms of explicit quantum-mechanical models of the underlying molecular motion. Studies of OH and NH2 groups in different environments demonstrate how experiment and theory combine to draw a detailed picture of the molecular quantum dynamics. In perfect analogy to the separation of electronic and nuclear motion in the Born-Oppenheimer approximation, characteristic motions of individual structural features are adiabatically separated from the overall system dynamics. This phenomenon of vibrational adiabaticity could play a central role in understanding the microscopic foundations of empirical structure-reactivity relationships.
All of the energy delivered by the sun that reaches the bottom of the atmosphere is absorbed at this varied ground surface. From here, the lowest layer of the atmosphere is supplied with heat. As over a hot plate, regions of updraft form in which warmed air rises, and regions of downdraft in which cold air from higher atmospheric layers replaces the warm air at the ground. Between updraft and downdraft regions, horizontal compensating motions develop, whose small velocities are often superimposed by large-scale wind systems. The varying shape of the underlying surface can also cause the air motion to rise and fall. Ultimately, this provides an energy exchange between the warm ground surface and the cold air lying above it. Holding a sensitive instrument into this flow, one would observe rather irregular behavior of meteorological quantities such as wind speed or air temperature. Such irregularity of the air motion is called turbulence, and the energy transport it induces leads to the formation of a convective turbulent boundary layer.
One attempts to impose some order on the diversity of these small-scale motions with statistical quantities such as means, variances, and covariances. Theories that describe the turbulent energy exchange by statistical means have been tested in many field experiments. They have proven their capability in modeling the turbulent energy exchange within numerical weather-prediction models. Looking more closely at the turbulent energy transport, the turbulence accomplishing the exchange can be resolved into a superposition of a multitude of eddy structures of different sizes. It makes no sense, however, to account for each individual energy-carrying eddy in a weather model if the aim is, for example, to compute the energy exchange between the ground surface and the atmosphere over an entire continent. If the target region is restricted, however, a method that directly computes as many of these energy-carrying structures as possible can be worthwhile. Impossible in the past, such computational methods are now state of the art thanks to the enormous increase in computing power. In meteorology, these numerical methods are known as large-eddy simulations. Large-eddy simulations exhibit the fascinating structures described above, the alternation of updraft and downdraft regions within a convective atmospheric boundary layer, which are easy to imagine but hard to observe in nature.
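The statistical description of turbulence by means, variances, and covariances can be made concrete with a small example (synthetic data, purely illustrative): the vertical turbulent heat flux is proportional to the covariance of the vertical wind fluctuation w' and the temperature fluctuation T', since updrafts carry warm air upward.

```python
import numpy as np

# Synthetic fluctuation time series standing in for instrument data:
# vertical wind w (m/s) and temperature T (K), correlated because warm
# air rises in a convective boundary layer.
rng = np.random.default_rng(1)
n = 10_000
w = rng.standard_normal(n)             # vertical-wind fluctuations
T = 0.5 * w + rng.standard_normal(n)   # temperature fluctuations, cov(w, T) = 0.5

w_mean, T_mean = w.mean(), T.mean()
w_var = np.var(w)                                    # variance of w
wT_cov = np.mean((w - w_mean) * (T - T_mean))        # kinematic heat flux <w'T'>
```

A positive covariance <w'T'> indicates upward turbulent heat transport; this is exactly the quantity that closure theories and large-eddy simulations must reproduce.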
Cells in tissues or body fluids cooperate by means of intricate webs of proteins encoding the behavior of each individual cell. As a metaphor, the working of these networks can be regarded as a language, in which the relative spatial position of proteins follows strict rules to enable the cells to communicate on a common "semantic" basis. To understand the rules by which cells make local collective decisions, it is essential to decipher the underlying protein networks. This can only be done by tracing out these networks directly by means of topological proteomics technology working at the level of each individual cell in intact cell systems, such as tissues. The resulting data pose an enormous challenge to informatics and mathematics approaches related to pattern recognition, matching, interpretation, and modelling.
Because calibrated light curves of thermonuclear (Type Ia) supernovae
have become a major tool to determine the local expansion rate of the
Universe, and also its geometrical structure,
considerable attention has been given to models of these events
over the past couple of years. There are good reasons to believe
that perhaps most Type Ia supernovae are the explosions of white dwarf
stars, consisting mainly of carbon and oxygen, that
have approached the Chandrasekhar mass,
M_Ch ≈ 1.39 M_☉,
and are disrupted by thermonuclear fusion of carbon and oxygen. Recent progress in modeling Type Ia supernovae as well as several of the still open questions are addressed in this talk. Although the main emphasis will be on studies of the explosion mechanism itself and on the related physical processes, including the physics and numerical modeling of turbulent nuclear combustion in degenerate stars, we also discuss observational implications and constraints, including consequences for cosmology.
To assess the function of a biomolecule within cellular processes, one needs information about its three-dimensional spatial structure and about the flexibility of that structure.
The description of the dynamics (and hence the flexibility) of biomolecules leads to multiscale problems in which fast microscales are nonlinearly coupled to slow macroscales. Because of this coupling, the microscale is effectively of great importance for the slow dynamics, i.e., it cannot be trivially averaged or filtered out. On the other hand, the user is often not interested in the details of the microscale: these are merely chemically irrelevant small oscillations of the quasi-rigid molecular frame, whereas the macrodynamics is characterized by transitions between globally distinct shapes of the molecular frame, the so-called conformations of the molecule.
The talk presents a method for the direct computation of these conformations that accounts for the coupling to the microscales without requiring their explicit simulation over macroscopically long time spans. To this end, a description of the problem is first developed within the framework of statistical mechanics, which leads to the construction of a Markov operator describing the transition probabilities between the conformations.
It turns out that the conformations can be determined from the eigenvectors belonging to a cluster of isolated eigenvalues of this operator. The numerical computation of the conformations thus requires a discretization of the eigenvalue problem for this operator, which, owing to the huge number of degrees of freedom, is possible only with the help of a special Monte Carlo method.
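The idea of reading conformations off an eigenvalue cluster can be illustrated on a tiny discretized example (a hand-made 4-state transition matrix, not the talk's actual discretization or Monte Carlo method): two metastable sets produce two eigenvalues near 1, separated by a gap, and the sign structure of the second eigenvector assigns each state to a conformation.

```python
import numpy as np

# Toy reversible transition matrix with two metastable sets {0,1} and {2,3};
# the weak 0.01 link between states 1 and 2 makes switching rare.
P = np.array([
    [0.97, 0.03, 0.00, 0.00],
    [0.03, 0.96, 0.01, 0.00],
    [0.00, 0.01, 0.96, 0.03],
    [0.00, 0.00, 0.03, 0.97],
])

eigvals, eigvecs = np.linalg.eigh(P)     # P is symmetric here, so eigh applies
order = np.argsort(eigvals)[::-1]        # sort eigenvalues in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The isolated cluster of eigenvalues near 1 counts the conformations;
# the sign pattern of the second eigenvector labels the states.
n_conformations = int(np.sum(eigvals > 0.98))
labels = eigvecs[:, 1] > 0
```

For this matrix the spectrum is approximately {1, 0.99, 0.94, 0.93}: two eigenvalues cluster near 1 and the second eigenvector changes sign exactly between the two metastable sets.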
We present a class of constitutive updates for general viscoplastic solids including such aspects of material behavior as finite elastic and plastic deformations, non-Newtonian viscosity, rate-sensitivity and arbitrary flow and hardening rules. The distinguishing characteristic of the proposed constitutive updates is that, by construction, the corresponding incremental stress-strain relations derive from a pseudo-elastic strain-energy density. This in turn confers on the incremental boundary-value problem a variational structure. In particular, the incremental deformation mapping follows from a minimum principle. In crystals exhibiting latent hardening, the energy function is nonconvex and has wells corresponding to single-slip deformations. This favors microstructures consisting locally of single slip. We develop a micromechanical theory of dislocation structures and finite deformation single crystal plasticity based on the direct generation of deformation microstructures and the computation of the attendant effective behavior. Specifically, we aim at describing the lamellar dislocation structures which develop at large strains under monotonic loading. These microstructures are regarded as instances of sequential lamination and treated accordingly. The present approach is based on the explicit construction of microstructures by recursive lamination and their subsequent equilibration in order to relax the incremental constitutive description of the material. The microstructures are permitted to evolve in complexity and fineness with increasing macroscopic deformation. The dislocation structures are deduced from the plastic deformation gradient field by recourse to Kröner's formula for the dislocation density tensor. The theory is rendered nonlocal by the consideration of the self-energy of the dislocations.
Selected examples demonstrate the ability of the theory to generate complex microstructures, determine the softening effect which those microstructures have on the effective behavior of the crystal, and account for the dependence of the effective behavior on the size of the crystalline sample, or size effect. In this last regard, the theory predicts the effective behavior of the crystal to stiffen with decreasing sample size, in keeping with experiment. In contrast to strain-gradient theories of plasticity, the size effect occurs for nominally uniform macroscopic deformations.
Non-linear field equations such as the KPZ equation for deposition
and the Navier-Stokes equation for hydrodynamics are discussed via
the derivation of transport equations for the correlation function
of the field h, where h satisfies a diffusion equation driven by a
noise, f, defined as noise with a given spectrum, and containing a
nonlinear term, Mhh, which couples to the field itself.
In previous work, an equation for the steady-state correlation function
was derived and solved to give a power-law solution in an intermediate
range of k.
In this paper (joint work with Moshe Shwartz), the probability
distribution for the histories of the field is derived, the procedure
having the same relation to the static distribution as Lagrangian
mechanics has to Hamiltonian mechanics. A conservative system has a
static solution exp(-H/kT), but there is no equivalent for the
distribution of histories, so this approach has been little studied.
However, since approximations are essential, the Lagrangian method is
used here, and it is more powerful than the usual Hamiltonian route
via Liouville's equation and the Boltzmann equation.
The approximate equation is derived and solved under the usual conditions.
Quantum Field Theory needs regularization and renormalization. Ideas from Noncommutative Geometry might help to cure these diseases. Three types of deformations are used. We mention matrix geometry and the Fuzzy Sphere, where a regularization is found that respects symmetries. Field theory on noncommutative spaces still has divergences; recent attempts to prove renormalizability are reviewed as well.
Some recent models for inhomogeneous spatial point processes with interaction will be reviewed. The focus is on models derived from homogeneous Markov point processes. For some of the models, the interaction is location dependent. A new type of transformation related model with this property is also suggested. The statistical inference based on likelihood and pseudolikelihood is discussed for the different models. In particular, it is shown that for transformation models, the pseudolikelihood function can be decomposed in a similar fashion as the likelihood function.
Before the review, I will also give a summary of the research at Laboratory for Computational Stochastics.
The evolution of populations under the joint action of mutation and selection is, in the framework of classical population genetics, described by systems of ordinary differential equations. These equations carry over to molecular evolution if alleles are identified with sequences and a suitable mutation model is specified. The resulting systems are, however, very large and hard to treat.
Matters are simplified by a connection to statistical physics. It may be shown that the mutation-reproduction matrix of the evolution model is exactly equivalent to the Hamiltonian of an Ising quantum chain. Here, the mutation rate corresponds to the temperature, and the fitness of a sequence may be identified with the interaction energy of the spins within the chain. Hence, the methods of statistical physics may be used to diagonalize the mutation-reproduction matrix, and thus solve the evolution model exactly. However, the quantum-mechanical states do not translate directly into the probabilities of the evolution model, since they rely on the quantum-mechanical (as opposed to classical) probability concept; here, the methods require some modification.
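A schematic form of the mapping described above (notation assumed here, not taken verbatim from the talk): for biallelic sequences of length $N$ with site mutation rate $\mu$ and fitness function $f$, the mutation-reproduction matrix can be written in terms of Pauli matrices as

\[
  M + R \;=\; \sum_{i=1}^{N} \mu\left(\sigma_x^{(i)} - \mathbf{1}\right)
  \;+\; f\!\left(\sigma_z^{(1)},\dots,\sigma_z^{(N)}\right),
\]

where the $\sigma_x$ terms flip individual sites (mutation) and the diagonal fitness term plays the role of the spin interaction energy. Up to sign and constants this is the Hamiltonian of an Ising quantum chain in a transverse field, with $\mu$ in the role of the temperature-like field, as stated in the abstract.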
The secondary structures of nucleic acids provide a unique computer model for investigating the most important aspects of their structural and evolutionary biology. Secondary structures, defined as the lists of base pairing contacts in RNA or DNA molecules, are a coarse-grained representation of the 3D structures; nevertheless they capture many important features of the molecules. The existence of efficient algorithms for solving the folding problem, i.e., for predicting the secondary structure given only the sequence, allows a detailed analysis of the model by means of computer simulations. The notion of a "landscape" underlies both the structure formation (folding) and the (in vitro) evolution of RNA.
Evolutionary adaptation may be seen as hill climbing process on a fitness landscape which is determined by the phenotype of the RNA molecule (within the model this is its secondary structure) and the selection constraints acting on the molecules. We find that a substantial fraction of point mutations do not change an RNA secondary structure. On the other hand, a comparable fraction of mutations leads to very different structures. This interplay of smoothness and ruggedness (or robustness and sensitivity) is a generic feature of both RNA and protein sequence-structure maps. Its consequences, "shape space covering" and "neutral networks" are inherited by the fitness landscapes and determine the dynamics of RNA evolution. Punctuated equilibria at phenotype level and a diffusion-like evolution of the underlying genotypes are a characteristic feature of such models.
The folding dynamics of a particular RNA molecule can also be studied in a meaningful way based on secondary structures. Given an RNA sequence, we consider the energy landscape formed by all possible conformations (secondary structures). A straightforward implementation of the Metropolis algorithm is sufficient to produce quite realistic folding kinetics, allowing one to identify metastable states and folding pathways. Just as in the protein case, there are good and bad folders, which can be distinguished by the properties of their energy landscapes.
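The Metropolis approach to folding kinetics can be sketched on a toy energy landscape (a hand-made 1D landscape, not an actual RNA secondary-structure model): a metastable minimum traps the walker before it crosses a barrier to the ground state, mimicking a meta-stable folding intermediate.

```python
import math
import random

# Toy landscape: metastable minimum at x = 2 (E = 1), barrier at x = 4 (E = 6),
# ground state at x = 8 (E = 0).  States are x = 0..9.
energies = [5, 3, 1, 4, 6, 4, 2, 1, 0, 5]
BETA = 1.0                                   # inverse temperature

def metropolis_step(x, rng):
    """Propose a move to a neighbor; accept with probability min(1, e^{-beta*dE})."""
    y = x + rng.choice([-1, 1])
    if not 0 <= y < len(energies):
        return x                             # reflecting boundaries
    dE = energies[y] - energies[x]
    if dE <= 0 or rng.random() < math.exp(-BETA * dE):
        return y
    return x

rng = random.Random(0)
x, visits = 0, [0] * len(energies)
for _ in range(100_000):
    x = metropolis_step(x, rng)
    visits[x] += 1
# After the barrier is crossed, the ground state accumulates the most visits.
```

Histograms like `visits` are exactly what distinguishes good from bad folders in such models: a good folder reaches and occupies its ground state quickly, a bad one stays trapped in metastable minima.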