1 \chapter{Basic principles of utilized simulation techniques}
4 In the following the simulation methods used within the scope of this study are introduced.
5 Enabling the investigation of the evolution of structure on the atomic scale, molecular dynamics (MD) simulations are chosen for modeling the behavior and precipitation of C introduced into an initially crystalline Si environment.
To be able to model systems with a large number of atoms, computationally efficient classical potentials are most often used in MD studies to describe the interaction of the atoms.
7 For reasons of flexibility in executing this non-standard task and in order to be able to use a novel interaction potential \cite{albe_sic_pot} an appropriate MD code called {\textsc posic}\footnote{{\textsc posic} is an abbreviation for {\bf p}recipitation {\bf o}f {\bf SiC}}\footnote{Source code: http://www.physik.uni-augsburg.de/\~{}zirkelfr/posic/posic.tar.bz2} including a library collecting respective MD subroutines was developed from scratch.
8 The basic ideas of MD in general and the adopted techniques as implemented in {\textsc posic} in particular are outlined in section \ref{section:md}, while the functional form and derivative of the employed classical potential is presented in appendix \ref{app:d_tersoff}.
9 An overview of the most important tools within the MD package is given in appendix \ref{app:code}.
Although classical potentials are often successful and at the same time computationally efficient in calculating some physical properties of a particular system, not all of its properties are necessarily described correctly due to the neglect of quantum-mechanical effects.
Thus, in order to obtain more accurate results, quantum-mechanical calculations from first principles based on density functional theory (DFT) were performed.
12 The Vienna {\em ab initio} simulation package ({\textsc vasp}) \cite{kresse96} is used for this purpose.
13 The relevant basics of DFT are described in section \ref{section:dft} while an overview of utilities mainly used to create input or parse output data of {\textsc vasp} is given in appendix \ref{app:code}.
14 The gain in accuracy achieved by this method, however, is accompanied by an increase in computational effort constraining the simulated system to be much smaller in size.
15 Thus, investigations based on DFT are restricted to single defects or combinations of two defects in a rather small Si supercell, their structural relaxation as well as some selected diffusion processes.
Next to the structure, defects can be characterized by the defect formation energy, a scalar quantity indicating the energetic cost of forming the defect, which is explained in section \ref{section:basics:defects}.
17 The method used to investigate migration pathways to identify the prevalent diffusion mechanism is introduced in section \ref{section:basics:migration} and modifications to the {\textsc vasp} code implementing this method are presented in appendix \ref{app:patch_vasp}.
19 \section{Molecular dynamics simulations}
26 \dq We may regard the present state of the universe as the effect of the past and the cause of the future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes.\dq{}
28 {\em Marquis Pierre Simon de Laplace, 1814.} \cite{laplace}
33 Pierre Simon de Laplace phrased this vision in terms of a controlling, omniscient instance - the {\em Laplace demon} - which would be able to look into the future as well as into the past due to the deterministic nature of processes, governed by the solution of differential equations.
34 Although Laplace's vision is nowadays corrected by chaos theory and quantum mechanics, it expresses two main features of classical mechanics, the determinism of processes and time reversibility of the fundamental equations.
This deterministic picture constitutes one of the conceptual roots of molecular dynamics simulations, which consider an isolated system of particles whose behaviour is fully determined by the solution of the classical equations of motion.
37 \subsection{Introduction to molecular dynamics simulations}
Molecular dynamics (MD) simulation is a technique to compute the time evolution of a system of particles, referred to as molecules, i.e. their positions, velocities and the forces acting among them.
40 The MD method was first introduced by Alder and Wainwright in 1957 \cite{alder57,alder59} to study the interactions of hard spheres.
The approach is based on Newton's equations of motion, which describe the many-body system classically.
42 MD is the numerical way of solving the $N$-body problem which cannot be solved analytically for $N>3$.
A potential describing the interaction of the particles is required.
MD thus provides a complete description of the system on the microscopic level in the sense of classical mechanics.
45 The microscopic information can then be translated to macroscopic observables by means of statistical mechanics.
47 The basic idea is to assume that the particles can be described classically by Newton's equations of motion, which are integrated numerically.
48 A system of $N$ particles of masses $m_i$ ($i=1,\ldots,N$) at positions ${\bf r}_i$ and velocities $\dot{{\bf r}}_i$ is given by
50 %m_i \frac{d^2}{dt^2} {\bf r}_i = {\bf F}_i \Leftrightarrow
51 %m_i \frac{d}{dt} {\bf r}_i = {\bf p}_i\textrm{ , } \quad
52 %\frac{d}{dt} {\bf p}_i = {\bf F}_i\textrm{ .}
m_i \ddot{{\bf r}}_i = {\bf F}_i \Leftrightarrow
m_i \dot{{\bf r}}_i = {\bf p}_i\textrm{, }
\dot{{\bf p}}_i = {\bf F}_i\textrm{ .}
57 The forces ${\bf F}_i$ are obtained from the potential energy $U(\{{\bf r}\})$:
{\bf F}_i = - \nabla_{{\bf r}_i} U(\{{\bf r}\}) \, \textrm{.}
61 Given the initial conditions ${\bf r}_i(t_0)$ and $\dot{{\bf r}}_i(t_0)$ the equations can be integrated by a certain integration algorithm.
62 The solution of these equations provides the complete information of a system evolving in time.
63 The following sections cover the tools of the trade necessary for the MD simulation technique.
64 Three ingredients are required for a MD simulation:
66 \item A model for the interaction between system constituents is needed.
67 Interaction potentials and their accuracy for describing certain systems of elements will be outlined in section \ref{subsection:interact_pot}.
\item An integrator is needed, which propagates the particle positions and velocities from time $t$ to $t+\delta t$, realized by a finite difference scheme which advances the trajectories discretely in time.
69 This is explained in section \ref{subsection:integrate_algo}.
70 \item A statistical ensemble has to be chosen, which allows certain thermodynamic quantities to be controlled or to stay constant.
71 This is discussed in section \ref{subsection:statistical_ensembles}.
These ingredients are outlined in the following; a schematic sketch of how they interact in the MD main loop is given below.
74 The discussion is restricted to methods employed within this study.
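To make the interplay of these three ingredients explicit, the following sketch shows the structure of a generic MD main loop. It is purely illustrative and not the {\textsc posic} implementation; the force routine, the integrator and the ensemble control are filled with trivial placeholders (non-interacting particles, velocity Verlet, plain $NVE$), which stand for the methods discussed in the sections referenced above.

\begin{verbatim}
# Skeleton of a generic MD run (illustrative sketch, not the posic code).
import numpy as np

def forces(r):                     # ingredient 1: interaction potential
    return np.zeros_like(r)        # placeholder: non-interacting particles

def integrate(r, v, f, m, dt):     # ingredient 2: integrator (velocity Verlet)
    r_new = r + dt*v + 0.5*dt**2*f/m
    f_new = forces(r_new)
    v_new = v + 0.5*dt*(f + f_new)/m
    return r_new, v_new, f_new

def ensemble_control(v):           # ingredient 3: statistical ensemble (here: NVE)
    return v                       # placeholder: no thermostat/barostat applied

r = np.zeros((8, 3))               # positions of 8 particles
v = np.random.rand(8, 3) - 0.5     # random initial velocities
m, dt = 1.0, 1.0e-3                # mass and time step in arbitrary units
f = forces(r)
for step in range(100):
    r, v, f = integrate(r, v, f, m, dt)
    v = ensemble_control(v)
\end{verbatim}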
76 \subsection{Interaction potentials for silicon and carbon}
77 \label{subsection:interact_pot}
79 The potential energy of $N$ interacting atoms can be written in the form
U(\{{\bf r}\}) = \sum_i U_1({\bf r}_i) + \sum_i \sum_{j>i} U_2({\bf r}_i,{\bf r}_j) + \sum_i \sum_{j>i} \sum_{k>j>i} U_3({\bf r}_i,{\bf r}_j,{\bf r}_k) + \ldots
83 where $U$ is the total potential energy.
84 $U_1$ is a single particle potential describing external forces.
85 Examples of single particle potentials are the gravitational force or an electric field.
86 $U_2$ is a two body pair potential which only depends on the distance $r_{ij}$ between the two atoms $i$ and $j$.
87 If not only pair potentials are considered, three body potentials $U_3$ or multi body potentials $U_n$ can be included.
88 Usually these higher order terms are avoided since they are not easy to model and it is rather time consuming to evaluate potentials and forces originating from these many body terms.
89 Ordinary pair potentials have a close-packed structure like face-centered cubic (FCC) or hexagonal close-packed (HCP) as a ground state.
90 A pair potential is, thus, unable to describe properly elements with other structures than FCC or HCP.
Silicon and carbon, for instance, crystallize in the diamond structure (SiC in the zincblende structure) with four covalently bonded neighbors, which is far from a close-packed arrangement.
92 A three body potential has to be included for these types of elements.
94 \subsubsection{The Tersoff bond order potential}
96 Tersoff proposed an empirical interatomic potential for covalent systems \cite{tersoff_si1,tersoff_si2}.
97 The Tersoff potential explicitly incorporates the dependence of bond order on local environments, permitting an improved description of covalent materials.
Due to the covalent character of the bonding, Tersoff restricted the interaction to nearest neighbor atoms, which is accompanied by an increase in computational efficiency for the evaluation of forces and energy based on this short-range potential.
99 Tersoff applied the potential to silicon \cite{tersoff_si1,tersoff_si2,tersoff_si3}, carbon \cite{tersoff_c} and also to multicomponent systems like silicon carbide \cite{tersoff_m}.
100 The basic idea is that, in real systems, the bond order, i.e. the strength of the bond, depends upon the local environment \cite{abell85}.
101 Atoms with many neighbors form weaker bonds than atoms with only a few neighbors.
Although the bond strength intricately depends on geometry, the focus on coordination, i.e. the number of neighbors forming bonds, is well motivated qualitatively from basic chemistry, since for every additionally formed bond the number of electron pairs per bond and, thus, the strength of the individual bonds decreases.
103 If the energy per bond decreases rapidly enough with increasing coordination the most stable structure will be the dimer.
In the other extreme, if the dependence is weak, the material system will end up in a close-packed structure in order to maximize the number of bonds and thereby minimize the total energy.
This suggests that the bond order is a monotonically decreasing function of the coordination, the equilibrium coordination being determined by the balance of bond strength and number of bonds.
Based on pseudopotential theory, the bond order term $b_{ijk}$ limiting the attractive pair interaction is of the form $b_{ijk}\propto Z^{-\delta}$, where $Z$ is the coordination number and $\delta$ a constant \cite{abell85}, which is $\frac{1}{2}$ in the second-moment approximation within the tight-binding scheme \cite{horsfield96}.
108 Tersoff incorporated the concept of bond order in a three-body potential formalism.
109 The interatomic potential is taken to have the form
111 E & = & \sum_i E_i = \frac{1}{2} \sum_{i \ne j} V_{ij} \textrm{ ,} \\
112 V_{ij} & = & f_C(r_{ij}) [ f_R(r_{ij}) + b_{ij} f_A(r_{ij}) ] \textrm{ .}
114 $E$ is the total energy of the system, constituted either by the sum over the site energies $E_i$ or by the bond energies $V_{ij}$.
115 The indices $i$ and $j$ correspond to the atoms of the system with $r_{ij}$ being the distance from atom $i$ to atom $j$.
116 The functions $f_R$ and $f_A$ represent a repulsive and an attractive pair potential.
The repulsive part is due to the orthogonalization energy of overlapping atomic wave functions.
118 The attractive part is associated with the bonding.
120 f_R(r_{ij}) & = & A_{ij} \exp (- \lambda_{ij} r_{ij} ) \\
121 f_A(r_{ij}) & = & -B_{ij} \exp (- \mu_{ij} r_{ij} )
The function $f_C$ is a cutoff function limiting the range of interaction to nearest neighbors.
It is designed to provide a smooth transition of the potential between the distances $R_{ij}$ and $S_{ij}$.
126 f_C(r_{ij}) = \left\{
128 1, & r_{ij} < R_{ij} \\
129 \frac{1}{2} + \frac{1}{2} \cos \Big[ \pi (r_{ij} - R_{ij})/(S_{ij} - R_{ij}) \Big], & R_{ij} < r_{ij} < S_{ij} \\
As discussed above, $b_{ij}$ represents a measure of the bond order, monotonically decreasing with the coordination of atoms $i$ and $j$.
137 b_{ij} & = & \chi_{ij} (1 + \beta_i^{n_i} \zeta^{n_i}_{ij})^{-1/2n_i} \\
138 \zeta_{ij} & = & \sum_{k \ne i,j} f_C (r_{ik}) \omega_{ik} g(\theta_{ijk}) \\
139 g(\theta_{ijk}) & = & 1 + c_i^2/d_i^2 - c_i^2/[d_i^2 + (h_i - \cos \theta_{ijk})^2]
141 where $\theta_{ijk}$ is the bond angle between bonds $ij$ and $ik$.
142 This is illustrated in Figure \ref{img:tersoff_angle}.
145 \includegraphics[width=8cm]{tersoff_angle.eps}
147 \caption{Angle between bonds of atoms $i,j$ and $i,k$.}
148 \label{img:tersoff_angle}
150 The angular dependence does not give a fixed minimum angle between bonds since the expression is embedded inside the bond order term.
151 The relation to the above discussed bond order potential becomes obvious if $\chi=1, \beta=1, n=1, \omega=1$ and $c=0$.
152 Parameters with a single subscript correspond to the parameters of the elemental system \cite{tersoff_si3,tersoff_c} while the mixed parameters are obtained by interpolation from the elemental parameters by the arithmetic or geometric mean.
The elemental parameters were obtained by fitting to the cohesive energies of real and hypothetical bulk structures as well as to the bulk modulus and bond length of the diamond structure.
New parameters for the mixed system are $\chi$, which is used to fine-tune the strength of heteropolar bonds, and $\omega$, which is set to one for the C-Si interaction but is available as a feature to permit the application of the potential to more drastically different types of atoms in the future.
156 The force acting on atom $i$ is given by the derivative of the potential energy.
For a three body potential ($V_{ij} \neq V_{ji}$) the derivative is of the form
159 \nabla_{{\bf r}_i} E = \frac{1}{2} \big[ \sum_j ( \nabla_{{\bf r}_i} V_{ij} + \nabla_{{\bf r}_i} V_{ji} ) + \sum_k \sum_j \nabla_{{\bf r}_i} V_{jk} \big] \textrm{ .}
161 The force is then given by
{\bf F}_i = - \nabla_{{\bf r}_i} E \textrm{ .}
165 Details of the Tersoff potential derivative are presented in appendix \ref{app:d_tersoff}.
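For illustration, the evaluation of the individual Tersoff energy terms for a single bond can be sketched as follows. The parameter values below are placeholders chosen only to make the sketch self-contained, not the published parameters of the cited references, and $\chi$ and $\omega$ are simply set to one in the sketch.

\begin{verbatim}
# Evaluation of the Tersoff energy terms for one bond i-j (illustrative
# sketch; parameter values are placeholders, not the published parameters).
import numpy as np

A, B     = 1.0e3, 5.0e2   # repulsive/attractive prefactors (eV), placeholders
lam, mu  = 2.5, 1.7       # decay constants (1/A), placeholders
R, S     = 2.7, 3.0       # cutoff interval (A), placeholders
beta, n  = 1.0e-6, 0.8    # bond order parameters, placeholders
c, d, h  = 1.0e5, 16.0, -0.6

def f_C(r):               # cutoff function
    if r < R: return 1.0
    if r > S: return 0.0
    return 0.5 + 0.5*np.cos(np.pi*(r - R)/(S - R))

def f_R(r): return  A*np.exp(-lam*r)          # repulsive pair term
def f_A(r): return -B*np.exp(-mu*r)           # attractive pair term

def g(theta):                                 # angular function
    return 1.0 + c**2/d**2 - c**2/(d**2 + (h - np.cos(theta))**2)

def zeta(r_ik, theta_ijk):                    # effective coordination of atom i
    return sum(f_C(r)*g(t) for r, t in zip(r_ik, theta_ijk))

def b(z):                                     # bond order term (chi = omega = 1)
    return (1.0 + (beta*z)**n)**(-1.0/(2.0*n))

def V(r_ij, z):                               # bond energy V_ij
    return f_C(r_ij)*(f_R(r_ij) + b(z)*f_A(r_ij))

# example: bond of length 2.35 A with two neighbors k at 2.35 A and 109.47 deg
E_bond = V(2.35, zeta([2.35, 2.35], [np.radians(109.47)]*2))
\end{verbatim}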
167 \subsubsection{Improved analytical bond order potential}
169 Although the Tersoff potential is one of the most widely used potentials there are some shortcomings.
Describing the Si-Si interaction, Tersoff was unable to find a single parameter set that describes both bulk and surface properties well.
171 Due to this and since the first approach labeled T1 \cite{tersoff_si1} turned out to be unstable \cite{dodson87}, two further parametrizations exist, T2 \cite{tersoff_si2} and T3 \cite{tersoff_si3}.
172 While T2 describes well surface properties, T3 yields improved elastic constants and should be used for describing bulk properties.
173 However, T3, which is used in the Si/C potential, suffers from an underestimation of the dimer binding energy.
174 Similar behavior is found for the C-C interaction.
176 For this reason, Erhart and Albe provide a reparametrization of the Tersoff potential based on three independently fitted parameter sets for the Si-Si, C-C and Si-C interaction \cite{albe_sic_pot}.
177 The functional form is similar to the one proposed by Tersoff.
178 Differences in the energy functional and the force evaluation routine are pointed out in appendix \ref{app:d_tersoff}.
179 Concerning Si the elastic properties of the diamond phase as well as the structure and energetics of the dimer are reproduced very well.
The new parameter set for the C-C interaction yields improved dimer properties while at the same time delivering a description of the bulk phase similar to that of the Tersoff potential.
The potential succeeds in describing low- as well as high-coordinated structures.
182 The description of elastic properties of SiC is improved with respect to the potentials available in literature.
Defect properties are only fairly well reproduced, but the description is comparable to that of previously published potentials.
184 It is claimed that the potential enables modeling of widely different configurations and transitions among these and has recently been used to simulate the inert gas condensation of Si-C nanoparticles \cite{erhart04}.
185 Therefore the Erhart/Albe (EA) potential is considered the superior analytical bond order potential to study the SiC precipitation and associated processes in Si.
187 \subsection{Verlet integration}
188 \label{subsection:integrate_algo}
190 A numerical method to integrate Newton's equation of motion was presented by Verlet in 1967 \cite{verlet67}.
The idea of the so-called Verlet algorithm and of a variant, the velocity Verlet algorithm, which additionally generates the velocities directly, is explained in the following.
The starting point is the Taylor expansion of the particle positions at times $t+\delta t$ and $t-\delta t$
194 \vec{r}_i(t+\delta t)=
195 \vec{r}_i(t)+\delta t\vec{v}_i(t)+\frac{\delta t^2}{2m_i}\vec{f}_i(t)+
196 \frac{\delta t^3}{6}\vec{b}_i(t) + \mathcal{O}(\delta t^4)
197 \label{basics:verlet:taylor1}
200 \vec{r}_i(t-\delta t)=
201 \vec{r}_i(t)-\delta t\vec{v}_i(t)+\frac{\delta t^2}{2m_i}\vec{f}_i(t)-
202 \frac{\delta t^3}{6}\vec{b}_i(t) + \mathcal{O}(\delta t^4)
203 \label{basics:verlet:taylor2}
where $\vec{v}_i=\frac{d}{dt}\vec{r}_i$ are the velocities, $\vec{f}_i=m_i\frac{d^2}{dt^2}\vec{r}_i$ are the forces and $\vec{b}_i=\frac{d^3}{dt^3}\vec{r}_i$ are the third derivatives of the particle positions with respect to time.
The Verlet algorithm is obtained by adding and subtracting equations \eqref{basics:verlet:taylor1} and \eqref{basics:verlet:taylor2}
208 \vec{r}_i(t+\delta t)=
209 2\vec{r}_i(t)-\vec{r}_i(t-\delta t)+\frac{\delta t^2}{m_i}\vec{f}_i(t)+
210 \mathcal{O}(\delta t^4)
213 \vec{v}_i(t)=\frac{1}{2\delta t}[\vec{r}_i(t+\delta t)-\vec{r}_i(t-\delta t)]+
214 \mathcal{O}(\delta t^3)
216 the truncation error of which is of order $\delta t^4$ for the positions and $\delta t^3$ for the velocities.
The velocities, although not needed to update the particle positions, are not determined synchronously with the positions but lag behind by one step of discretization.
218 The Verlet algorithm can be rewritten into an equivalent form, which updates the velocities and positions in the same step.
219 The so-called velocity Verlet algorithm is obtained by combining \eqref{basics:verlet:taylor1} with equation \eqref{basics:verlet:taylor2} displaced in time by $+\delta t$
221 \vec{v}_i(t+\delta t)=
222 \vec{v}_i(t)+\frac{\delta t}{2m_i}[\vec{f}_i(t)+\vec{f}_i(t+\delta t)]
225 \vec{r}_i(t+\delta t)=
226 \vec{r}_i(t)+\delta t\vec{v}_i(t)+\frac{\delta t^2}{2m_i}\vec{f}_i(t) \text{ .}
Since the forces at the new positions are required to update the velocities, the force evaluation has to be carried out within the integration step.
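As a minimal illustration of the velocity Verlet scheme, the following sketch integrates a one-dimensional harmonic oscillator (a hypothetical toy system with unit mass and spring constant); the force at the new position is evaluated within the integration step, as required for the velocity update, and the total energy is approximately conserved.

\begin{verbatim}
# Velocity Verlet integration of a 1D harmonic oscillator (illustrative
# sketch; units and parameters are arbitrary).
m, k, dt = 1.0, 1.0, 0.01          # mass, spring constant, time step

def force(x):
    return -k*x

x, v = 1.0, 0.0                    # initial conditions
f = force(x)
for step in range(1000):
    x = x + dt*v + 0.5*dt**2*f/m   # position update
    f_new = force(x)               # force at the new position
    v = v + 0.5*dt*(f + f_new)/m   # velocity update
    f = f_new

energy = 0.5*m*v**2 + 0.5*k*x**2   # approximately conserved (NVE)
\end{verbatim}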
230 \subsection{Statistical ensembles}
231 \label{subsection:statistical_ensembles}
Using the above mentioned algorithms, the most basic type of MD is realized by simply integrating the equations of motion of a fixed number of particles ($N$) in a fixed volume $V$, implemented by periodic boundary conditions (PBC).
Provided a stable integration algorithm, the total energy $E$, i.e. the sum of the kinetic and configurational energy of the particles, is conserved.
235 This is known as the $NVE$, or microcanonical ensemble, describing an isolated system composed of microstates, among which the number of particles, volume and energy are held constant.
237 However, the successful formation of SiC dictates precise control of temperature by external heating.
238 While the temperature of such a system is well defined, the energy is no longer conserved.
The microscopic states of a system, which is in thermal equilibrium with an external heat bath, are represented by the $NVT$ ensemble.
In the so-called canonical ensemble the temperature $T$ is related to the expectation value of the kinetic energy of the particles, i.e.
242 \langle E_{\text{kin}}\rangle = \frac{3}{2}Nk_{\text{B}}T \text{, }
243 E_{\text{kin}}=\sum_i \frac{\vec{p}^2_i}{2m_i} \text{ .}
246 The volume of the synthesized material can hardly be controlled in experiment.
247 Instead the pressure can be adjusted.
If the pressure is held constant in addition to the temperature, the states of the system are represented by the isothermal-isobaric $NpT$ ensemble.
249 The expression for the pressure of a system derived from the equipartition theorem is given by
pV=Nk_{\text{B}}T+\langle W\rangle\text{, }W=-\frac{1}{3}\sum_i\vec{r}_i\cdot\nabla_{\vec{r}_i}U
255 where $W$ is the virial and $U$ is the configurational energy.
257 Berendsen~et~al.~\cite{berendsen84} proposed a method, which is easy to implement, to couple the system to an external bath with constant temperature $T_0$ or pressure $p_0$ with adjustable time constants $\tau_T$ and $\tau_p$ determining the strength of the coupling.
258 Control of the respective variable is based on the relations given in equations \eqref{eq:basics:ts} and \eqref{eq:basics:ps}.
259 The thermostat is achieved by scaling the velocities of all atoms in every time step $\delta t$ from $\vec{v}_i$ to $\lambda \vec{v}_i$, with
261 \lambda=\left[1+\frac{\delta t}{\tau_T}(\frac{T_0}{T}-1)\right]^\frac{1}{2}
264 where $T$ is the current temperature according to equation \eqref{eq:basics:ts}.
265 The barostat adjusts the pressure by changing the virial through scaling of the particle positions $\vec{r}_i$ to $\mu \vec{r}_i$ and the volume $V$ to $\mu^3 V$, with
267 \mu=\left[1-\frac{\beta\delta t}{\tau_p}(p_0-p)\right]^\frac{1}{3}\text{ ,}
269 where $\beta$ is the isothermal compressibility and $p$ corresponds to the current pressure, which is determined by equation \eqref{eq:basics:ps}.
271 Using this method the system does not behave like a true $NpT$ ensemble.
272 On average $T$ and $p$ correspond to the expected values.
273 For large enough time constants, i.e. $\tau > 100 \delta t$, the method shows realistic fluctuations in $T$ and $p$.
274 The advantage of the approach is that the coupling can be decreased to minimize the disturbance of the system and likewise be adjusted to suit the needs of a given application.
275 It provides a stable algorithm that allows smooth changes of the system to new values of temperature or pressure, which is ideal for the investigated problem.
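The velocity and position rescaling described above may be sketched as follows; this is an illustrative sketch only, the function names and unit choices are assumptions and it does not reflect the {\textsc posic} implementation.

\begin{verbatim}
# Berendsen-type coupling: velocity rescaling (thermostat) and rescaling of
# positions and cell (barostat); illustrative sketch, not the posic code.
import numpy as np

kB = 8.617e-5                              # Boltzmann constant in eV/K

def berendsen_thermostat(v, m, T0, dt, tau_T):
    N = len(v)
    T = np.sum(m*np.sum(v**2, axis=1))/(3.0*N*kB)   # from <E_kin> = 3/2 N kB T
    lam = np.sqrt(1.0 + dt/tau_T*(T0/T - 1.0))
    return lam*v

def berendsen_barostat(r, box, p, p0, dt, tau_p, beta_T):
    # beta_T: isothermal compressibility; positions scale with mu, volume with mu**3
    mu = (1.0 - beta_T*dt/tau_p*(p0 - p))**(1.0/3.0)
    return mu*r, mu*box
\end{verbatim}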
\section{Density functional theory}
Dirac declared that chemistry had come to an end, its content being entirely contained in the powerful equation published by Schr\"odinger in 1926 \cite{schroedinger26}, marking the beginning of wave mechanics.
Following the path of Schr\"odinger, the problem of quantum-mechanically modeling the many-body system, i.e. a system of a large number of interacting particles, is manifested in the high-dimensional Schr\"odinger equation for the wave function $\Psi({\vec{R}},{\vec{r}})$, which depends on the coordinates of all nuclei and electrons.
282 The Schr\"odinger equation contains the kinetic energy of the ions and electrons as well as the electron-ion, ion-ion and electron-electron interaction.
283 This cannot be solved exactly and finding approximate solutions requires several layers of simplification in order to reduce the number of free parameters.
Approximations that consider a truncated Hilbert space of single-particle orbitals yield promising results; however, with increasing complexity and demand for high accuracy the number of Slater determinants to be evaluated increases massively.
286 In contrast, instead of using the description by the many-body wave function, the key point in density functional theory (DFT) is to recast the problem to a description utilizing the charge density $n(\vec{r})$, which constitutes a quantity in real space depending only on the three spatial coordinates.
287 In the following sections the basic idea of DFT will be outlined.
288 As will be shown, DFT can formally be regarded as an exactification of the Thomas Fermi theory \cite{thomas27,fermi27} and the self-consistent Hartree equations \cite{hartree28}.
289 A nice review is given in the Nobel lecture of Kohn \cite{kohn99}, one of the inventors of DFT.
291 \subsection{Born-Oppenheimer approximation}
293 Born and Oppenheimer proposed a simplification enabling the effective decoupling of the electronic and ionic degrees of freedom \cite{born27}.
Within the Born-Oppenheimer (BO) approximation the light electrons are assumed to move much faster and, thus, to follow the motion of the heavy nuclei adiabatically, provided the latter are only slightly deflected from their equilibrium positions.
295 Thus, on the timescale of electronic motion the ions appear at fixed positions.
The other way round, on the timescale of nuclear motion the electrons appear blurred in space, adding an extra term to the ion-ion potential.
297 The simplified Schr\"odinger equation no longer contains the kinetic energy of the ions.
298 The momentary positions of the ions enter as fixed parameters and, therefore, the ion-ion interaction may be regarded as a constant added to the electronic energies.
299 The Schr\"odinger equation describing the remaining electronic problem reads
301 \left[-\frac{\hbar^2}{2m}\sum_j\nabla^2_j-
302 \sum_{j,l} \frac{Z_le^2}{|\vec{r}_j-\vec{R}_l|}+
303 \frac{1}{2}\sum_{j\neq j'}\frac{e^2}{|\vec{r}_j-\vec{r}_{j'}|}
304 \right] \Psi = E \Psi
307 where $Z_l$ are the atomic numbers of the nuclei and $\Psi$ is the many-electron wave function, which depends on the positions and spins of the electrons.
308 Accordingly, there is only a parametrical dependence on the ionic coordinates $\vec{R}_l$.
However, the remaining number of free parameters is still too high and needs to be further decreased.
311 \subsection{Hohenberg-Kohn theorem and variational principle}
313 Investigating the energetics of Cu$_x$Zn$_{1-x}$ alloys, which for different compositions exhibit different transfers of charge between the Cu and Zn unit cells due to their chemical difference and, thus, varying electrostatic interactions contributing to the total energy, the work of Hohenberg and Kohn had a natural focus on the distribution of charge.
Although it was clear that the Thomas Fermi (TF) theory only provides a rough approximation to the exact solution of the many-electron Schr\"odinger equation, the theory was of high interest since it provides an implicit relation between the potential and the electron density distribution.
This raised the question of how to establish a connection between TF theory, expressed in terms of $n(\vec{r})$, and the exact Schr\"odinger equation, expressed in terms of the many-electron wave function $\Psi({\vec{r}})$, and whether a complete description in terms of the charge density is possible in principle.
316 The answer to this question, whether the charge density completely characterizes a system, became the starting point of modern DFT.
Considering a system with a nondegenerate ground state, there is obviously only one ground-state charge density $n_0(\vec{r})$ that corresponds to a given potential $V(\vec{r})$.
319 In 1964 Hohenberg and Kohn showed the opposite and far less obvious result \cite{hohenberg64}.
320 Employing no more than the Rayleigh-Ritz minimal principle it is concluded by {\em reductio ad absurdum} that for a nondegenerate ground state the same charge density cannot be generated by different potentials.
321 Thus, the charge density of the ground state $n_0(\vec{r})$ uniquely determines the potential $V(\vec{r})$ and, consequently, the full Hamiltonian and ground-state energy $E_0$.
322 In mathematical terms the full many-electron ground state is a unique functional of the charge density.
In particular, $E$ is a functional $E[n(\vec{r})]$ of $n(\vec{r})$.
325 The ground-state charge density $n_0(\vec{r})$ minimizes the energy functional $E[n(\vec{r})]$, its value corresponding to the ground-state energy $E_0$, which enables the formulation of a minimal principle in terms of the charge density \cite{hohenberg64,levy82}
327 E_0=\min_{n(\vec{r})}
329 F[n(\vec{r})] + \int n(\vec{r}) V(\vec{r}) d\vec{r}
332 \label{eq:basics:hkm}
334 where $F[n(\vec{r})]$ is a universal functional of the charge density $n(\vec{r})$, which is composed of the kinetic energy functional $T[n(\vec{r})]$ and the interaction energy functional $U[n(\vec{r})]$.
335 The challenging problem of determining the exact ground-state is now formally reduced to the determination of the $3$-dimensional function $n(\vec{r})$, which minimizes the energy functional.
However, the complexity associated with the many-electron problem is now relocated to the task of finding the well-defined but, in contrast to the potential energy, not explicitly known functional $F[n(\vec{r})]$.
It is worth noting that this minimal principle may be regarded as an exactification of the TF theory, which is rederived by the approximations
340 T=\int n(\vec{r})\frac{3}{10}k_{\text{F}}^2(n(\vec{r}))d\vec{r}
344 U=\frac{1}{2}\int\frac{n(\vec{r})n(\vec{r}')}{|\vec{r}-\vec{r}'|}d\vec{r}d\vec{r}'
348 \subsection{Kohn-Sham system}
350 Inspired by the Hartree equations, i.e. a set of self-consistent single-particle equations for the approximate solution of the many-electron problem \cite{hartree28}, which describe atomic ground states much better than the TF theory, Kohn and Sham presented a Hartree-like formulation of the Hohenberg and Kohn minimal principle \eqref{eq:basics:hkm} \cite{kohn65}.
However, due to a more general approach, the new formulation is formally exact by introducing the energy functional $E_{\text{xc}}[n(\vec{r})]$, which accounts for the exchange and correlation energy of the electron interaction $U$ as well as possible corrections to the kinetic energy $T$ due to the electron interaction.
352 The respective Kohn-Sham equations for the effective single-particle wave functions $\Phi_i(\vec{r})$ take the form
355 -\frac{\hbar^2}{2m}\nabla^2 + V_{\text{eff}}(\vec{r})
356 \right] \Phi_i(\vec{r})=\epsilon_i\Phi_i(\vec{r})
357 \label{eq:basics:kse1}
361 V_{\text{eff}}(\vec{r})=V(\vec{r})+\int\frac{e^2n(\vec{r}')}{|\vec{r}-\vec{r}'|}d\vec{r}'
+ V_{\text{xc}}(\vec{r})
364 \label{eq:basics:kse2}
367 n(\vec{r})=\sum_{i=1}^N |\Phi_i(\vec{r})|^2
369 \label{eq:basics:kse3}
where the local exchange-correlation potential $V_{\text{xc}}(\vec{r})$ is the functional derivative of the exchange-correlation functional $E_{\text{xc}}[n(\vec{r})]$ with respect to the charge density $n(\vec{r})$, evaluated at the ground-state density $n_0(\vec{r})$.
372 The first term in equation \eqref{eq:basics:kse1} corresponds to the kinetic energy of non-interacting electrons and the second term of equation \eqref{eq:basics:kse2} is just the Hartree contribution $V_{\text{H}}(\vec{r})$ to the interaction energy.
374 %V_{\text{xc}}(\vec{r})=\frac{\partial}{\partial n(\vec{r})}
375 % E_{\text{xc}}[n(\vec{r})] |_{n(\vec{r})=n_0(\vec{r})}
378 The system of interacting electrons is mapped to an auxiliary system, the Kohn-Sham (KS) system, of non-interacting electrons in an effective potential.
The exact effective potential $V_{\text{eff}}(\vec{r})$ may be regarded as a fictitious external potential that yields the same ground-state density for non-interacting electrons as obtained for the interacting electrons in the external potential $V(\vec{r})$.
The one-electron KS orbitals $\Phi_i(\vec{r})$ as well as the respective KS energies $\epsilon_i$ are not directly attached to any physical observable, except for the ground-state density, which is determined by equation \eqref{eq:basics:kse3}, and the ionization energy, which is given by the energy of the highest occupied state relative to the vacuum level.
The KS equations may be considered the formal exactification of the Hartree theory, to which they reduce if the exchange-correlation potential and functional are neglected.
Compared to the Hartree-Fock (HF) method, KS theory additionally includes the difference of the kinetic energy of interacting and non-interacting electrons as well as the remaining contributions to the correlation energy that are not part of the HF correlation.
384 The self-consistent KS equations \eqref{eq:basics:kse1}, \eqref{eq:basics:kse2} and \eqref{eq:basics:kse3} are non-linear partial differential equations, which may be solved numerically by an iterative process.
385 Starting from a first approximation for $n(\vec{r})$ the effective potential $V_{\text{eff}}(\vec{r})$ can be constructed followed by determining the one-electron orbitals $\Phi_i(\vec{r})$, which solve the single-particle Schr\"odinger equation for the respective potential.
386 The $\Phi_i(\vec{r})$ are used to obtain a new expression for $n(\vec{r})$.
387 These steps are repeated until the initial and new density are equal or reasonably converged.
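The structure of this self-consistency cycle is sketched below. The three functions are trivial stand-ins for the construction of $V_{\text{eff}}$, the solution of equation \eqref{eq:basics:kse1} and the density build-up of equation \eqref{eq:basics:kse3}; the linear mixing of old and new density is a common stabilization measure and an assumption of this sketch, not a statement about the {\textsc vasp} implementation.

\begin{verbatim}
# Schematic Kohn-Sham self-consistency loop (illustrative sketch with
# trivial stand-in functions, not an actual DFT implementation).
import numpy as np

def effective_potential(n):            # V_eff = V + V_H[n] + V_xc[n]
    return -1.0 + 0.1*n                # stand-in

def solve_kohn_sham(V_eff):            # one-electron orbitals for V_eff
    return [np.exp(-V_eff)]            # stand-in

def density(orbitals):                 # n(r) = sum_i |Phi_i(r)|^2
    return sum(abs(phi)**2 for phi in orbitals)

n = np.ones(100)                       # initial guess for the density
for iteration in range(200):
    V_eff = effective_potential(n)
    orbitals = solve_kohn_sham(V_eff)
    n_new = density(orbitals)
    if np.max(np.abs(n_new - n)) < 1.0e-6:    # converged
        break
    n = 0.5*n + 0.5*n_new              # simple linear mixing for stability
\end{verbatim}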
Again, it is worth noting that the KS equations are formally exact.
390 Assuming exact functionals $E_{\text{xc}}[n(\vec{r})]$ and potentials $V_{\text{xc}}(\vec{r})$ all many-body effects are included.
391 Clearly, this directs attention to the functional, which now contains the costs involved with the many-electron problem.
393 \subsection{Approximations for exchange and correlation}
394 \label{subsection:ldagga}
As discussed above, the HK and KS formulations are exact, and so far no approximations except the adiabatic approximation have entered the theory.
397 However, to make concrete use of the theory, effective approximations for the exchange and correlation energy functional $E_{\text{xc}}[n(\vec{r})]$ are required.
399 Most simple and at the same time remarkably useful is the approximation of $E_{\text{xc}}[n(\vec{r})]$ by a function of the local density \cite{kohn65}
401 E^{\text{LDA}}_{\text{xc}}[n(\vec{r})]=\int\epsilon_{\text{xc}}(n(\vec{r}))n(\vec{r}) d\vec{r}
403 \label{eq:basics:xca}
405 which is, thus, called local density approximation (LDA).
406 Here, the exchange-correlation energy per particle of the uniform electron gas of constant density $n$ is used for $\epsilon_{\text{xc}}(n(\vec{r}))$.
Although, even in such a simple case, no exact form of the correlation part of $\epsilon_{\text{xc}}(n)$ is known, highly accurate numerical estimates using Monte Carlo methods \cite{ceperley80} and corresponding parametrizations exist \cite{perdew81}.
408 Obviously exact for the homogeneous electron gas, the LDA was {\em a priori} expected to be useful only for densities varying slowly on scales of the local Fermi or TF wavelength.
409 Nevertheless, LDA turned out to be extremely successful in describing some properties of highly inhomogeneous systems accurately within a few percent.
410 Although LDA is known to overestimate the cohesive energy in solids by \unit[10-20]{\%}, the lattice parameters are typically determined with an astonishing accuracy of about \unit[1]{\%}.
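For illustration, the exchange part of $\epsilon_{\text{xc}}(n)$ is known analytically for the uniform electron gas and, in the units used in this chapter, reads
\begin{equation}
\epsilon_{\text{x}}(n)=-\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3}e^2\,n^{1/3}\text{ ,}
\end{equation}
while the correlation part is taken from the parametrized Monte Carlo data mentioned above.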
412 More accurate approximations of the exchange-correlation functional are realized by the introduction of gradient corrections with respect to the density \cite{kohn65}.
413 Respective considerations are based on the concept of an exchange-correlation hole density describing the depletion of the electron density in the vicinity of an electron.
414 The averaged hole density can be used to give a formally exact expression for $E_{\text{xc}}[n(\vec{r})]$ and an equivalent expression \cite{kohn96,kohn98}, which makes use of the electron density distribution $n(\tilde{\vec{r}})$ at positions $\tilde{\vec{r}}$ near $\vec{r}$, yielding the form
416 E_{\text{xc}}[n(\vec{r})]=\int\epsilon_{\text{xc}}(\vec{r};[n(\tilde{\vec{r}})])n(\vec{r}) d\vec{r}
418 for the exchange-correlation energy, where $\epsilon_{\text{xc}}(\vec{r};[n(\tilde{\vec{r}})])$ becomes a nearsighted functional of $n(\tilde{\vec{r}})$.
419 Expressing $n(\tilde{\vec{r}})$ in a Taylor series, $\epsilon_{\text{xc}}$ can be thought of as a function of coefficients, which correspond to the respective terms of the expansion.
Neglecting all terms of order $\mathcal{O}(\nabla n(\vec{r}))$ results in a functional equal to the LDA, which requires $\epsilon_{\text{xc}}$ as a function of the single variable $n$.
Including the next term of the Taylor expansion introduces the gradient correction to the functional, which requires $\epsilon_{\text{xc}}$ as a function of the variables $n$ and $|\nabla n|$.
422 This is called the generalized gradient approximation (GGA), which expresses the exchange-correlation energy density as a function of the local density and the local gradient of the density
424 E^{\text{GGA}}_{\text{xc}}[n(\vec{r})]=\int\epsilon_{\text{xc}}(n(\vec{r}),|\nabla n(\vec{r})|)n(\vec{r}) d\vec{r}
427 These functionals constitute the simplest extensions of LDA for inhomogeneous systems.
428 At modest computational costs gradient-corrected functionals very often yield much better results than the LDA with respect to cohesive and binding energies.
430 \subsection{Plane-wave basis set}
432 Finally, a set of basis functions is required to represent the one-electron KS wave functions.
433 With respect to the numerical treatment it is favorable to approximate the wave functions by linear combinations of a finite number of such basis functions.
Convergence of the basis set, i.e. convergence of the wave functions with respect to the number of basis functions, is most crucial for the accuracy of the numerical calculations.
435 Two classes of basis sets, the plane-wave and local basis sets, exist.
437 Local basis set functions usually are atomic orbitals, i.e. mathematical functions that describe the wave-like behavior of electrons, which are localized, i.e. centered on atoms or bonds.
438 Molecular orbitals can be represented by linear combinations of atomic orbitals (LCAO).
439 By construction, only a small number of basis functions is required to represent all of the electrons of each atom within reasonable accuracy.
440 Thus, local basis sets enable the implementation of methods that scale linearly with the number of atoms.
441 However, these methods rely on the fact that the wave functions are localized and exhibit an exponential decay resulting in a sparse Hamiltonian.
443 Another approach is to represent the KS wave functions by plane waves.
In fact, the employed {\textsc vasp} software solves the KS equations within a plane-wave (PW) basis set.
The idea is based on the Bloch theorem \cite{bloch29}, which states that in a periodic crystal each electronic wave function $\Phi_i(\vec{r})$ can be written as the product of a wave-like envelope function $\exp(i\vec{k}\vec{r})$ and a function that has the same periodicity as the lattice.
446 The latter one can be expressed by a Fourier series, i.e. a discrete set of plane waves whose wave vectors just correspond to reciprocal lattice vectors $\vec{G}$ of the crystal.
447 Thus, the one-electron wave function $\Phi_i(\vec{r})$ associated with the wave vector $\vec{k}$ can be expanded in terms of a discrete PW basis set
449 \Phi_i(\vec{r})=\sum_{\vec{G}
450 %, |\vec{G}+\vec{k}|<G_{\text{cut}}}
451 }c_{i,\vec{k}+\vec{G}} \exp\left(i(\vec{k}+\vec{G})\vec{r}\right)
453 %E_{\text{cut}}=\frac{\hbar^2 G^2_{\text{cut}}}{2m}
456 The basis set, which in principle should be infinite, can be truncated to include only plane waves that have kinetic energies $\hbar^2|\vec{k}+\vec{G}|^2/2m$ less than a particular cut-off energy $E_{\text{cut}}$.
457 Although coefficients $c_{i,\vec{k}+\vec{G}}$ corresponding to small kinetic energies are typically more important, convergence with respect to the cut-off energy is crucial for the accuracy of the calculations.
458 Convergence with respect to the basis set, however, is easily achieved by increasing $E_{\text{cut}}$ until the respective differences in total energy approximate zero.
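To give an impression of how the basis size scales, the following sketch counts the plane waves below a given cut-off energy for a cubic cell at the $\Gamma$ point; the lattice constant and cut-off value are example numbers only.

\begin{verbatim}
# Number of plane waves with hbar^2|G|^2/2m < E_cut for a cubic cell at the
# Gamma point (illustrative sketch; a and E_cut are example values).
import numpy as np

a        = 5.43      # lattice constant in A
E_cut    = 300.0     # cut-off energy in eV
hbar2_2m = 3.81      # hbar^2/(2m) in eV*A^2

b = 2.0*np.pi/a                              # reciprocal lattice spacing
n_max = int(np.sqrt(E_cut/hbar2_2m)/b) + 1
count = 0
for i in range(-n_max, n_max + 1):
    for j in range(-n_max, n_max + 1):
        for k in range(-n_max, n_max + 1):
            G2 = b**2*(i*i + j*j + k*k)      # |G|^2
            if hbar2_2m*G2 < E_cut:
                count += 1
# count grows roughly as E_cut^(3/2), i.e. with the volume of the cut-off
# sphere in reciprocal space, and so does the dimension of the Hamiltonian.
\end{verbatim}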
Besides their simplicity, plane waves have several advantages.
461 The basis set is orthonormal by construction and, as mentioned above, it is simple to check for convergence.
462 The biggest advantage, however, is the ability to perform exact calculations by a discrete sum over a numerical grid.
463 This is due to the related construction of the grid and the PW basis.
Of course, exactness is restricted by the fact that the PW basis set is finite.
465 The simple form of the PW representation of the KS equations
467 \sum_{\vec{G}'} \left[
468 \frac{\hbar^2}{2m}|\vec{k}+\vec{G}|^2 \delta_{\vec{GG}'}
469 + \tilde{V}(\vec{G}-\vec{G}')
470 + \tilde{V}_{\text{H}}(\vec{G}-\vec{G}')
471 + \tilde{V}_{\text{xc}}(\vec{G}-\vec{G}')
472 \right] c_{i,\vec{k}+\vec{G}} = \epsilon_i c_{i,\vec{k}+\vec{G}}
473 \label{eq:basics:pwks}
475 reveals further advantages.
476 The various potentials are described in terms of their Fourier transforms.
477 Equation \eqref{eq:basics:pwks} is solved by diagonalization of the Hamiltonian matrix $H_{\vec{k}+\vec{G},\vec{k}+\vec{G}'}$ given by the terms in the brackets.
478 The gradient operator is diagonal in reciprocal space whereas the exchange-correlation potential has a diagonal representation in real space.
This suggests carrying out different operations in real and reciprocal space, which requires frequent Fourier transformations.
480 These, however, can be efficiently achieved by the fast Fourier transformation (FFT) algorithm.
482 There are likewise disadvantages associated with the PW representation.
483 By construction, PW calculations require a periodic system.
484 This does not pose a severe problem since non-periodic systems can still be described by a suitable choice of the simulation cell.
485 Describing a defect, for instance, requires the inclusion of enough bulk material in the simulation to prevent or reduce the interaction with its periodic, artificial images.
As a consequence, the number of atoms involved in the calculations is increased.
487 To describe surfaces, sufficiently thick vacuum layers need to be included to avoid interaction of adjacent crystal slabs.
488 Clearly, to appropriately approximate the wave functions and the respective charge density of a system composed of vacuum in addition to the solid in a PW basis, an increase of the cut-off energy is required.
489 According to equation \eqref{eq:basics:pwks} the size of the Hamiltonian depends on the cut-off energy and, therefore, the computational effort is likewise increased.
490 For the same reason, the description of tightly bound core electrons and the respective, highly localized charge density is hindered.
491 However, a much more profound problem exists whenever wave functions for the core as well as the valence electrons need to be calculated within a PW basis set.
492 Wave functions of the valence electrons exhibit rapid oscillations in the region occupied by the core electrons near the nuclei.
493 The oscillations maintain the orthogonality between the wave functions of the core and valence electrons, which is compulsory due to the exclusion principle.
Accurately approximating these oscillations demands an extremely large PW basis set, which is too large for practical use.
Fortunately, this problem of having to model the core in addition to the valence electrons is circumvented by the pseudopotential approach discussed in the next section.
497 \subsection{Pseudopotentials}
499 As discussed in the last part of the previous section, an extremely large basis set of plane waves would be required to perform an all-electron calculation and a vast amount of computational time would be required to calculate the electronic wave functions.
It is worth stressing once more that this is due to the orthogonalization wiggles of the wave functions of the valence electrons near the nuclei.
501 Thus, existing core states practically prevent the use of a PW basis set.
502 However, the core electrons, which are tightly bound to the nuclei, do not contribute significantly to chemical bonding or other physical properties of the solid.
This fact is exploited in the pseudopotential approach \cite{} by removing the core electrons and replacing the atom and the associated strong ionic potential by a pseudoatom and a weaker pseudopotential that acts on a set of pseudo wave functions rather than on the true valence wave functions.
504 Certain conditions need to be fulfilled by the constructed pseudopotentials and the resulting pseudo wave functions.
Outside the core region, the pseudo and real wave functions as well as the generated charge densities need to be identical.
507 A pseudopotential is called norm-conserving if the pseudo and real charge contained within the core region match.
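In terms of the radial pseudo and all-electron wave functions $\Phi^{\text{PS}}_l$ and $\Phi^{\text{AE}}_l$ for angular momentum $l$ and a core radius $r_{\text{c}}$ (notation chosen here for illustration), this norm-conservation condition can be written as
\begin{equation}
\int_0^{r_{\text{c}}}|\Phi^{\text{PS}}_l(r)|^2r^2dr=
\int_0^{r_{\text{c}}}|\Phi^{\text{AE}}_l(r)|^2r^2dr\text{ .}
\end{equation}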
510 \subsection{Brillouin zone sampling}
512 Following Bloch's theorem only a finite number of electronic wave functions need to be calculated for a periodic system.
513 However, to calculate quantities like the total energy or charge density, these have to be evaluated in a sum over an infinite number of $\vec{k}$ points.
Since the values of the wave function within a small interval around $\vec{k}$ are almost identical, it is possible to approximate the infinite sum by a sum over an affordable number of $\vec{k}$ points, each representing the respective region of the wave function in $\vec{k}$ space.
515 Methods have been derived for obtaining very accurate approximations by a summation over special sets of $\vec{k}$ points with distinct, associated weights \cite{baldereschi73,chadi73,monkhorst76}.
516 If present, symmetries in reciprocal space may further reduce the number of calculations.
For supercells, i.e. repeating unit cells that contain several primitive cells, restricting the sampling of the Brillouin zone (BZ) to the $\Gamma$ point can yield quite accurate results.
518 In fact, with respect to BZ sampling, calculating wave functions of a supercell containing $n$ primitive cells for only one $\vec{k}$ point is equivalent to the scenario of a single primitive cell and the summation over $n$ points in $\vec{k}$ space.
In general, finer $\vec{k}$ point meshes better account for the periodicity of a system, which in some cases, however, might be fictitious anyway.
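As an illustration of such a special point set, the following sketch generates the fractional coordinates of a Monkhorst-Pack $q\times q\times q$ mesh \cite{monkhorst76}; symmetry reduction and the associated weights are omitted.

\begin{verbatim}
# Fractional coordinates of a Monkhorst-Pack q x q x q k-point mesh
# (illustrative sketch; symmetry reduction and weights are omitted).
import numpy as np

def monkhorst_pack(q):
    u = np.array([(2.0*r - q - 1.0)/(2.0*q) for r in range(1, q + 1)])
    return np.array([[u1, u2, u3] for u1 in u for u2 in u for u3 in u])

kpoints = monkhorst_pack(2)   # 2x2x2 mesh: 8 points at (+-1/4, +-1/4, +-1/4)
\end{verbatim}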
521 \subsection{Hellmann-Feynman forces}
523 \section{Modeling of defects}
524 \label{section:basics:defects}
526 \section{Migration paths and diffusion barriers}
527 \label{section:basics:migration}