# Forces for the Embedded Atom Model

It is not easy to find the forces written down for the embedded atom model (EAM).

The potential is always written as

$U_i = F_i(\rho) + \frac{1}{2}\sum_{j \neq i} \phi(r_{ij})$

with

$\rho = \sum_{j\neq i} f_j(r_{ij})$

Probably the reason the forces are rarely written down is that different fitting functions for $F$, $\rho$, and $\phi$ are used depending on the researcher.  For this reason, LAMMPS uses splines in its implementation.

As usual, the force is minus the gradient of the potential energy,

$\boldsymbol F_i = -\nabla_i U$

Collecting every term of $U = \sum_i U_i$ that depends on $r_{ij}$ (the embedding energies of both atoms plus the pair interaction) gives the contribution of neighbor $j$ to the force on atom $i$:

$\boldsymbol F_{ij} = -\left[\frac{\partial F(\rho_i)}{\partial \rho}\frac{\partial f_j}{\partial r_{ij}} + \frac{\partial F(\rho_j)}{\partial \rho}\frac{\partial f_i}{\partial r_{ij}} + \frac{\partial \phi}{\partial r_{ij}}\right]\nabla_i r_{ij}$

Note that the factor of $\frac{1}{2}$ on the pair term drops out because each pair appears twice in the total energy.

Cai and Ye suggest the following forms (with derivatives added by me). Note that we have replaced $r_{ij}$ with the more compact notation $r$.

$f(r) = f_e \mathrm{exp}\left(-\chi (r- r_e) \right)$

$\frac{\partial f}{\partial r} = -f_e \chi \mathrm{exp}\left(-\chi (r - r_e) \right)$

$F(\rho) = -F_0\left[1- n\,\mathrm{ln}\left(\frac{\rho}{\rho_e} \right) \right]\left(\frac{\rho}{\rho_e} \right)^n + F_1 \left(\frac{\rho}{\rho_e} \right)$

$\frac{\partial F}{\partial \rho} = \frac{F_0 n^2}{\rho_e} \left(\frac{\rho}{\rho_e} \right)^{n-1}\mathrm{ln}\left(\frac{\rho}{\rho_e} \right) + \frac{F_1}{\rho_e}$

$\phi(r) = -\alpha\left[1 + \beta\left(\frac{r}{r_a}-1 \right) \right]\mathrm{exp}\left(-\beta\left(\frac{r}{r_a}-1 \right) \right)$

$\frac{\partial \phi}{\partial r} = \frac{\alpha \beta^2}{r_a}\left(\frac{r}{r_a} -1 \right)\mathrm{exp}\left(-\beta\left(\frac{r}{r_a} -1 \right) \right)$
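A quick way to guard against algebra slips in these derivatives is to compare them with a numerical derivative. The sketch below does that for the three Cai-Ye forms; all parameter values are arbitrary illustrative choices, not fitted constants for any real metal.

```python
# Sanity-check the analytic Cai-Ye derivatives against a central finite
# difference.  All parameter values are illustrative placeholders.
import math

f_e, chi, r_e = 1.0, 2.0, 2.5          # electron-density parameters (assumed)
F0, F1, n, rho_e = 1.0, 0.5, 0.6, 1.2  # embedding-function parameters (assumed)
alpha, beta, r_a = 0.3, 4.0, 2.8       # pair-potential parameters (assumed)

def f(r):       # electron density contribution
    return f_e * math.exp(-chi * (r - r_e))

def df_dr(r):
    return -f_e * chi * math.exp(-chi * (r - r_e))

def F(rho):     # embedding energy
    x = rho / rho_e
    return -F0 * (1.0 - n * math.log(x)) * x**n + F1 * x

def dF_drho(rho):
    x = rho / rho_e
    return F0 * n**2 / rho_e * x**(n - 1) * math.log(x) + F1 / rho_e

def phi(r):     # pair potential
    u = r / r_a - 1.0
    return -alpha * (1.0 + beta * u) * math.exp(-beta * u)

def dphi_dr(r):
    u = r / r_a - 1.0
    return alpha * beta**2 / r_a * u * math.exp(-beta * u)

def fd(g, x, h=1e-6):  # central difference approximation of dg/dx
    return (g(x + h) - g(x - h)) / (2.0 * h)

# each analytic derivative should agree with the finite difference
for g, dg, x in [(f, df_dr, 2.2), (F, dF_drho, 0.9), (phi, dphi_dr, 2.6)]:
    assert abs(dg(x) - fd(g, x)) < 1e-5
```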

# A Brief Introduction to Phase Field Modeling of Solidification

This is copied and pasted from my Master’s thesis and is a very short introduction to phase field modeling of solidification. It consists of my notes on Boettinger’s 2002 review article, *Phase-Field Simulation of Solidification*.
A free energy functional governing the solid-liquid phase transition is defined as follows

$F = \int_V \left[ f(\phi, c, T) + \frac{\epsilon_c^2}{2}\vert \nabla c\vert^2 + \frac{\epsilon_\phi^2}{2}\vert \nabla \phi\vert^2 \right] \mathrm{dV}$

where $f$, $c$ and $\phi$ are the free energy density, concentration and phase field, respectively, and $\epsilon_c$ and $\epsilon_\phi$ are the associated gradient energy coefficients. The free energy density $f$ contains at least one minimum for each phase (for a two-phase system it is typically a double well). The time-dependent evolution equations can be derived from variational techniques in combination with mass continuity \cite{Boettinger2002} and are given by
$\frac{\partial \phi}{\partial t} = -M_{\phi}\left(\frac{\partial f}{\partial \phi} - \epsilon_\phi^2 \nabla^2 \phi \right)$

$\frac{\partial c}{\partial t} = \nabla \cdot \left(M_{c} c(1-c) \nabla \left(\frac{\partial f}{\partial c} - \epsilon_c^2 \nabla^2 c \right)\right) \label{eq:phase_conserved}$

where $M_{\phi}$ and $M_{c}$ are positive mobilities related to the material kinetics and the diffusion process. The difference between the two equations is due to the fact that $c$ (in the second equation) is conserved while $\phi$ (in the first) is not. If the solid and liquid free energies are available, the free energy density can be constructed as
$f(\phi, c, T) = \left[(1-c)W_A + cW_B\right] g(\phi) + (1-p(\phi))f^S(c,T) + p(\phi) f^L(c,T)$

where $g(\phi)$ is a double well, $W_A$ is the energy barrier between the liquid and solid phases of material A, $W_B$ is the same barrier for material B, and $p(\phi)$ is the interpolation function, typically $\phi$ or $C_1 \int g(\phi) \mathrm{d}\phi$ where $C_1$ is a constant. $f^S(c,T)$ is the solid phase free energy and $f^L(c,T)$ is the liquid phase free energy. For pure elements, the following simplification is frequently made: take the solid state as the standard state, $f_S^A(T) = 0$, and expand the difference between the liquid and solid free energies around the melting point
$f_L^A(T) - f_S^A(T) = \frac{L_A(T_M^A - T)}{T_M^A}$

where $T_M^A$ is the melting temperature and $L_A$ the latent heat of material A. This gives the free energy

$f_A(\phi,T) = W_A g(\phi) + L_A \frac{T_M^A - T}{T_M^A} p(\phi)$
The enthalpy, which is used to derive the heat equation, is written as

$h = h_0 + C_p T + L \phi$

This gives our heat and phase field equations for a pure element as follows

$C_p \frac{\partial T}{\partial t} + L \frac{\partial \phi}{\partial t} = \nabla \cdot (k\nabla T)$

$\frac{\partial \phi}{\partial t} = M_\phi \epsilon_\phi^2\left(\nabla^2 \phi - \frac{W_A}{\epsilon_\phi^2} g'(\phi)\right) - \frac{M_\phi L}{T_M}(T_M - T)p'(\phi)$

The reader is directed to the review paper by Boettinger et al. for further detail and examples.
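As a concrete illustration of the pure-element equations, here is a minimal 1-D explicit finite-difference sketch of the phase-field equation at a fixed undercooled temperature (the heat equation is dropped, i.e. the system is held isothermal). The polynomial choices $g(\phi)=\phi^2(1-\phi)^2$ and $p(\phi)=\phi^2(3-2\phi)$ are common ones, and all parameter values are illustrative assumptions, not fitted to any material.

```python
# Minimal 1-D explicit-Euler sketch of the pure-material phase-field
# equation with an undercooled melt (T < TM), so the solid should grow.
# phi = 0 is solid, phi = 1 is liquid; all parameters are illustrative.
N, dx, dt = 100, 0.5, 0.01
M, eps2, W = 1.0, 1.0, 1.0   # mobility, eps_phi^2, barrier height (assumed)
L, TM, T = 1.0, 1.0, 0.8     # latent heat, melting temp, imposed temp

# solid on the left half, liquid on the right half
phi = [0.0] * (N // 2) + [1.0] * (N - N // 2)

def step(phi):
    new = phi[:]  # endpoints stay fixed (Dirichlet boundaries)
    for i in range(1, N - 1):
        lap = (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx**2
        dg = 2 * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])  # g'(phi)
        dp = 6 * phi[i] * (1 - phi[i])                     # p'(phi)
        new[i] = phi[i] + dt * (M * eps2 * (lap - (W / eps2) * dg)
                                - M * L / TM * (TM - T) * dp)
    return new

front0 = sum(1 for x in phi if x < 0.5)  # initial width of the solid region
for _ in range(1500):
    phi = step(phi)
front = sum(1 for x in phi if x < 0.5)
print(front0, front)  # the solid region should have grown
```

The explicit timestep obeys the usual diffusive stability limit ($M\epsilon_\phi^2\,\Delta t/\Delta x^2 \ll 1/2$ here), which is why the step size is small.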

I can never remember the Laplacian, gradient and divergence formulas in spherical and cylindrical coordinates. Usually I use Griffiths’ Introduction to Electrodynamics; however, I don’t always carry the book with me. As such, I am leaving them here as a useful place for me to reference. While there are many places to find them, they are usually not front and center.

I hope you find them useful too.

##### Cylindrical Polar Coordinates

$\nabla U = \frac{\partial U}{\partial r}\boldsymbol{e}_r + \frac{1}{r}\frac{\partial U}{\partial \phi} \boldsymbol{e}_\phi + \frac{\partial U}{\partial z} \boldsymbol{e}_z$

$\nabla \cdot \boldsymbol{F} = \frac{1}{r}\frac{\partial(rF_r)}{\partial r} + \frac{1}{r}\frac{\partial F_\phi}{\partial \phi} + \frac{\partial F_z}{\partial z}$

$\nabla^2 U = \frac{1}{r}\frac{\partial}{\partial r}\left(r \frac{\partial U}{\partial r}\right) +\frac{1}{r^2} \frac{\partial^2 U}{\partial \phi^2} + \frac{\partial^2U}{\partial z^2}$

##### Spherical Polar Coordinates

$\nabla U = \frac{\partial U}{\partial r}\boldsymbol{e}_r + \frac{1}{r}\frac{\partial U}{\partial \theta} \boldsymbol{e}_\theta + \frac{1}{r \mathrm{sin}\theta}\frac{\partial U}{\partial \phi} \boldsymbol{e}_\phi$

$\nabla \cdot \boldsymbol{F} = \frac{1}{r^2}\frac{\partial (r^2 F_r)}{\partial r} + \frac{1}{r \mathrm{sin}\theta}\frac{\partial (\mathrm{sin}\theta F_\theta)}{\partial \theta} + \frac{1}{r \mathrm{sin}\theta}\frac{\partial F_\phi}{\partial \phi}$

$\nabla^2 U = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial U}{\partial r}\right) + \frac{1}{r^2\mathrm{sin}\theta} \frac{\partial}{\partial \theta}\left(\mathrm{sin}\theta \frac{\partial U}{\partial \theta}\right) +\frac{1}{r^2\mathrm{sin}^2\theta} \frac{\partial^2 U}{\partial \phi^2}$
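Since these formulas are easy to mis-transcribe, a finite-difference spot-check on radially symmetric functions is a cheap sanity test: $\mathrm{ln}(r)$ is harmonic in the plane and $1/r$ is harmonic in 3-D, so the cylindrical and spherical Laplacians should return approximately zero for them, while $r^2$ should give 4 and 6 respectively.

```python
# Numerically apply the radial parts of the Laplacian formulas above to
# functions with known Laplacians (angular terms vanish for U = U(r)).
import math

h = 1e-4

def d1(g, x):   # first derivative via central difference
    return (g(x + h) - g(x - h)) / (2 * h)

def lap_cyl_radial(U, r):
    # (1/r) d/dr ( r dU/dr )  for U = U(r)
    return d1(lambda s: s * d1(U, s), r) / r

def lap_sph_radial(U, r):
    # (1/r^2) d/dr ( r^2 dU/dr )  for U = U(r)
    return d1(lambda s: s**2 * d1(U, s), r) / r**2

# ln(r) is harmonic in 2-D, 1/r is harmonic in 3-D (away from the origin)
assert abs(lap_cyl_radial(math.log, 1.7)) < 1e-4
assert abs(lap_sph_radial(lambda r: 1.0 / r, 1.7)) < 1e-4
# non-harmonic check: the Laplacian of r^2 is 4 in 2-D and 6 in 3-D
assert abs(lap_cyl_radial(lambda r: r * r, 1.7) - 4.0) < 1e-3
assert abs(lap_sph_radial(lambda r: r * r, 1.7) - 6.0) < 1e-3
```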

# Attention is Everything

Being human has one distinctive drawback.  Namely, we are finite. Finite means we are limited.  We are limited in the number of things we can think about, the number of things we can experience, the number of things we can do and the number of things we can invest our energy into.  As such, you should realize something: where you direct your attention isn’t just important; it is everything.

Therefore you should closely guard what you pay attention to.

Don’t fill your head with garbage.  Fill it with the things of your choosing.  Things that are important to you.

This wasn’t supposed to be about life advice, though, so back to probabilistic reasoning and the rational mind.

Because your attention and energy are limited, the brain has to choose what it should pay attention to.  This is the primary reason people dislike dealing with uncertainty. Dealing with uncertainty distracts their attention away from what is important to them, and uncertainty requires more brain energy than certainty.  So, as much as possible, our mind will attempt, at least in its conscious portion, to make things certain.

A quick Google search on “stupid people politics” led to the following question on quora.com: “Why do Americans elect stupid politicians?”  The number one answer is “Because people are stupid?”  I disagree; I don’t think people are stupid.  They might be underinformed, but that doesn’t make them stupid.  For many people, giving attention to politics decreases their quality of life, so they rationally choose to pay little attention to it.  Additionally, many issues in politics are complicated, and predictions in complicated systems are very difficult. Furthermore, many decisions in politics are zero sum.  That is, they hurt as much as they help, or at least they help some people while hurting others.  For instance, free trade hurts unskilled laborers, especially in manufacturing, but it helps people whose jobs are location dependent, such as plumbers, electricians and real estate agents (through lower prices).

There are many issues other than politics where people rationally choose to be underinformed. How many of us think deeply about the way engines are designed?  And why should we?  Knowing the details of engines is unlikely to make much of a difference in your life unless you happen to be an engineer or a mechanic.   And although to me it seems ridiculous that people wouldn’t want to know how engines work, it is also true that there are many things that I don’t know or care to investigate, such as pop culture (e.g. movies, entertainment and fashion) and professional sports.

The point here is that where we give our attention directs not only what we know and can speak intelligently about, but also what we are underinformed about; what we are good at, and what we “need” more practice in.

Because attention and energy are limited, we should expect that the mind 1.) has good priors about things which are important for survival and reproduction and 2.) uses very efficient “algorithms” and lots of shortcuts when making decisions.  That is, we expect the mind to have intuition about “life” decisions, and we expect it to use processes that settle on a suboptimal solution in exchange for faster learning.

# Experience Graphs and Road Maps

Recently, researchers at Carnegie Mellon, UPenn and Willow Garage developed a new optimization tool, namely Experience Graphs.  Their paper is available here.  Let’s start with an example.

Suppose you are visiting a new city, for example London.  London is a good choice for me because I have never been there and it is also a popular tourist destination.  We are in London staying at a hotel and decide we want to go visit Big Ben.  So we pull up Google Maps and ask for the best way to get there. We go, have a good time, and then drive back.

On the next day we want to go to Buckingham Palace or Trafalgar Square.  Instead of plotting a new route from the hotel, we might instead remember how to get to Big Ben and then look up how to get to Buckingham Palace from Big Ben.  Eventually, we begin to learn a few of the main streets and figure out how to reach new places using the paths we already know.

I hope we would all agree that this is an efficient way to quickly learn your way around a city.  Since some of you like using your GPS, let’s try a different example.

I am beginning to learn how to woodwork.  I know how to use screws and bolts but not nails. (Technically I know how to use nails; I just don’t know how to avoid hammering my thumb when driving them in, so I avoid them like the plague.) There are certainly many situations where nails are a better tool than screws (screws and bolts are stronger but also more expensive and take longer to put in). Because I am familiar with screws, I use them instead of learning how to use nails.  Why? Because there is a cost associated with learning to become good with nails, so I accept a suboptimal, but sufficient, solution instead of creating the optimal solution.

These are, in a sense, bad examples of why experience is useful. But we all have many tasks which we can do more rapidly because we have done them before.  For instance, I have just started putting eye drops into my dog’s eyes, and it is finally beginning to become easier after three weeks.  That is experience.  What is interesting about the experience graph is that it is a mathematical method of applying experience to robots and artificial intelligence.

An experience graph works as follows.  Assume we have a graph.  We then create a subgraph (that is, a graph consisting of parts of the original graph) based on experiences, i.e. things we already know how to do.  This new graph is called the experience graph.  When problem solving, solutions which use paths from the experience graph are prioritized over solutions which don’t.  This allows the “artificial intelligence” to learn from what it has done before.
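The prioritization idea can be sketched in a few lines of Python. This is a deliberately simplified toy, not the weighted-A* E-graph heuristic from the paper: it just runs Dijkstra's algorithm with every edge outside the experience subgraph penalized by a factor eps ≥ 1, which steers solutions toward previously used edges. All of the graph data below is made up for illustration.

```python
# Toy experience-biased search (a simplification of the E-graph idea):
# Dijkstra on a graph where edges outside the experience subgraph are
# penalized by eps >= 1, steering solutions toward known routes.
import heapq

def biased_shortest_path(edges, experience, start, goal, eps=2.0):
    """edges: {u: [(v, cost), ...]}; experience: set of (u, v) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in edges.get(u, []):
            w = c if (u, v) in experience else eps * c  # penalize new edges
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:  # walk the predecessor chain back to the start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# hotel -> big_ben -> palace is a route we already know; with the bias the
# planner reuses it even though the direct road is shorter.
edges = {
    "hotel":   [("big_ben", 1.0), ("palace", 1.8)],
    "big_ben": [("palace", 1.0)],
}
experience = {("hotel", "big_ben"), ("big_ben", "palace")}
print(biased_shortest_path(edges, experience, "hotel", "palace"))
# -> ['hotel', 'big_ben', 'palace']  (biased cost 2.0 beats 1.8 * 2.0 = 3.6)
```

With eps = 1.0 the bias disappears and the planner takes the direct road, which mirrors the trade-off in the post: reuse of experience wins only while relearning from scratch is relatively expensive.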

# Fake News, Facts and Opinions

I have a different view on facts than most. Before we get into that, let’s look at the definitions. Dictionary.com defines a fact as

a thing that is indisputably the case

While an opinion is

a view or judgement formed about something, not necessarily based on fact or knowledge.

These definitions mean the classification of fact or opinion is subjective.  Let’s look at an example.

Russia influenced our latest election, according to John Schindler, a columnist at the Observer whom I respect:

Crazy to deny: Russian espionage & covert action influenced the 2016 election.

Crazy to believe: Russians literally “hacked” our election.

And yet Ted Cruz, Donald Trump and Ann Coulter all deny the former.

So is the idea that Russian espionage & covert action influenced our election an opinion or a fact?  If I had to classify it, I would say that it is an opinion, but I suspect Mr. Schindler would call the Russian influence a fact. If you agree, let’s go down the rabbit hole a little more.

- Wikileaks is a GRU (Russian intelligence) front.
- The source of the hacks on Hillary’s emails was Russian.
- Donald Trump was elected fairly.
- Donald Trump is a buffoon.
- Hillary Clinton had a pay-for-play scheme with the Clinton Foundation.

Fact, opinion or false fact?  Who knows.  To be frank, I have a hard enough time figuring out what is true anymore.

Because of this, I prefer a different classification: data and conclusions.  Data consists of things we measure (i.e. touch, taste, see, hear and smell) while conclusions are what we believe to be true because of the data.  Each conclusion can then have a degree of confidence based on how strongly the data supports it.

I read John Schindler’s arguments for how and why Russia influences elections and the examples he gave with Brexit, Crimea, Ukraine and the US (the data being that Schindler claimed ____).  I therefore concluded that the Russians attempted to influence the U.S. election through social media and WikiLeaks.  Or perhaps it is better to say that I accepted John’s conclusions. I further suspect that they also succeeded in their attempt.

Ignoring the concept of fact and opinion and instead focusing on data and conclusions has been helpful in keeping the different narratives from the many sources straight during the last confusing year.  I hope the framework helps you deal with the “era of fake news”.

# LAMMPS indent example with comments

I have recently been learning LAMMPS for one of my projects, and while the documentation is decent, learning everything is a bit of a challenge. I might eventually turn this into a proper tutorial on how to run LAMMPS, but for now I have gone through the indent example and commented all the lines of code.  If you are learning LAMMPS as I am, I hope this helps.

```
# 2d indenter simulation

dimension 2
boundary p s p
# periodic in the x and z directions, shrink-wrapped in the y direction

atom_style atomic  # atomic style stores only the default per-atom attributes (position, velocity, id, type)
neighbor 0.3 bin   # neighbor list skin of 0.3, built with binning
neigh_modify delay 5  # rebuild neighbor lists no sooner than every 5 steps

# create geometry

lattice hex 0.9  # hexagonal lattice at reduced density 0.9
region box block 0 20 0 10 -0.25 0.25  # creates a geometric domain
create_box 2 box    # creates a simulation box in region box with 2 atom types
create_atoms 1 box  # fills the box with atoms of type 1 on the lattice

mass 1 1.0  # mass of atom type 1
mass 2 1.0  # mass of atom type 2

# LJ potentials

pair_style lj/cut 2.5  # Lennard-Jones with cutoff of 2.5
pair_coeff * * 1.0 1.0 2.5  # for all pairs: epsilon = 1.0 (energy), sigma = 1.0 (distance), cutoff = 2.5

# define groups

region 1 block INF INF INF 1.25 INF INF  # infinite slab containing all atoms with y < 1.25
group lower region 1  # group called lower with the atoms in region 1 (just defined above)
group mobile subtract all lower  # group mobile holds every atom not in lower; lower is the frozen base of the substrate
set group lower type 2  # sets atoms in group lower to type 2

# initial velocities

compute new mobile temp  # compute with id new for the temperature of the mobile group
velocity mobile create 0.2 482748 temp new  # random velocities at temperature 0.2 (seed 482748)
fix 1 all nve
# integrates at constant number of particles, volume and total energy
fix 2 lower setforce 0.0 0.0 0.0
# zeros the forces on the lower group so it stays frozen
fix 3 all temp/rescale 100 0.1 0.1 0.01 1.0
# every 100 steps, rescales velocities fully (fraction 1.0) toward a target
# temperature of 0.1, but only if the temperature is more than 0.01 from it

# run with indenter

timestep 0.003  # sets the timestep
variable k equal 1000.0/xlat  # indenter force constant, scaled by the x lattice spacing
variable y equal "13.0*ylat - step*dt*0.02*ylat"  # indenter center moves down over time

fix 4 all indent $k sphere 10 v_y 0 5.0
# spherical indenter with force constant k, centered at (10, v_y, 0), radius 5.0
fix 5 all enforce2d
# forces the model to stay 2d: no motion in the z direction

thermo 1000  # print thermodynamic output every 1000 steps
thermo_modify temp new  # report temperature using the compute named new

dump 1 all atom 250 dump.indent  # write atom positions every 250 steps

#dump 2 all image 1000 image.*.jpg type type &
```