Elk FP-LAPW


The Elk code is an all-electron, full-potential linearized augmented-plane-wave (FP-LAPW) code for determining the properties of crystalline solids.




New to Kogence?

  1. Why Kogence?: Watch the Video
  2. How Kogence Works?: Watch the Video
  3. Step-by-step instructions on how to execute your first simulation model on Kogence: How To Use Kogence

Elk FP-LAPW Access on Kogence

On Kogence, Elk FP-LAPW is pre-configured and pre-optimized and is available to be run on a single cloud machine of your choice or on an autoscaling cloud HPC cluster. On Kogence, Elk FP-LAPW runs in a docker container. Elk FP-LAPW is available under Free Individual Plans.

Versions Available on Kogence

The following versions are deployed, provisioned and performance-optimized on Kogence.

  1. Elk 5.2.14

Single Node Invocation Options

To use Elk FP-LAPW on Kogence, first create a new model or copy an existing Elk FP-LAPW model. You can see the list of ready-to-copy-and-execute Elk FP-LAPW models on Kogence here: Category:Elk FP-LAPW. Step-by-step instructions for creating and copying models are here: How To Use Kogence.

Open your model and go to Settings -> Software. Click the + button to add a software. Select Elk FP-LAPW from the dropdown menu to select the Elk FP-LAPW software container. Then specify the binary/solver that you want to invoke, along with its invocation options, in the empty textbox next to the dropdown menu. For example, you can type the following in the empty textbox:

  1. Using CAD GUI: Elk does not come with a GUI. The code requires a single user input file, elk.in, which you can edit using the Kogence built-in browser-based code editor, accessible by double-clicking on elk.in under the Files tab of your Elk model. Elk outputs data in standard formats that can easily be plotted with several common charting and data-analysis utilities such as VESTA, ParaView, Matlab or Octave, all of which are available on Kogence.
  2. Using Solvers in Batch Mode
    1. elk elk.in: Executes the model in batch mode. Results are saved in various *.OUT files. When you execute elk through the Kogence console (i.e. from Settings->Software) on a single node, elk will use OpenMP parallelism automatically and you don't have to do anything. If you like, you can also use MPI parallelism; see the sections below on Shell Terminal Access and Bash Script Executions.
    2. eos eos.in: Fits equations of state to energy-volume data.
    3. spacegroup spacegroup.in: Produces crystal geometries from spacegroup data.
  3. Bash Script Executions: In the Settings->Software page, first select the container Elk FP-LAPW. Leave the textbox empty, i.e. don't supply any solver name or inputs/arguments to the solver. This makes the Elk docker container available in your workflow to be invoked from your bash script. Then press the + button to add additional software. Choose CloudShell from the dropdown menu. In the empty textbox you can type BashShell followed by any command; for example, you can type:
    1. BashShell mpirun -np #cpu elk elk.in : This will invoke elk on single node with MPI parallelism instead of OpenMP.
    2. BashShell YourScript.sh YourScriptArgs: Will run your custom shell script. Make sure you add/upload your bash script under the Files tab of your model before running the model. You can use/adapt the example single node shell script provided below. These scripts are already included in the public example models that you can copy and execute.
  4. Shell Terminal Access: In the Settings->Software page, first select the container Elk FP-LAPW. Leave the textbox empty, i.e. don't supply any solver name or inputs/arguments to the solver. This makes the Elk docker container available in your workflow to be invoked from the shell terminal. Then press the + button to add additional software. Choose either xterm or gnome-terminal. When you run the model, go to the Visualizer tab and you will see your shell terminal. Here you can also execute your own bash scripts. Make sure you add/upload your bash script under the Files tab of your model before running the model. You can use/adapt the example single-node shell script provided below. These scripts are already included in the public example models that you can copy and execute.
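The textbox entries above can be wrapped into a small launcher script. The sketch below (the file name run-elk.sh and the USE_MPI switch are hypothetical conveniences, not Kogence features) chooses between the default OpenMP mode and single-node MPI; it assumes the Elk container was selected so that elk and mpirun are on the PATH:

```shell
#!/bin/bash
# Minimal single-node launcher sketch (hypothetical name: run-elk.sh).
# Assumes the Elk FP-LAPW container is selected so `elk` and `mpirun` exist.

NP=$(nproc)                          # number of CPUs on this node

if [ "${USE_MPI:-no}" = "yes" ]; then
    # MPI parallelism: one rank per CPU, one OpenMP thread per rank
    export OMP_NUM_THREADS=1
    CMD="mpirun -np $NP elk elk.in"
else
    # Default on a single node: OpenMP only, using every CPU
    export OMP_NUM_THREADS=$NP
    CMD="elk elk.in"
fi

echo "Launch command: $CMD"
# Replace the echo above with a bare  $CMD  to actually run the solver.
```

Upload the script under the Files tab and invoke it as BashShell run-elk.sh from the CloudShell textbox.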

Autoscaling Cluster Invocation Options

On Kogence, you can easily send large jobs or a set of parallel jobs to an HPC cluster that automatically scales up and down depending upon the workload submitted. All you have to do is select the "Run on Autoscaling Cluster" checkbox in the Settings->Machine tab of your model. Please note that all of the invocation methods discussed in the previous section are still valid, but all of those will execute on the master node and will not be submitted to the scheduler managing the cluster. To send jobs to the scheduler, please use the methods described below.

For maximum performance, it is recommended to use hybrid parallelism: OpenMP on each node and MPI across nodes. You can achieve this with either of the following methods.
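The hybrid layout can be sketched as follows; the node and core counts here are illustrative, so set them to match the worker nodes you actually selected for your cluster:

```shell
# Hybrid OpenMP+MPI layout sketch (counts are illustrative).
NODES=4                               # one MPI rank per node
CORES_PER_NODE=8                      # OpenMP threads per rank
export OMP_NUM_THREADS=$CORES_PER_NODE
TOTAL=$((NODES * CORES_PER_NODE))
echo "Hybrid run: $NODES MPI ranks x $OMP_NUM_THREADS threads = $TOTAL cores"
# Illustrative launch line (one rank per node):
#   mpirun -pernode -np $NODES elk elk.in
```

This is the same division of labor that the SGE template later in this page encodes with its <n> token.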

  1. Bash Script Executions: In the Settings->Software page, first select the container Elk FP-LAPW. Leave the textbox empty, i.e. don't supply any solver name or inputs/arguments to the solver. This makes the Elk docker container available in your workflow to be invoked from your bash script. Then press the + button to add additional software. Choose CloudShell from the dropdown menu. In the empty textbox type BashShell followed by any command, for example: BashShell YourScript.sh YourScriptArgs, which runs your custom shell script. Make sure you add/upload your bash script under the Files tab of your model before running the model. You can use/adapt the example autoscaling cluster shell script provided below. These scripts are already included in the public example models that you can copy and execute.
  2. Shell Terminal Access: In the Settings->Software page, first select the container Elk FP-LAPW. Leave the textbox empty, i.e. don't supply any solver name or inputs/arguments to the solver. This makes the Elk docker container available in your workflow to be invoked from the shell terminal. Then press the + button to add additional software. Choose either xterm or gnome-terminal. When you run the model, go to the Visualizer tab and you will see your shell terminal. Here you can execute your own bash scripts. Make sure you add/upload your bash script under the Files tab of your model before running the model. You can use/adapt the example autoscaling cluster shell script provided below. These scripts are already included in the public example models that you can copy and execute.

A typical scientific simulation workflow involves three steps. STEP 1: pre-processing using a CAD environment on the master node. STEP 2: sending solver jobs to multiple worker nodes in non-graphical batch mode. STEP 3: post-processing using a CAD environment on the master node. Furthermore, pre-processing, post-processing and solvers may all be different software packages. The Kogence Settings->Software tab allows you to configure all such complex workflows with ease.

First you select the CAD programs; these will come up on the master node. Next you add a shell terminal program and directly send your shell script to the cluster-bash interpreter. This shell script sends jobs to the job scheduler managing the autoscaling cluster. Once the shell script exits, your post-processing CAD programs kick in on the master node and the worker nodes automatically scale down/terminate.
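The three-step workflow just described can be sketched as a Settings->Software list like the following (the program choices are illustrative; substitute the CAD packages and script your model actually uses):

```
1. VESTA                                               <- STEP 1: pre-processing on master node
2. CloudShell   BashShell submit-to-sge.sh elk ./path-to-projects/*.in
3. Octave                                              <- STEP 3: post-processing, starts after the script exits
```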

Combining Multiple Software in Workflow

You may want to pre-process your model using software such as VESTA, ParaView, Matlab or Octave, run the elk solver in batch mode on a cluster, and then post-process your data in yet another package. Kogence allows you to build such workflows on Settings->Software. Please note that on Kogence all software and solvers are deployed in docker containers. Once you select the containers you want, Kogence automatically composes them so you can call one program from another just like you would on your on-prem workstation or desktop. Therefore, before you start calling programs in a shell terminal or in your bash script, make sure you have selected all the software and solvers that your bash script needs; otherwise those solvers will not be available to the bash script. You can prevent Kogence from automatically invoking them through Settings->Software by not providing any inputs after the solver/binary name. This makes them available to your bash script so that you can invoke them from there.

  1. Blocking vs Non-Blocking Executions: By default, all software selected in the Settings->Software tab work in blocking mode, meaning the second software/solver will not start until the first one finishes. If you want them to start simultaneously, place & at the end of the text you typed in the empty textbox for the first entry.
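The effect of that trailing & is ordinary shell job control; the sketch below mirrors it with a slow "solver" backgrounded by & while the second step starts immediately (the messages and temporary file are illustrative):

```shell
# Sketch of blocking vs non-blocking behaviour using '&'.
tmp=$(mktemp)
( sleep 0.2; echo "first entry (solver) finished" >> "$tmp" ) &   # non-blocking
echo "second entry started immediately" >> "$tmp"
wait                                   # block here until the background job ends
echo "post-processing begins" >> "$tmp"
```

Without the &, the second echo would only run after the solver step finished.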

Output and Error Logs

On Kogence, stdout is automatically saved under the Files tab of your model in a file called ___titusiOut. Similarly, stderr is automatically saved in another file called ___titusiError. While your model is running, you will see the Visualizer button become active in the top right corner of the NavBar, with errors and outputs printed live on the screen. Once the simulation ends and the machine/cluster terminates, the Visualizer becomes inactive. The updated ___titusiOut and ___titusiError files automatically sync back to the app webserver and you can view them under the Files tab of your model.

Model Input and Result Files

You can upload and edit your input files under the Files tab of your model. Similarly, model outputs and results are also saved under the Files tab. Please make sure that you upload and edit your model files before launching the simulation. Once the simulation is launched, editing and uploading under the Files tab is locked, and the Kogence web app is connected to a high-speed shared NFS on both the master and the worker nodes, so your data is automatically shared live between all your master and worker nodes. After launch, if you requested a terminal shell program such as xterm or gnome-terminal under Settings->Software, you can use the shell terminal to check all your model inputs and outputs in your job folder. After the simulation ends, the lock on the Files tab is removed and you can see the updated files under the Files tab again.

Single Node Shell Script Example

Here is an example shell script that you can adapt to your use case.
#!/bin/bash
# Run the simulation engine on a set of project
# files, one file at a time, on the local machine.
#
# submit-to-local.sh MyProg file1 [file2 ... [fileN]]
#
# The command arguments are:
#
# MyProg    Name of the binary or solver engine. 
#
# file*     The name of a project file. One file is required
#           but multiple can be specified on the same command line.
#
# Example: Use a regular expression to submit all project files 
# in the ./path-to-projects directory
#
# submit-to-local.sh elk ./path-to-projects/*.in
#
# *********************************************************************************

#Locate the invocation and script directories.
#This script does not use them, but you may need them when adapting it to suit your needs.
CURRENTDIR="$(pwd)"   # invocation dir
SCRIPTDIR="$(dirname "$(readlink -f "$0")")"   # directory where this script lives

MYPROG=$(which "$1")
shift

export OMP_NUM_THREADS=$(nproc)

#Loop over each file provided on the command line
OPTIONS=( "$@" )
INDEX=0

while(( $# > 0 ))
do
   # You should change the file extension (.in) to suit your needs 
   if [[ ${1: -3} == ".in" ]]; then
    set -x
    "$MYPROG" "${OPTIONS[@]:0:${INDEX}}" "$1"
    set +x

   else
      INDEX=$((INDEX+1))
   fi
shift
done

Cluster Shell Script Example

On Kogence Sun Grid Engine (SGE) is the default job scheduler.

Here is an example shell script that you can adapt to your use case. This script takes multiple input files, generates an SGE submission script for each of them, and then submits them to SGE.

#!/bin/bash
# This script will create a SGE style job submission script for
# project files using the template provided in sge-template.sh. 
# The generated scripts are then submitted with qsub command.
#
# The calling convention for this script is:
#
# submit-to-sge.sh MyProg [-n <procs>] file1 [file2 ... [fileN]]
#
# The arguments are as follows:
#
# MyProg    Name of the binary or solver engine. 
#
# -n        The number of processors to use on each node for the job(s).
#           If no argument is given a default value of 8 is used.
#           Make sure you choose this number to be equal to number 
#           of CPU in the worker nodes of your cluster.  
#
# file*     A project file. One is required, but
#           multiple can be specified on one command line
#
# Users may wish to customize this script and the template file to suit 
# their needs. 
#
# Example: Use a regular expression to submit all project files 
# in the ./path-to-projects directory
#
# submit-to-sge.sh elk -n 4 ./path-to-projects/*.in
#
##########################################################################

#Locate the directory of this script so we can find template relative to this path

CURRENTDIR="$(pwd)"   # invocation dir
SCRIPTDIR="$(dirname "$(readlink -f "$0")")"   # directory where this script lives

#The location of the template file to use when submitting jobs
#The line below can be changed to use your own template file
TEMPLATE=$SCRIPTDIR/sge-template.sh

MYPROG=$(which "$1")
shift

#Determine number of processors to use. Default is 8 if no -n argument is
#given
PROCS=8
if [ "$1" = "-n" ]; then
    PROCS=$2
    shift
    shift
fi


#For each fsp file listed on the command line, generate the
#submission script and submit it with qsub
while(( $# > 0 ))
do

#Path of the input file
DIRIN=$(dirname "$1")

#Input file name without path
FILENAME=$(basename "$1")

# Output file name
OUTNAME=$DIRIN/${FILENAME%.in}.out

# error file name
ERRNAME=$DIRIN/${FILENAME%.in}.err

# SGE submission file
SHELLFILE=${1%.in}.sh

    #Generate the submission script by replacing the tokens in the template
    #Additional arguments can be added to the template for fine-tuning

#The replacements
sed -e "s#<n>#$PROCS#g" \
    -e "s#<dir_fsp>#$DIRIN#g" \
    -e "s#<filename>#$FILENAME#g" \
    -e "s#<outname>#$OUTNAME#g" \
    -e "s#<errname>#$ERRNAME#g" \
	-e "s#<myprog>#$MYPROG#g" \
    $TEMPLATE > $DIRIN/$SHELLFILE

#Submit the job script using qsub
echo Submitting: $SHELLFILE
qsub $DIRIN/$SHELLFILE

shift
done
Here is the SGE template file that the above shell script uses. Feel free to adapt it to your own use case.
#!/bin/bash
#$ -V
#$ -cwd
#$ -pe mpi <n>
#$ -S /bin/bash
#$ -o <outname>
#$ -e <errname>

cd $SGE_O_WORKDIR
echo "Starting run at: `date`"
echo "Running on $NSLOTS processors."
MY_PROG="<myprog>"
INPUT="<filename>"

export OMP_NUM_THREADS=<n>

mpirun  -pernode  -np  <n>  $MY_PROG ./${INPUT}

echo "Job finished at: `date`"
exit


Elk Input File (elk.in) Format

Elk does not come with a GUI. The code requires a single user input file, elk.in, which for most cases should be easier to understand than a GUI. Elk outputs data in standard formats that can easily be plotted with several common charting and data-analysis utilities such as VESTA, ParaView, Matlab and Octave, all of which are available on Kogence.

elk.in is written as a set of input blocks. Some of the common input blocks are described below.
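For orientation, a minimal complete elk.in assembled from such blocks might look like the following sketch. The scale block (lattice constant in Bohr) and the ngridk block (k-point grid) are additional common blocks not detailed on this page, and all numerical values are illustrative:

```
tasks
  0

scale
  10.70                 : lattice constant in Bohr (illustrative)

avec
  0.5  0.5  0.0
  0.5  0.0  0.5
  0.0  0.5  0.5

atoms
  2                     : nspecies
  'Al.in'
  1
  0.0   0.0   0.0
  'As.in'
  1
  0.25  0.25  0.25

ngridk
  4  4  4               : k-point grid (illustrative)
```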

atoms

Defines the atomic species as well as their positions in the unit cell and the external magnetic field applied throughout the muffin-tin. These fields are used to break spin symmetry and should be considered infinitesimal as they do not contribute directly to the total energy.
atoms
  2                                 : nspecies
  'Al.in'                           : spfname
  1                                 : natoms; atposl below
  0.0   0.0   0.0
  'As.in'
  1
  0.25  0.25  0.25
nspecies defines the number of atomic species in the unit cell (2 in this example: Al and As). The following line gives the species file name of the first species. The next line gives the number of atoms of that species in the unit cell, and the line after that gives the positions of those atoms in lattice coordinates. This pattern then repeats for the next species.

avec

Lattice vectors of the crystal in atomic units (Bohr).
avec
  0.5  0.5  0.0
  0.5  0.0  0.5
  0.0  0.5  0.5

tasks

A list of tasks for the code to perform sequentially. The list should be terminated with a blank line.
tasks
  0
  21
In this example, we are telling Elk to calculate the ground state (task 0) as well as a band-structure plot that includes the s, p, d, f character (task 21).

task 0 computes the self-consistent Kohn-Sham ground-state.  General information is written to the file INFO.OUT. First- and second-variational eigenvalues, eigenvectors and occupancies are written to the unformatted files EVALFV.OUT, EVALSV.OUT, EVECFV.OUT, EVECSV.OUT and OCCSV.OUT. The density, magnetisation, Kohn-Sham potential and magnetic field are written to STATE.OUT.

task 20 as well as task 21 produce a band structure along the path in reciprocal space which connects the vertices in the array vvlp1d. The band structure is obtained from the second-variational eigenvalues and is written to the file BAND.OUT with the Fermi energy set to zero. If required (i.e. task 21), band structures are plotted to files BAND_Sss_Aaaaa.OUT for atom aaaa of species ss, which include the band characters for each l component of that atom in columns 4 onwards. Column 3 contains the sum over l of the characters. Vertex location lines are written to BANDLINES.OUT.

Each task has an associated integer as follows:

  • -1 Write out the version number of the code.
  • 0 Ground state run starting from the atomic densities.
  • 1 Resumption of ground-state run using density in STATE.OUT.
  • 2 Geometry optimisation run starting from the atomic densities, with atomic positions written to GEOMETRY.OUT.
  • 3 Resumption of geometry optimisation run using density in STATE.OUT but with positions from elk.in.
  • 5 Ground state Hartree-Fock run.
  • 10 Total, partial and interstitial density of states (DOS).
  • 14 Plots the smooth Dirac delta and Heaviside step functions used by the code to calculate occupancies.
  • 15 Output L, S and J total expectation values.
  • 16 Output L, S and J expectation values for each k-point and state in kstlist.
  • 20 Band structure plot.
  • 21 Band structure plot which includes angular momentum characters for every atom.
  • 25 Compute the effective mass tensor at the k-point given by vklem.
  • 31, 32, 33 1/2/3D charge density plot.
  • 41, 42, 43 1/2/3D exchange-correlation and Coulomb potential plots.
  • 51, 52, 53 1/2/3D electron localisation function (ELF) plot.
  • 61, 62, 63 1/2/3D wavefunction plot: |Ψik(r)|^2.
  • 65 Write the core wavefunctions to file for plotting.
  • 72, 73 2/3D plot of magnetisation vector field, m(r).
  • 82, 83 2/3D plot of exchange-correlation magnetic vector field, Bxc(r).
  • 91, 92, 93 1/2/3D plot of ∇ · Bxc(r).
  • 100 3D Fermi surface plot using the scalar product p(k) = Π_i(ε_ik − ε_F).
  • 101 3D Fermi surface plot using separate bands (minus the Fermi energy).
  • 102 3D Fermi surface which can be plotted with XCrysDen.
  • 105 3D nesting function plot.
  • 110 Calculation of Mössbauer contact charge densities and magnetic fields at the nuclear sites.
  • 115 Calculation of the electric field gradient (EFG) at the nuclear sites.
  • 120 Output of the momentum matrix elements ⟨Ψ_ik| −i∇ |Ψ_jk⟩.
  • 121 Linear optical dielectric response tensor calculated within the random phase approximation (RPA) and in the q → 0 limit, with no microscopic contributions.
  • 122 Magneto optical Kerr effect (MOKE) angle.
  • 125 Non-linear optical second harmonic generation.
  • 130 Output matrix elements of the type ⟨Ψ_i,k+q| exp[i(G + q) · r] |Ψ_jk⟩.
  • 135 Output all wavefunctions expanded in the plane wave basis up to a cut-off defined by rgkmax.
  • 140 Energy loss near edge structure (ELNES).
  • 142, 143 2/3D plot of the electric field E(r) ≡ −∇V_C(r).
  • 152, 153 2/3D plot of m(r) × Bxc(r).
  • 162 Scanning-tunneling microscopy (STM) image.
  • 180 Generate the RPA inverse dielectric function with local contributions and write it to file.
  • 185 Write the Bethe-Salpeter equation (BSE) Hamiltonian to file.
  • 186 Diagonalise the BSE Hamiltonian and write the eigenvectors and eigenvalues to file.
  • 187 Output the BSE dielectric response function.
  • 190 Write the atomic geometry to file for plotting with XCrySDen and V Sim.
  • 195 Calculation of X-ray density structure factors.
  • 196 Calculation of magnetic structure factors.
  • 200 Calculation of phonon dynamical matrices on a q-point set defined by ngridq using the supercell method.
  • 202 Phonon dry run: just produce a set of empty DYN files.
  • 205 Calculation of phonon dynamical matrices using density functional perturbation theory (DFPT).
  • 210 Phonon density of states.
  • 220 Phonon dispersion plot.
  • 230 Phonon frequencies and eigenvectors for an arbitrary q-point.
  • 240 Generate the q-dependent phonon linewidths and electron-phonon coupling constants and write them to file.
  • 245 Phonon linewidths plot.
  • 250 Eliashberg function α2F(ω), electron-phonon coupling constant λ, and the McMillan-Allen-Dynes critical temperature Tc.
  • 300 Reduced density matrix functional theory (RDMFT) calculation.
  • 320 Time-dependent density functional theory (TDDFT) calculation of the dielectric response function including microscopic contributions.
  • 330, 331 TDDFT calculation of the spin-polarised response function for arbitrary q-vectors. Task 331 writes the entire response function χ(G, G′, q, ω) to file.
  • 400 Calculation of tensor moments and corresponding DFT+U Hartree-Fock energy contributions.
  • 450 Generates a laser pulse in the form of a time-dependent vector potential A(t) and writes it to AFIELDT.OUT.
  • 460 Time evolution run using TDDFT under the influence of A(t).

plot1d

Defines the path in either real or reciprocal space along which data for a 1D plot is to be produced. The user should provide nvp1d vertices in lattice coordinates (i.e. coordinates for 3 vertices in the example below). npp1d defines the number of data points in the plot.
plot1d
  3  200                                : nvp1d, npp1d
  0.5  0.0  0.0                         : vlvp1d
  0.0  0.0  0.0
  0.5  0.5  0.0

plot2d

Defines a parallelogram in either real or reciprocal space over which data for a 2D plot is to be produced. The user provides three corner vertices in lattice coordinates (the first is the origin of the parallelogram and the other two define its sides), followed by the grid size np2d along each side.
plot2d
  0.0  0.0  0.0                         : vclp2d
  1.0  0.0  0.0
  0.0  1.0  0.0
  40  40                                : np2d

plot3d

Defines a box in either real or reciprocal space over which data for a 3D plot is to be produced. The user provides four vertices in lattice coordinates (the first is the origin of the box and the other three define its edges), followed by the grid size np3d along each direction.
plot3d
  0.0  0.0  0.0                         : vclp3d
  1.0  0.0  0.0
  0.0  1.0  0.0
  0.0  0.0  1.0
  20  20  20                            : np3d

Features

• High precision all-electron DFT code

• FP-LAPW basis with local-orbitals

• APW radial derivative matching to arbitrary orders at muffin-tin surface (super-LAPW, etc.)

• Arbitrary number of local-orbitals allowed (all core states can be made valence for example)

• Every element in the periodic table available

• Total energies resolved into components

• LSDA and GGA functionals available

• Potential-only meta-GGA available with Libxc

• Core states treated with the radial Dirac equation

• Simple to use: just one input file required with all input parameters optional

• Multiple tasks can be run consecutively

• Determination of lattice and crystal symmetry groups from input lattice and atomic coordinates

• Determination of atomic coordinates from space group data (with the Spacegroup utility)

• XCrysDen and V_Sim file output

• Automatic reduction from conventional to primitive unit cell

• Automatic determination of muffin-tin radii

• Full symmetrisation of density and magnetisation and their conjugate fields

• Automatic determination and reduction of the k-point set

• Spin polarised calculations performed in the most general way: only (n(r), m(r)) and (v_s(r), B_s(r)) are referred to in the code

• Spin symmetry broken by infinitesimal external fields

• Spin-orbit coupling (SOC) included in second-variational scheme

• Non-collinear magnetism (NCM) with arbitrary on-site magnetic fields

• Fixed spin-moment calculations (with SOC and NCM)

• Fixed tensor moment calculations (experimental)

• Spin-spirals for any q-vector

• Spin polarised cores

• Automatic determination of the magnetic anisotropy energy (MAE) (experimental)

• Band structure plotting with angular momentum character

• Total and partial density of states with irreducible representation projection

• Charge density plotting (1/2/3D)

• Plotting of exchange-correlation and Coulomb potentials (1/2/3D)

• Electron localisation function (ELF) plotting (1/2/3D)

• Fermi surface plotting (3D)

• Magnetisation plots (2/3D)

• Plotting of exchange-correlation magnetic field, Bxc (2/3D)

• Plotting of ∇⋅Bxc (1/2/3D)

• Wavefunction plotting (1/2/3D)

• Electric field (E=-∇V) plotting (1/2/3D)

• Simple scanning tunnelling microscopy (STM) imaging based on the local density of states (LDOS) (experimental)

• Forces - including incomplete basis set (IBS) and core corrections

• Forces work with spin-orbit coupling, non-collinear magnetism and LDA+U

• Structural optimisation of both atomic positions and lattice vectors

• Iso-volumetric optimisation of unit cell

• Phonons for arbitrary q-vectors computed with density functional perturbation theory (DFPT)

• Phonons computed with the supercell method

• Phonon dispersion and density of states

• Thermodynamic quantities calculated from the phonon DOS: free energy, entropy, heat capacity

• Phonon calculations can be distributed across networked computers

• Electron-phonon coupling matrices

• Phonon linewidths

• Eliashberg function, α2F(ω)

• Electron-phonon coupling constant, λ

• McMillan-Allen-Dynes critical temperature, Tc

• Eliashberg equations solved self-consistently (experimental)

• Exact exchange (EXX) optimised effective potential (OEP) method (with SOC and NCM) (experimental)

• EXX energies (with SOC and NCM) (experimental)

• Hartree-Fock for solids (with SOC and NCM) (experimental)

• LDA+U: fully localised limit (FLL), around mean field (AMF) and interpolation between the two; works with SOC, NCM and spin-spirals

• Reduced density matrix functional theory (RDMFT) for solids (experimental)

• Bethe-Salpeter equation (BSE), including beyond the Tamm-Dancoff approximation; works with SOC and NCM

• Time-dependent density functional theory (TDDFT) for linear optical response calculations

• GW approximation spectral functions; works with SOC and NCM (experimental)

• Mössbauer hyperfine parameters: isomer shift, EFG and hyperfine contact fields (experimental) 

• First-order optical response

• Kerr angle and Magneto-Optic Kerr Effect (MOKE) output (experimental) 

• Generalised DFT correction of L. Fritsche and Y. M. Gu, Phys. Rev. B 48, 4250 (1993) (experimental)

• Energy loss near edge structure (ELNES) 

• Non-linear optical (NLO) second harmonic generation 

• L, S and J expectation values

• Effective mass tensor for any state

• Equation of state fitting (with the EOS utility)

• Iterative diagonalisation with fine-grained parallelisation

• Interface to the ETSF Libxc exchange-correlation functional library (experimental)

• OpenMP parallelisation

• Message passing interface (MPI) parallelisation

• Efficient OpenMP+MPI hybrid parallelism


References

  1. http://elk.sourceforge.net/