hello, I was wondering about the computing power of 200 CPU credit hours
as in what is offered in 1 credit hour?
like how many nodes are offered, how much storage, type of core etc
what is vcpu?
1 credit = 1 cpu-hr, all-inclusive in a single price that only counts how many vCPUs you are using.
Let's say you have 100 credits. You can run a cluster of 5 nodes, each with 16 CPUs, for a little more than 1 hr.
We don't charge separately for memory, storage, bandwidth etc.
A vCPU is what MS Windows shows as a "virtual processor" if you go to Task Manager.
Are you a robot?
Ok ) I'd like to say that the Elk FP-LAPW code you installed here is outdated. Its version is 4.0.15, while the current version is 5.2.14.
Yes...I just tried to run my simulation with this code for 5 hours...but it worked for a few minutes and then stopped before the simulation had finished.
Not sure...it seemed to me that it wasn't running in parallel mode. So if the engineers reinstall the code, it is crucial to check that it is compiled with MPI support.
OK, I'll at this, thank you!
I meant I will have a look )
Thank you, bye!
No, I am not! How can I help you?
Thanks for letting me know. I will let the engineering team know.
Are you interested in using this software? In that case we can prioritize updating to the latest version.
Do you think that it failed because it was using an older version of Elk?
Sure, I will let them know.
By the way, are you an experienced user of Elk?
We are looking for some experts to benchmark software on Kogence and offer several perks in return.
Thanks. We will get back to you by tomorrow regarding the newer version.
By the way
If you can add email@example.com as a collaborator in the model that is failing, our team can take a look at what is wrong with it.
Make sure you give firstname.lastname@example.org administrative privileges.
We will get back to you.
The latest version (5.2.14) of the Elk all-electron full-potential linearized augmented-plane-wave (FP-LAPW) code is now available to run on Kogence autoscaling clusters. Default parallelism on Kogence is hybrid -- meaning OpenMP within each node and OpenMPI across nodes. You can change this behavior if you like. Details here: https://kogence.com/app/docs/Elk_FP-LAPW
We tested this with our example models and all of them are running well.
I tried running your model (https://kogence.com/app/docs/Alpha-Pu) on a 4-CPU machine but it seems to stop after a few minutes. I did not look at your input deck at all. Have you set up your input deck such that it needs a specific # of CPUs?
I have changed the Software settings for you to properly start this simulation with the latest version of Elk. In the software settings I showed you how to use either OpenMP or MPI, as you like. You only need to use one, and you can delete the other one.
Please let me know if this works for you or if you need further help.
We shared 2 models with you. One uses OpenMP and the other uses MPI. Both run well. Your model needs a large amount of RAM, so make sure you select large enough machines. We tried on 128 CPUs + 2 TB RAM and on 64 CPUs + 1 TB RAM. Both worked well.
I saw that you guys have free resources for Quantum Espresso, is that true?
I'm curious on how that works
what type of account should I sign up for
is it possible to run quantum espresso from ssh?
also I just wanted to know how much is 200 credits
as in how much computer power
Yes, we do.
The individual free plan is fine to begin with, until you want to upgrade.
The easiest way to get started is to first log in. Then search for some good Quantum Espresso models on the Model Library page.
One of our scientists, Mukul Agrawal, has some good working demo models.
You can make a copy of one by pressing the Copy button. Give it a new name.
Then just press the Run button. First make sure it runs as-is.
Then you can go to the Files tab and modify as you like.
Just add xterm or gnome-terminal in Settings -> Software; click the Run button. Wait for the Visualizer button to become active.
You will be in a remote desktop with a shell terminal available.
We have different plans on the pricing page. We sell credits in "Blocks". The bigger the block, the cheaper the per-unit cost.
I believe the smallest block is 200 credits for $10 .... but the price keeps changing, so please check the website.
200 credits is 200 cpu-hrs.
You can do 100 hrs on a 2-CPU machine, or you can do 2 hrs on 100 CPUs.
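To make the credit arithmetic concrete, here is a minimal Python sketch of the pricing rule described above (assuming 1 credit = 1 vCPU-hour with no separate charge for memory, storage, or bandwidth; the `credits_used` helper is hypothetical, not part of Kogence):

```python
def credits_used(vcpus: int, hours: float) -> float:
    """Credits consumed by a run: 1 credit = 1 vCPU-hour (hypothetical helper)."""
    return vcpus * hours

# 100 hrs on a 2-vCPU machine costs the same as 2 hrs on 100 vCPUs:
print(credits_used(2, 100))        # 200
print(credits_used(100, 2))        # 200
# A 5-node cluster with 16 vCPUs per node, run for 1.25 hr, uses 100 credits:
print(credits_used(5 * 16, 1.25))  # 100.0
```

The same budget can thus buy a long run on a small machine or a short run on a large cluster; only total vCPU-hours matter.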
Are GPUs available?
@Oldstyle, yes, GPUs are available under Team and Enterprise plans. Which software are you interested in using with GPUs?
how to use Jupyter notebook?
no, I don't get that
Did you follow instructions on the site?
The easiest way for you to get started is to fork an existing model and just try running it as-is first.
I want to run Quantum Espresso calculations.
How can I do that?
Could you help me?
Trying to run a LAMMPS script with two stages, namely a run command, then "change something", and a second run command.
The job stops after the first run without any errors. Running exactly the same script on a local machine works fine.
Is there a limit on multiple "runs"?
Somehow your message skipped our attention. Pls accept our apologies.
Pls go to the Collaboration tab of your model and share your model with email@example.com with admin privileges. We will take a look and provide the fix. Thanks.
I am getting the following error message: "Error condition encountered during test: exit status = 2".
Can you help me in resolving it?
Where do you see that error being reported?
Can you open your model, go to the "Collaboration" tab, and add firstname.lastname@example.org as a collaborator with admin rights?
We will take a look.
I am trying to vc-relax a 2D structure. Please tell me what mistake I am making. Thank you
You should always check for errors and printed outputs under Files -> ___titusiError and Files -> ___titusiOutput. For example, check this file for your model:
On line #121, you mistyped the file extension ".in" twice.
I have fixed this for you and reran the simulation.
New output seems to have been generated:
Let us know if you need any further assistance.
I wish to know if the HPC facility can run Thermo_pw.x in the quantum espresso software. I have some jobs to send now. Thanks
If it is possible, I would also like to know the version. Thanks.
Yes, thermo_pw.x is available. Version is 0.7.9. QE version is 6.1.
Let us know if you face any problems using it.