Help:KComposition

[Image: KComposition.png]

On Kogence, every software application, simulator and solver is deployed in its own independent Docker container. Each application is provisioned as an HPC microservice running on the HPC cluster hardware you requested in the Cluster tab of your model. You can build complex simulation workflows combining multiple containers using the Stack tab of your model. Kogence's kComposition technology automatically composes the microservices into a full software stack without any effort on your part. Solver binaries in one container can be called from any other container selected in the stack. You do not need to write any composition scripts.

Resource Managers + Dockers: kComposition provides beautiful integration of Docker containers with HPC resource managers. We support all popular resource managers: SGE/Univa, Torque/PBS, LSF and Slurm. You can submit jobs to the cluster using your on-prem submission scripts, which do not need to be aware that applications are provisioned as Docker containers. E.g., you type matlab -desktop and the Matlab desktop comes up. From your Python script, which runs inside a Python container, you call matlab -r myscript.m and your myscript.m runs in a separate Matlab container and shares its results back with your calling container. You can run qsub -pe mpi matlab -r myscript.m and it will send your job to the autoscaling cluster. The fact that each software application runs as a microservice on user-defined hardware resources is completely transparent to the end user. You use each application as if they were all installed on your desktop machine. What is the benefit of provisioning each software application as a separate microservice? Please see below.
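As a sketch of the MATLAB-from-Python call described above: the Python code builds the same command line you would type on a desktop and needs no Docker-specific logic, because kComposition routes the call to the MATLAB container transparently. (The helper names below are illustrative, not a Kogence API.)

```python
import shlex
import subprocess

def build_matlab_cmd(script):
    """Build the same `matlab -r myscript.m` invocation shown above.
    The command is identical to what you would run on a desktop install;
    container routing is handled by the platform, not by this code."""
    return ["matlab", "-r", script]

def run_matlab(script):
    # In a Kogence Python container, this launches the MATLAB microservice;
    # results come back via the shared file system.
    return subprocess.run(build_matlab_cmd(script), check=True)

# The command line, exactly as in the example above:
print(shlex.join(build_matlab_cmd("myscript.m")))  # matlab -r myscript.m
```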

MPI + Dockers: kComposition integrates the Message Passing Interface (MPI) with Docker containers. We support OpenMPI, MPICH and Intel MPI, in all versions of these MPI libraries. Anyone who has tried to use MPI with Docker will understand the challenges. There are two potential pathways: starting MPI outside the containers or starting it inside them. Both have challenges that have so far been insurmountable. Each container gets its own networking stack, meaning it has its own IP address. If you start MPI outside the containers, you lose shared-memory parallelism. If you start it inside a container, you break the tight integration with resource managers and cannot scale across multiple nodes. Please try the Kogence integration and let us know what you think. We are very eager!
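A hedged sketch of what a scheduler-driven MPI submission might look like, assuming an SGE-style qsub front end and an mpirun launcher as in the examples above (the slot count and helper name are illustrative; the platform, not this code, handles the container wiring):

```python
import shlex

def build_mpi_job(solver_cmd, slots):
    """Compose a qsub command for an MPI job, as in the examples above.
    Because kComposition connects the containers' network namespaces,
    the same command works whether ranks share one node (shared memory)
    or spread across autoscaled nodes."""
    return (["qsub", "-pe", "mpi", str(slots),
             "mpirun", "-np", str(slots)] + shlex.split(solver_cmd))

# Example: a 16-slot MATLAB MPI job, mirroring the qsub example above.
print(shlex.join(build_mpi_job("matlab -r myscript.m", 16)))
```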

Deploy Your Own Software Application on Kogence: The Kogence platform is extensible and you can deploy your own containers. If you have developed a new solver, simulator or software application, you can now deploy it on Kogence. Just click the Containers tab on the top NavBar, then click Create New Container and provide a link to your container repository. We support any and all Debian-based Linux containers. You only need to package your solver in your container - you do not need anything else - no schedulers, no OS utilities, no graphics utilities, no ssh. Nothing. Period. You can decide whether to restrict your solver's usage to yourself, to your colleagues/collaborators, or to let other Kogence users use your solver. If you open it to other Kogence users, they will not have access to either your source code or your binaries. They will be able to link your solver/software to their models and use the functionality it provides. kComposition works seamlessly with your custom containers. Under the Stack tab of your model you can connect multiple containers, including your custom container, and create a complex software stack.
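A minimal Dockerfile for such a custom solver container might look like the sketch below. The base-image tag and the mysolver binary are illustrative; the point is that, per the text above, only the solver itself is packaged - no scheduler, ssh or graphics stack.

```dockerfile
# Any Debian-based image works; the tag here is illustrative.
FROM debian:bookworm-slim

# Package only the solver binary - nothing else is required.
COPY mysolver /usr/local/bin/mysolver

# Make the solver callable by name from other containers in the stack.
ENTRYPOINT ["/usr/local/bin/mysolver"]
```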

Secure Dockers: Out of the box, Docker containers are not secure from an HPC perspective. On the Kogence platform, we orchestrate and compose microservices such that the logged-in user remains a non-privileged user both on the host machine and in any of the running containers.