Event Calendar

The Max Planck Institute for Polymer Research regularly hosts scientific as well as public events and conferences. Here you will find an overview of upcoming events.

Dynamic Load Balancing for Parallel Particle Simulations

Parallel computing has developed into a central tool in scientific computing for solving large-scale problems involving huge numbers of degrees of freedom, complex geometries, or coupled applications. Parallel efficiency is key for estimating to what degree the computational resources are used, and whether there is still potential to speed up an application by organising data or workflow differently across processors. To reduce the wall-clock time of an application, a goal might be to use as many processors of a parallel architecture as possible. However, the scalability of a parallel application depends on a number of characteristics, among which are efficient communication, an equal distribution of work, and an efficient data layout.

Many parallel applications, especially particle- or mesh-based algorithms like Molecular Dynamics or Lattice Boltzmann methods, are implemented via domain decomposition techniques, where each processor administrates a certain geometrical region of a physical system. In such cases, unequal work load across the processor network is to be expected when particles are not distributed homogeneously or the computational cost of particle interactions differs between parts of the system. Also, when heterogeneous architecture components are coupled together in a complex cluster network (e.g. CPU-GPU, different types of CPUs, or different network speeds), wall-clock times for solving a problem with the same number of degrees of freedom will vary across the parallel application. In these scenarios the code has to decide how to redistribute the work among processes according to a work-sharing protocol, or how to dynamically adjust the computational domains, in order to balance the workload.

In the seminar, I will give an introduction to the problem of load balancing and discuss various methods to redistribute data or re-organise the domain decomposition, in order to optimise the work load and improve parallel efficiency and scalability.
As an outlook, I will discuss developments from the European Centre of Excellence E-CAM, where different methods have been implemented in a library that can be used in community codes.
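To make the idea of dynamically adjusting computational domains concrete, here is a minimal sketch (not taken from the talk or the E-CAM library) of one common strategy in one dimension: after measuring the work each domain actually performed in the last step, the domain boundaries are shifted so that each process receives an equal share of the integrated work. The function name and the assumption that work is uniformly distributed inside each old domain are illustrative simplifications.

```python
import bisect

def rebalance(boundaries, work_per_domain):
    """Shift 1D domain boundaries so each domain carries equal work.

    boundaries:      sorted list of N+1 positions defining N domains
    work_per_domain: measured cost of each domain in the last step

    Returns new boundaries splitting the total work evenly, assuming
    work is uniformly distributed within each old domain (a toy model).
    """
    n = len(work_per_domain)
    target = sum(work_per_domain) / n      # ideal work per process
    # cumulative work at each old boundary
    cum = [0.0]
    for w in work_per_domain:
        cum.append(cum[-1] + w)
    new_bounds = [boundaries[0]]
    for k in range(1, n):
        goal = k * target
        # find the old domain i that contains the cumulative-work level `goal`
        i = max(0, min(bisect.bisect_left(cum, goal) - 1, n - 1))
        # interpolate the position of `goal` inside that domain
        frac = (goal - cum[i]) / (cum[i + 1] - cum[i])
        new_bounds.append(boundaries[i] + frac * (boundaries[i + 1] - boundaries[i]))
    new_bounds.append(boundaries[-1])
    return new_bounds

# Example: the left domain did 3x the work of the right one,
# so its boundary moves left and the heavy domain shrinks.
nb = rebalance([0.0, 1.0, 2.0], [3.0, 1.0])
```

In a real particle code this adjustment would be applied every few time steps, with the "work" taken from per-process timers, so the decomposition tracks slowly moving load imbalances.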
I will present techniques for finding reaction coordinates to be used in conjunction with free-energy biasing techniques such as the adaptive biasing force method. This makes it possible, for instance, to improve the sampling of configurations of complex proteins. However, reaction coordinates are often based on an intuitive understanding of the system, and one would like to complement this intuition, or even replace it, with automated tools. One appealing tool is the autoencoder, whose bottleneck layer provides a low-dimensional representation of high-dimensional atomistic systems. I will discuss some mathematical foundations of this method and present illustrative applications, including alanine dipeptide. Some ongoing extensions to more demanding systems, namely HSP90, will also be mentioned by Zineb Belkacemi, the PhD student working on this project.
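The bottleneck idea can be illustrated with a deliberately tiny sketch (not the speaker's method): a linear autoencoder with a one-dimensional bottleneck, trained by gradient descent on synthetic 3-D "configurations" lying near a line. The data, dimensions, and learning rate are all hypothetical stand-ins for high-dimensional atomistic coordinates; the bottleneck variable Z plays the role of a candidate reaction coordinate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for atomistic configurations: 3-D points near a 1-D line.
t = rng.uniform(-1.0, 1.0, size=(200, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 0.01 * rng.normal(size=(200, 3))

d, k = 3, 1                              # input dim, bottleneck dim
W_enc = 0.1 * rng.normal(size=(d, k))    # encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))    # decoder weights
lr = 0.05

for _ in range(500):
    Z = X @ W_enc        # bottleneck: low-dimensional representation
    X_hat = Z @ W_dec    # reconstruction of the input
    err = X_hat - X
    # gradients of the mean squared reconstruction error
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

A linear autoencoder like this recovers the leading principal subspace; the appeal of the general (nonlinear) version mentioned in the abstract is that the bottleneck can capture curved low-dimensional structure that a linear projection cannot.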