Intel MPI Library 2017 focuses on making applications perform better on Intel architecture-based clusters, implementing the high-performance MPI-3.1 standard on multiple fabrics. It enables you to quickly deliver maximum end user performance, even if you change or upgrade to new interconnects, without requiring changes to the software or operating environment.
Use this high-performance MPI message-passing library to develop applications that can run on multiple cluster interconnects, chosen by the user at runtime. Get excellent performance for enterprise, divisional, departmental, workgroup and personal high-performance computing.
- Now optimised for 2nd Generation Intel Xeon Phi processors and Intel Omni-Path Architecture
- Designed and developed for high scalability
- Supports the latest MPI-3.1 standard
- MPICH ABI compatibility
Intel MPI Library - Features
- Performance: An optimised shared-memory path for multicore platforms delivers higher communication throughput and lower latencies, and a native InfiniBand interface further reduces latency. Brand-new support for the Open Fabrics Interface (OFI) has been added for optimal performance on Intel Omni-Path solutions. Multi-rail capability provides higher bandwidth and increased interprocess communication, and Tag Matching Interface (TMI) support delivers higher performance on Intel True Scale, QLogic PSM and Myricom MX solutions.
- Scalability: Implementing the high-performance MPI-3.1 standard on multiple fabrics, Intel MPI Library for Windows and Linux focuses on making applications perform better on Intel architecture-based clusters. Intel MPI Library enables you to quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring major modifications to the software or operating environment.
- Interconnect Independence and Flexible Runtime Fabric Selection: Whether you need to run over TCP sockets, shared memory or one of the many Remote Direct Memory Access (RDMA) based interconnects, including InfiniBand, Intel MPI Library covers all configurations by providing an accelerated, universal, multi-fabric layer for fast interconnects via the Direct Access Programming Library (DAPL) or the OpenFabrics Association (OFA) methodology.
- MPI-3.1 Standard Support: The release of the MPI-3.1 standard marks the next major evolution of the Message Passing Interface. Significant changes to one-sided remote memory access communications, the addition of non-blocking collective operations and support for large-count messages greater than 2 GB enhance both usability and performance. All of this is now available in Intel MPI Library 2017.
- Binary compatibility: Intel MPI Library offers binary compatibility with existing MPI-1.x and MPI-2.x applications. Even if you're not ready to move to the new standard, you can still take advantage of the latest Intel MPI Library performance improvements without recompiling.
- Support for Mixed Operating Systems: Run a single MPI job using a cluster with mixed operating systems (Windows and Linux) under the Hydra process manager. Get more flexibility in job deployment with this added functionality.
- Latest Processor Support: Intel consistently offers the first set of tools to take advantage of the latest performance enhancements in its newest products, while preserving compatibility with older Intel and compatible processors. New support includes the AVX2, TSX, FMA3 and AVX-512 instruction sets.
Intel MPI Library - System Requirements
- Processors: Intel processors, coprocessors and compatibles
- Languages: Natively supports C, C++ and Fortran development
- Development Environments: Microsoft Visual Studio (Windows), Eclipse/CDT (Linux)
- Operating Systems: Linux and Windows
- Interconnect Fabric Support:
- Shared memory.
- RDMA-capable network fabrics through DAPL (e.g., InfiniBand, Myrinet).
- Sockets (e.g., TCP/IP over Ethernet, Gigabit Ethernet) and others.