Odeint is designed in a very flexible way: the algorithms are completely independent of the underlying containers and even of the basic algebraic computations. That means odeint works not only with standard containers like vector< double > or array< double , N >, but also cooperates nicely with many other libraries.
- CUDA : With odeint you can solve ODEs on GPUs using NVIDIA's CUDA technology.
- MTL : Solve ODEs defined on matrices using the Matrix Template Library 4.
- MKL : To get top performance on your CPU, odeint can use Intel's Math Kernel Library for the calculations.
- NetEvo : odeint is used as the backend for simulating complex networks.
- Google Summer of Code : The development was supported by the Boost community as a project of the Google Summer of Code 2011.
Odeint can solve ODEs on your CUDA GPU by using the Thrust library. Just use thrust::device_vector as the state type and thrust_algebra / thrust_operations when defining the stepper, and odeint runs all computations on the GPU:
typedef thrust::device_vector< double > state_type;
runge_kutta4< state_type , double , state_type , double ,
              thrust_algebra , thrust_operations > rk4;
For more details see the chapter on CUDA and Thrust in the documentation.
Odeint also supports the vector and matrix types from the Matrix Template Library 4 out of the box. This gives you an easy way to define linear systems of ODEs via matrix-vector products. See examples/mtl for an example of how MTL can be used.
Using Intel's Math Kernel Library as the backend for odeint's computations gives top performance on modern CPUs. See performance/odeint_rk4_phase_lattice_mkl.cpp for an example of how MKL is integrated into odeint.
NetEvo is a computing framework and a collection of end-user tools designed to let researchers investigate evolutionary aspects of dynamical complex networks. NetEvo is currently being redesigned; the future version will be based on modern C++ and will use odeint for simulating the dynamics on networks.