Simulation of Maxwell's Equations on GPU Using a High-Order Error-Minimized Scheme

Tony W.H. Sheu, S. Z. Wang, J. H. Li, Matthew R. Smith

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


In this study an explicit Finite Difference Method (FDM) based scheme is developed to solve Maxwell's equations in the time domain for a lossless medium. This manuscript focuses on two unique aspects: the three-dimensional, time-accurate discretization of the hyperbolic system of Maxwell's equations on a three-point non-staggered grid stencil, and its application to parallel computing through the use of Graphics Processing Units (GPUs). The proposed temporal scheme is symplectic, thus permitting conservation of all Hamiltonians in Maxwell's equations. Moreover, to enable accurate predictions over long time frames, a phase-velocity-preserving scheme is developed for the treatment of the spatial derivative terms. As a result, the chosen time increment and grid spacing can be optimally coupled; an additional theoretical investigation into this pairing is also presented. Finally, the application of the proposed scheme to parallel computing on one Nvidia Tesla K20 GPU card is demonstrated. For the benchmarks performed, the parallel speedup relative to a single core of an Intel i7-4820K CPU is approximately 190x.
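The paper's high-order error-minimized scheme is not reproduced here, but the general structure of an explicit time-domain update for Maxwell's equations can be sketched with a standard one-dimensional Yee leapfrog scheme. Note the differences: unlike the non-staggered three-point stencil described in the abstract, Yee's scheme staggers E and H in space and time, and it is only second-order accurate; the grid size, source pulse, and Courant number below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fdtd_1d(nx=200, nt=300, courant=0.5):
    """One-dimensional FDTD (Yee leapfrog) update for a lossless medium
    in normalized units (c = eps0 = mu0 = 1). Illustrative sketch only;
    the paper uses a high-order non-staggered scheme instead."""
    ez = np.zeros(nx)        # electric field at integer grid points
    hy = np.zeros(nx - 1)    # magnetic field at half-integer grid points
    for n in range(nt):
        # Update H from the spatial difference (curl) of E.
        hy += courant * (ez[1:] - ez[:-1])
        # Update interior E from the spatial difference (curl) of H.
        ez[1:-1] += courant * (hy[1:] - hy[:-1])
        # Soft Gaussian source injected at the grid center (hypothetical).
        ez[nx // 2] += np.exp(-((n - 30) ** 2) / 100.0)
    return ez, hy
```

The explicit update loop is what maps naturally onto a GPU: every grid point's new value depends only on its immediate neighbors at the previous half step, so each point can be updated by an independent thread, which is the basis of the speedup reported in the abstract.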

Original language: English
Pages (from-to): 1039-1064
Number of pages: 26
Journal: Communications in Computational Physics
Issue number: 4
Publication status: Published - 2017 Apr 1

All Science Journal Classification (ASJC) codes

  • Physics and Astronomy (miscellaneous)

