A Deep Learning Approach to Universal Binary Visible Light Communication Transceiver

Hoon Lee, Tony Q.S. Quek, Sang Hyun Lee

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)

Abstract

This paper studies a deep learning (DL) framework for the design of a binary-modulated visible light communication (VLC) transceiver with universal dimming support. Dimming control of the optical binary signal boils down to a combinatorial codebook design in which the average Hamming weight of the binary codewords matches an arbitrary dimming target. An unsupervised DL technique is employed to obtain a neural network that replaces the encoder-decoder pair recovering the message from the optically transmitted signal. To this end, a novel stochastic binarization method is developed to generate the set of binary codewords from continuous-valued neural network outputs. For universal support of arbitrary dimming targets, the DL-based VLC transceiver is trained with multiple dimming constraints, which leads to a constrained training optimization that is very challenging to handle with existing DL methods. We develop a new training algorithm that addresses the dimming constraints through a dual formulation of the optimization. Based on the developed algorithm, the resulting VLC transceiver can be optimized via an end-to-end training procedure. Numerical results verify that the proposed codebook outperforms theoretically optimal constant-weight codebooks under various VLC setups.
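The stochastic binarization idea in the abstract, i.e., drawing binary codewords whose average Hamming weight matches a dimming target, can be illustrated with a minimal sketch. This is not the paper's actual implementation: the codeword length, dimming level, and the stand-in for the neural network outputs below are all hypothetical placeholders.

```python
import numpy as np

def stochastic_binarize(probs, rng):
    """Draw a binary codeword: bit i is 1 with probability probs[i]."""
    return (rng.random(probs.shape) < probs).astype(int)

rng = np.random.default_rng(0)
n = 16          # hypothetical codeword length
dimming = 0.3   # hypothetical dimming target (average Hamming weight / n)

# Stand-in for continuous-valued neural network outputs after a sigmoid;
# here every bit probability is simply set to the dimming target.
probs = np.full(n, dimming)

# Sampling many codewords; their normalized average Hamming weight
# concentrates around the dimming target.
codewords = np.array([stochastic_binarize(probs, rng) for _ in range(2000)])
avg_weight = codewords.sum(axis=1).mean() / n  # close to the dimming target
```

In the paper's setting the bit probabilities come from the encoder network and the dimming constraint is enforced during training via a dual (Lagrangian) formulation; the sketch only shows why Bernoulli sampling of the continuous outputs yields codewords with the desired average weight.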

Original language: English
Article number: 8891920
Pages (from-to): 956-969
Number of pages: 14
Journal: IEEE Transactions on Wireless Communications
Volume: 19
Issue number: 2
DOIs
Publication status: Published - Feb 2020

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Applied Mathematics
