Software

This is a selected list of the open source projects I have developed or contributed to as part of my research. See my GitHub profile for a more detailed list.

The asterisk (*) on author names in publication entries indicates equal contribution to the work.

wavetorch

This package provides recurrent neural network (RNN) modules for computing time-domain solutions and gradients of the wave equation with PyTorch. This library is the basis for our analog machine learning paper.

    GitHub Repository
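The idea can be illustrated with a minimal, hypothetical sketch (not the wavetorch API): a leapfrog update of the discretized scalar wave equation is just a recurrent cell, so PyTorch can backpropagate through the unrolled time steps to obtain gradients with respect to the wave-speed distribution.

```python
import torch

def step(u_prev, u_curr, c, dt=0.5, dx=1.0):
    # Discrete 1D Laplacian with fixed (zero) boundaries
    lap = torch.zeros_like(u_curr)
    lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
    # Leapfrog update: u_{n+1} = 2 u_n - u_{n-1} + (c dt/dx)^2 Lap(u_n)
    return 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap

n = 64
c = torch.full((n,), 1.0, requires_grad=True)    # trainable wave-speed profile
u_prev = torch.zeros(n)
u_curr = torch.zeros(n)
u_curr[n // 2] = 1.0                             # initial pulse at the center

for _ in range(50):                              # unrolled recurrence in time
    u_prev, u_curr = u_curr, step(u_prev, u_curr, c)

loss = u_curr.pow(2).sum()                       # probe the final field energy
loss.backward()                                  # gradient of loss w.r.t. c
print(c.grad.shape)                              # torch.Size([64])
```

The time step here satisfies the CFL stability condition (c·dt/dx = 0.5), so the unrolled recurrence stays bounded over the 50 steps.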

    ceviche

    This is a package developed primarily by Tyler Hughes, based on our code and lessons learned from angler and fdfdpy. Ceviche is designed around the flexible automatic differentiation (AD) capabilities of the HIPS autograd package. This design choice pays off when optimizing photonic devices, because AD simplifies constructing objective / loss functions and taking gradients of several simulations simultaneously. Having a fully differentiable optical simulation framework also facilitates integrating inverse design with machine learning models.

    This code is the basis for the results presented in our forward mode differentiation paper [1].

    1. Forward-Mode Differentiation of Maxwell’s Equations
      Tyler W Hughes, Ian A.D. Williamson, Momchil Minkov, Shanhui Fan
      ACS Photonics, vol. 6, num. 11, pp. 3010-3016

      DOI PDF PDF (supporting info)

      We present a previously unexplored ‘forward-mode’ differentiation method for Maxwell’s equations, with applications in the field of sensitivity analysis. This approach yields exact gradients and is similar to the popular adjoint variable method, but provides a significant improvement in both memory and speed scaling for problems involving several output parameters, as we analyze in the context of finite-difference time-domain (FDTD) simulations. Furthermore, it provides an exact alternative to numerical derivative methods, based on finite-difference approximations. To demonstrate the usefulness of the method, we perform sensitivity analysis of two problems. First, we compute how the spatial near-field intensity distribution of a scatterer changes with respect to its dielectric constant. Then, we compute how the spectral power and coupling efficiency of a surface grating coupler change with respect to its fill factor.

    GitHub Repository Notebooks Slides
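    The core idea of the paper can be sketched on a generic linear system A(p) x = b, a toy stand-in for the discretized Maxwell problem: differentiating the solve in forward mode yields the sensitivity of every output component with one extra solve per input parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned base matrix
dA = rng.standard_normal((n, n))                   # dA/dp for one parameter p
b = rng.standard_normal(n)

def solve(p):
    return np.linalg.solve(A0 + p * dA, b)

p = 0.1
x = solve(p)
# Differentiate A(p) x = b in forward mode:  A (dx/dp) = -(dA/dp) x
dx_dp = np.linalg.solve(A0 + p * dA, -dA @ x)

# Sanity check against central finite differences
h = 1e-6
fd = (solve(p + h) - solve(p - h)) / (2 * h)
print(np.max(np.abs(dx_dp - fd)))                  # small residual
```

    The adjoint method would instead require one extra solve per *output*, which is why forward mode wins when there are few parameters but many outputs, as in the spectral sweeps analyzed in the paper.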

    neurophox

    Simulation of optical neural networks (ONNs) based on on-chip interferometer meshes and electro-optic nonlinear activation functions, built on TensorFlow and Python. This framework was used for the simulations in our ONN activation function paper [1] and our parallel nullification paper [2].

    1. Reprogrammable Electro-Optic Nonlinear Activation Functions for Optical Neural Networks
      Ian A. D. Williamson, Tyler W. Hughes, Momchil Minkov, Ben Bartlett, Sunil Pai, Shanhui Fan
      IEEE Journal of Selected Topics in Quantum Electronics, vol. 26, num. 1, pp. 1-12

      DOI PDF

      We introduce an electro-optic hardware platform for nonlinear activation functions in optical neural networks. The optical-to-optical nonlinearity operates by converting a small portion of the input optical signal into an analog electric signal, which is used to intensity-modulate the original optical signal with no reduction in processing speed. Our scheme allows for complete nonlinear on–off contrast in transmission at relatively low optical power thresholds and eliminates the requirement of having additional optical sources between each of the layers of the network. Moreover, the activation function is reconfigurable via electrical bias, allowing it to be programmed or trained to synthesize a variety of nonlinear responses. Using numerical simulations, we demonstrate that this activation function significantly improves the expressiveness of optical neural networks, allowing them to perform well on two benchmark machine learning tasks: learning a multi-input exclusive-OR (XOR) logic function and classification of images of handwritten numbers from the MNIST dataset. The addition of the nonlinear activation function improves test accuracy on the MNIST task from 85% to 94%.

    2. Parallel Fault-Tolerant Programming of an Arbitrary Feedforward Photonic Network
      Sunil Pai, Ian A. D. Williamson, Tyler W. Hughes, Momchil Minkov, Olav Solgaard, Shanhui Fan, David A. B. Miller
      arXiv:1909.06179 [physics]

      arXiv PDF

      Reconfigurable photonic mesh networks of tunable beamsplitter nodes can linearly transform N-dimensional vectors representing input modal amplitudes of light for applications such as energy-efficient machine learning hardware, quantum information processing, and mode demultiplexing. Such photonic meshes are typically programmed and/or calibrated by tuning or characterizing each beamsplitter one-by-one, which can be time-consuming and can limit scaling to larger meshes. Here we introduce a graph-topological approach that defines the general class of feedforward networks commonly used in such applications and identifies columns of non-interacting nodes that can be adjusted simultaneously. By virtue of this approach, we can calculate the necessary input vectors to program entire columns of nodes in parallel by simultaneously nullifying the power in one output of each node via optoelectronic feedback onto adjustable phase shifters or couplers. This parallel nullification approach is fault-tolerant to fabrication errors, requiring no prior knowledge or calibration of the node parameters, and can reduce the programming time by a factor of order N to being proportional to the optical depth (or number of node columns in the device). As a demonstration, we simulate our programming protocol on a feedforward optical neural network model trained to classify handwritten digit images from the MNIST dataset with up to 98% validation accuracy.

    GitHub Repository Notebooks
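    A deliberately simplified sketch of the electro-optic activation concept from [1]: a fraction `alpha` of the input power is tapped off and photodetected, and the resulting electrical signal drives an intensity modulator acting on the remaining light. The function and its parameters are illustrative, not the paper's exact transfer function or the neurophox API.

```python
import numpy as np

def eo_activation(z, alpha=0.1, g=np.pi, phi_b=np.pi):
    """Toy electro-optic activation: tap, detect, and self-modulate."""
    detected = alpha * np.abs(z) ** 2       # tapped and photodetected power
    phase = g * detected + phi_b            # electrical drive of the modulator
    # remaining light, intensity-modulated by its own detected power
    return np.sqrt(1 - alpha) * z * np.cos(phase / 2)

z = np.linspace(0.0, 2.0, 5)
print(eo_activation(z))                     # nonlinear response, zero at z = 0
```

    The gain `g` and bias phase `phi_b` are the electrically reprogrammable knobs: changing them reshapes the nonlinearity without touching the optics, which is the reconfigurability described in the paper.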

    FDFD.jl

    This is a pure Julia package for solving Maxwell’s equations with the finite difference frequency domain (FDFD) method, with support for dynamic modulation and eigenmode analysis.

    GitHub Repository
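    For readers unfamiliar with FDFD, here is a concept sketch in Python rather than Julia (not the FDFD.jl API): assemble the 1D scalar Helmholtz operator on a uniform grid and solve for the field of a point source as a single sparse linear system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, dx = 200, 0.05
k0 = 2 * np.pi / 1.55                       # free-space wavenumber at 1.55 um
eps = np.ones(n)
eps[80:120] = 12.0                          # a high-index slab in the domain

lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
A = lap + sp.diags(k0**2 * eps)             # 1D scalar Helmholtz operator
b = np.zeros(n)
b[10] = 1.0 / dx                            # point source near the left edge

field = spla.spsolve(A.tocsc(), b)          # one direct sparse solve
print(field.shape)                          # (200,)
```

    A production FDFD solver adds absorbing boundaries (PMLs) and the full vector operators; the "assemble one sparse matrix, solve once per frequency" structure is the same.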

    angler

    Python-based library for simulating and optimizing linear and nonlinear optical devices, as demonstrated in our paper [1]. The underlying algorithms are the finite difference frequency domain (FDFD) method and the adjoint variable method (AVM). The FDFD implementation is based on fdfdpy.

    1. Adjoint Method and Inverse Design for Nonlinear Nanophotonic Devices
      Tyler W. Hughes*, Momchil Minkov*, Ian A. D. Williamson, Shanhui Fan
      ACS Photonics, vol. 5, num. 12, pp. 4781-4787

      DOI PDF PDF (supporting info)

      The development of inverse design, where computational optimization techniques are used to design devices based on certain specifications, has led to the discovery of many compact, nonintuitive structures with superior performance. Among various methods, large-scale, gradient-based optimization techniques have been one of the most important ways to design a structure containing a vast number of degrees of freedom. These techniques are made possible by the adjoint method, in which the gradient of an objective function with respect to all design degrees of freedom can be computed using only two full-field simulations. However, this approach has so far mostly been applied to linear photonic devices. Here, we present an extension of this method to modeling nonlinear devices in the frequency domain, with the nonlinear response directly included in the gradient computation. As illustrations, we use the method to devise compact photonic switches in a Kerr nonlinear material, in which low-power and high-power pulses are routed in different directions. Our technique may lead to the development of novel compact nonlinear photonic devices.

    GitHub Repository
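    The adjoint variable method mentioned above can be sketched on a generic linear system A(p) x = b, a toy stand-in rather than the angler implementation: one forward solve plus one adjoint solve gives the gradient of the objective with respect to all parameters at once, validated here against finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A0 = rng.standard_normal((n, n)) + n * np.eye(n)
dA = [rng.standard_normal((n, n)) for _ in range(3)]   # dA/dp_i, 3 parameters
b = rng.standard_normal(n)
p = np.array([0.2, -0.1, 0.3])

def system(q):
    return A0 + sum(qi * dAi for qi, dAi in zip(q, dA))

def objective(q):
    x = np.linalg.solve(system(q), b)
    return 0.5 * np.sum(x ** 2)

A = system(p)
x = np.linalg.solve(A, b)                  # one forward solve
lam = np.linalg.solve(A.T, x)              # one adjoint solve (dJ/dx = x here)
grad = np.array([-lam @ (dAi @ x) for dAi in dA])

# Validate against central finite differences
h = 1e-6
fd = np.array([(objective(p + h * e) - objective(p - h * e)) / (2 * h)
               for e in np.eye(3)])
print(np.max(np.abs(grad - fd)))           # small residual
```

    The cost is two solves regardless of how many parameters there are, which is what makes gradient-based inverse design with thousands of design pixels tractable; the nonlinear extension in [1] keeps this structure while including the Kerr response in the gradient.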

    fdfdpy

    Python-based library for solving Maxwell’s equations with the finite difference frequency domain (FDFD) method. Most of this code was later used as the basis for angler.

    GitHub Repository