Your Browsing History May Cost You

We audit online platforms to determine how differential pricing affects users. By generating distinct user profiles, we observe how some customers are charged higher prices than others because of their browsing histories.
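As a minimal sketch of the comparison step, assuming price quotes have already been collected for one product under each generated profile (the profile names, prices, and function name below are illustrative, not the actual audit pipeline):

```python
def price_spread(quotes):
    """Given {profile_name: quoted_price} for one product, report the
    absolute and relative gap between the highest and lowest quote.
    A positive gap is evidence of differential pricing across profiles."""
    lo, hi = min(quotes.values()), max(quotes.values())
    return hi - lo, (hi - lo) / lo

# Hypothetical quotes observed under two simulated browsing histories.
gap, rel = price_spread({"fresh_profile": 99.99, "tracked_profile": 109.99})
```

In a real audit, each quote would come from browsing the platform with a separately built history; the hard part the project addresses is generating those profiles and collecting the quotes at scale.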

Layer-wise Orthogonalization for Training rObust enSembles (LOTOS)

We investigate the effect of Lipschitz continuity on the transferability of adversarial examples. We observe that while decreasing the Lipschitz constant makes each model in the ensemble individually more robust, it increases the transferability rate among them, which in turn hinders overall ensemble robustness. To counter this adverse effect, we introduce a novel training paradigm, Layer-wise Orthogonalization for Training rObust enSembles (LOTOS), which orthogonalizes the corresponding affine layers of the models with respect to one another.
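The abstract does not spell out the orthogonalization objective, so the following is only an illustrative sketch: one natural way to make corresponding layers of two ensemble members mutually orthogonal is to penalize the largest singular value of the product of their weight matrices (the names `top_singular_value` and `cross_layer_penalty` are hypothetical, not the paper's API):

```python
import numpy as np

def top_singular_value(M, iters=50, seed=0):
    """Estimate the largest singular value of M by power iteration,
    without computing a full SVD."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = M @ v
        u /= np.linalg.norm(u)
        v = M.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (M @ v))

def cross_layer_penalty(W_a, W_b):
    """Hypothetical LOTOS-style penalty between corresponding layers of
    two ensemble members: the top singular value of W_a @ W_b.T, which
    shrinks toward zero as the layers' row spaces become mutually
    orthogonal."""
    return top_singular_value(W_a @ W_b.T)
```

Minimizing such a penalty alongside the task loss would push corresponding layers of different models apart, so that perturbations amplified by one model are attenuated by the others; the actual LOTOS objective may differ in its details.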

Spectrum Extraction and Clipping for Implicitly Linear Layers

Controlling the largest singular value of linear layers, which is the same as the largest singular value of their Jacobians, not only contributes to the generalization of the model but also makes the model more robust to adversarial perturbations. Convolutional layers are a major class of implicitly linear layers that are used in many models across various domains. We present efficient algorithms for computing the spectrum of these layers and clipping their spectral norm.
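The efficient algorithms themselves are not reproduced in this abstract; the sketch below only shows the baseline operation they accelerate, clipping a layer's singular values at a threshold via a full SVD, which is exactly what becomes expensive when the layer is a convolution whose Jacobian is too large to materialize (the function name is illustrative):

```python
import numpy as np

def clip_spectral_norm(W, c):
    """Clip every singular value of W at c, so the layer's Lipschitz
    constant (its largest singular value) is at most c. This baseline
    uses a full SVD; the point of efficient spectrum-extraction methods
    is to achieve the same effect for convolutional layers without
    materializing their Jacobian."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, c)) @ Vt
```

Note that clipping is a projection: if the spectral norm is already below the threshold, the weights are returned unchanged.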

Quadratic Surveying: Getting Truthful Responses from Online Surveys

Can we redesign surveys to elicit true preferences? If we could, corporations, policymakers, and academics who rely on surveys could draw better-informed conclusions. We propose Quadratically Constrained Surveys, a survey design aimed at eliciting truthful responses online.
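The abstract does not describe the mechanism, so the sketch below is only an assumption: that the design builds on quadratic voting, where expressing v units of preference on a question costs v² credits from a fixed budget. Under that cost, the budget-exhausting response keeps votes proportional to true preference intensities, which is the standard truthfulness intuition (the names `quadratic_cost` and `allocate` are hypothetical):

```python
import math

def quadratic_cost(votes):
    """Credits spent on a response vector: v_i^2 per question."""
    return sum(v * v for v in votes)

def allocate(intensities, budget):
    """Spend the whole budget with votes proportional to the respondent's
    true intensities. Because marginal cost 2v grows linearly, the
    optimal allocation preserves the ratios between intensities, so
    relative preferences are revealed truthfully."""
    norm = math.sqrt(sum(x * x for x in intensities))
    scale = math.sqrt(budget) / norm
    return [scale * x for x in intensities]
```

For example, a respondent who values two items in a 3:4 ratio and holds 100 credits would report votes in that same 3:4 ratio, spending the full budget.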