Figure 1: Example outputs from our model.

Hacking Generative Models with Differentiable Network Bending

Giacomo Aldegheri¹ ✉️, Alina Rogalska² †, Ahmed Youssef³ †, Eugenia Iofinova⁴ †
¹University of Amsterdam  ²Independent Researcher  ³University of Cincinnati  ⁴IST Austria  († Indicates equal contribution)

Abstract

In this work, we propose a method to 'hack' generative models, pushing their outputs away from the original training distribution and towards a new objective. We inject a small-scale trainable module between intermediate layers of the model and train it for a small number of iterations, keeping the rest of the network frozen. The resulting output images display an uncanny quality, arising from the tension between the original and the new objective, which can be exploited for artistic purposes.
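As a minimal illustration of the setup the abstract describes, the following PyTorch sketch injects a small trainable module between two frozen halves of a generator and optimizes only that module toward a new objective. This is a sketch under stated assumptions, not the paper's implementation: the split into `front`/`back`, the `BendingModule` architecture, and the `new_objective`/`sample_latents` callables are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class BendingModule(nn.Module):
    """Small trainable module injected between two frozen halves of a generator.
    (Hypothetical architecture; the paper's module may differ.)"""
    def __init__(self, channels: int):
        super().__init__()
        # A lightweight residual 1x1 convolution keeps the intervention small-scale.
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)  # start as an identity map (zero residual)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return x + self.conv(x)

def hack_generator(front: nn.Module, back: nn.Module, channels: int,
                   new_objective, sample_latents, steps: int = 200):
    """Train only the injected module for a small number of iterations,
    keeping the original generator frozen on both sides of the injection point."""
    for p in front.parameters():
        p.requires_grad_(False)
    for p in back.parameters():
        p.requires_grad_(False)

    module = BendingModule(channels)
    opt = torch.optim.Adam(module.parameters(), lr=1e-3)

    for _ in range(steps):
        z = sample_latents()        # latent batch, e.g. torch.randn(8, 512)
        feats = front(z)            # frozen features up to the injection layer
        out = back(module(feats))   # bent features through the frozen tail
        loss = new_objective(out)   # push outputs toward the new objective
        opt.zero_grad()
        loss.backward()             # gradients flow only into the injected module
        opt.step()
    return module
```

Because the surrounding network stays frozen and only the small residual module is trained briefly, the outputs remain anchored to the original distribution while being pulled toward the new objective, which is the tension the abstract describes.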