The fusion of drawing and photography
MagicBrush is a drawing program that works with “stuff” instead of pixels and pigments. Instead of brushes that simulate acrylic or watercolor, the app’s brushes simulate real objects as you draw.
The user draws only the outline of an object; MagicBrush handles the texture, lighting, and content.
For this, MagicBrush only needs enough photos of the object to learn what it looks like. Whether it’s an existing dataset, a video recording, or the photos of your pet you’ve been collecting for years – once the brush is trained, you can start right away.
How it works
MagicBrush uses the image-synthesis algorithm pix2pixHD to generate these objects. New brush shapes can be created the same way: all it takes is a dataset of photos of the object, and a new MagicBrush can be trained.
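With the public pix2pixHD reference implementation, training a new brush could look roughly like this. The experiment name and dataset path are placeholders; `--label_nc 0` and `--no_instance` are the repository’s documented settings for plain-image inputs (such as edge drawings) rather than semantic label maps – check the repo README for the exact dataset layout it expects:

```shell
# Hypothetical example: train a "nectarine" brush from paired
# edge drawings and photos prepared as described below.
python train.py --name nectarine_brush \
    --dataroot ./datasets/nectarines \
    --label_nc 0 --no_instance
```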
Before training, the photo material is analyzed for content and run through edge detection. This step is what later makes it possible to translate a drawing into a photo. For training, the original photos are paired with the edge-detected drawings so that the algorithm can learn the relationship between the two images. After the training phase, the algorithm can independently generate new images from line drawings. These drawings do not have to resemble the original object; the algorithm interprets them as best it can.
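The pairing step can be sketched as follows. This is a minimal stand-in for the Canny-style detectors typically used in practice: a simple gradient-magnitude edge map computed with NumPy, where `edge_map` and the threshold value are illustrative choices, not part of MagicBrush itself:

```python
import numpy as np

def edge_map(gray, threshold=0.2):
    """Approximate edge detection via gradient magnitude.

    `gray` is a 2-D float array in [0, 1]; returns a binary
    line drawing of the same shape.
    """
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8)

# Toy "photo": a bright square on a dark background.
photo = np.zeros((8, 8))
photo[2:6, 2:6] = 1.0

drawing = edge_map(photo)
# The (drawing, photo) pair is one training example from which the
# image-to-image translation network learns the relationship
# between line drawings and photos.
```

The resulting `drawing` is white only along the square’s contour, while the interior and background stay black – exactly the kind of outline a user would later draw by hand.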
There are no limits to creativity: what does a nectarine with a face look like, for example? And what would the finished piece look like as a charcoal drawing or a watercolor painting? All of this is just a stroke and a tap away.
MagicBrush is a proof of concept, not a working app yet. But the technologies and principles described above are tested and working. The processing power of mobile devices still limits the current algorithms’ ability to produce HD results, but this could be overcome with appropriate adaptation of the code and the necessary technical know-how.