Artists, designers and engineers have always tried to push their work further with every generation. The topic of what the next big movement might look like has been generating some discussion recently, and it’s clear that new design technology is likely to play a crucial role in shaping it. As a developer who's interested in design, I love seeing both disciplines come together to solve a problem in a creative and effective way. At Toaster, I've been given the chance to explore how technology can benefit the design process, helping designers with their work in surprising and inventive ways.
In this article, we’ll walk through some experiments that apply machine learning, in the form of a simple tool, to let designers explore possibilities more rapidly. First, check out a few of the experiments to get a sense of the tool; then I’ll explain what they do, how they work, and why.
Generative design tools have been around in many forms for a while, but newer generations make more use of machine learning rather than just carefully handcrafted logic. With these tools we can now explore both creative and efficient solutions to complex problems, replacing work that used to be far more difficult and time-consuming. Applications of this improved process are popping up in the worlds of car design, bridge design and even antenna design. Generative tools are getting noticed, but we’ve only scratched the surface of the possibilities.
The experiments I’ll show follow an approach based on evolutionary algorithms, a classic technique from machine learning that is also having a bit of a resurgence. The core principles behind it are similar to what you may remember from biology class. In simple terms: select two objects, combine features from both to produce a new object, and repeat. Provided the selections are made wisely, the new object is more often than not an improvement on the two objects used to create it.
In the case of wild animals, this process is driven by natural factors relating to survival and natural selection (shout out to Darwin!). Using it as an algorithm, we can decide the driving factors for ourselves.
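To make the mechanics concrete, here’s a minimal sketch of that select–combine–repeat step in Python. The objects and their feature names are hypothetical, chosen only to illustrate the principle; the experiments themselves aren’t built exactly this way.

```python
import random

def crossover(parent_a: dict, parent_b: dict) -> dict:
    """Build a child by taking each feature from one parent or the other at random."""
    return {key: random.choice([parent_a[key], parent_b[key]]) for key in parent_a}

# Two hypothetical "objects" sharing the same named features.
a = {"size": 10, "colour": "red"}
b = {"size": 42, "colour": "blue"}
child = crossover(a, b)  # each feature inherited from either a or b
```

The decision of *which* children survive to become parents is the interesting part, and it’s exactly where the driving factors come in.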
In particular, this approach lends itself well to following the guidance of some sharp-eyed creative folk, which we just so happen to have a lot of here at Toaster.
Everything in its right place
In the scale experiment, the objective is to train the system to generate images where one circle is much larger than the other. We can train it towards this objective by consistently selecting options that feature the underlying trait. You can also try training with a different goal: for example, try to ensure the left side (or the right side) is always the smaller one. You may even find that you naturally tend towards this pattern anyway out of personal preference.
To use this scale principle in a design context, try the headings experiment, where the goal is to train the system to produce images in which the heading is larger than the subheading.
Taking this further, the next experiment attempts to evolve a simple landscape composition, which is a bit more involved. There are four components used here, all supplied by a designer: a house, a tree, a sun (with clouds) and some hills. Let’s assume our objective is to train the system to arrange them in a realistic and aesthetically pleasing composition. The variables that manipulate each component include its position, scale and layer order. To provide a starting point, the initial generation of renderings has these variables set randomly.
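As a rough sketch, one randomly initialised candidate could look like the code below. The component names come straight from the experiment, but the coordinate and scale ranges are illustrative guesses, not the tool’s actual values.

```python
import random

COMPONENTS = ["house", "tree", "sun", "hills"]

def random_landscape(width=800, height=600):
    """One initial candidate: every component gets a random position,
    scale and layer order. Ranges here are illustrative guesses."""
    layers = random.sample(range(len(COMPONENTS)), len(COMPONENTS))
    return {
        name: {
            "x": random.uniform(0, width),
            "y": random.uniform(0, height),
            "scale": random.uniform(0.2, 3.0),
            "layer": layer,  # unique draw order per component
        }
        for name, layer in zip(COMPONENTS, layers)
    }

landscape = random_landscape()
```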
Immediately, we can see that most random generations have an obvious problem: they’re not very realistic. The house is floating in the sky, say, or the tree is far too small or large. Occasionally though, completely by chance, some of the landscapes are acceptable (even if not yet particularly pleasing). This fact is the foundation that we’ll fine-tune and build on.
Next, generated landscapes are automatically combined to produce new ones with potential improvements. The algorithm selects pairs of landscapes from the pool of your selections and mixes different visual elements from each at random. For example, the position of the house may come from one parent and the position of the sun from the other. Finally, a brand new set of landscapes for your consideration appears on screen.
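One simple way to implement that mixing step is a component-level crossover: each component’s settings are copied wholly from one randomly chosen parent. This is a sketch of one possible scheme, not the experiment’s actual code; the landscape dictionaries are assumed to share the same component keys.

```python
import random

def breed(parent_a: dict, parent_b: dict) -> dict:
    """Mix two landscapes: each component's settings (position, scale, etc.)
    come wholly from one randomly chosen parent."""
    return {name: dict(random.choice([parent_a, parent_b])[name])
            for name in parent_a}

# Hypothetical parents with just two components, for illustration.
parent_a = {"house": {"x": 100, "y": 400}, "sun": {"x": 600, "y": 80}}
parent_b = {"house": {"x": 300, "y": 450}, "sun": {"x": 700, "y": 60}}
child = breed(parent_a, parent_b)
```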
As the trainer, you select your favourite option to be added to the current population, ready for the next generation/improvement. Any remaining ‘unchosen’ landscapes are then discarded and will never be selected as parents. The process repeats ad infinitum.
Since only the best landscapes get combined to create the next generation, the results tend to gradually improve over a number of generations. This is the beauty of self-directed automated art. The process can be stopped once you decide the results have become consistently pleasing and realistic enough. At this point, we can consider the system to be adequately trained. Note that experiments can be reset by refreshing the page, so you can easily explore a whole new set of possibilities and results.
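Putting selection and breeding together, the whole trainer-in-the-loop cycle can be sketched like this. The `pick_favourite` callback stands in for the human trainer, and `house_y` is a hypothetical trait used only to make the example runnable.

```python
import random

def evolve(population, pick_favourite, breed, generations=20, brood_size=6):
    """Human-in-the-loop evolution: each round, breed a brood of candidates
    from random pairs in the population and keep only the trainer's pick;
    the unchosen siblings are simply discarded."""
    for _ in range(generations):
        brood = [breed(*random.sample(population, 2)) for _ in range(brood_size)]
        population.append(pick_favourite(brood))
    return population

def breed(a, b):
    # Uniform crossover: each variable comes from one parent at random.
    return {k: random.choice([a[k], b[k]]) for k in a}

# The lambda stands in for the human trainer, always preferring the
# candidate whose (hypothetical) house sits lowest in the frame.
pool = [{"house_y": random.uniform(0, 600)} for _ in range(6)]
pool = evolve(pool, lambda brood: max(brood, key=lambda c: c["house_y"]), breed)
```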
Measurable real-world applications
Surprisingly, evolutionary algorithms (EAs) like this can find solutions to a huge range of difficult problems simply by changing the inputs, constraints or objectives. The kicker is that progress towards the objective must be measurable. That’s no problem when evolving, say, the tallest giraffe (you’ll just need a super long ruler!), but measurement can be impossible to define for abstract, subjective concepts like art, which rely more on personal understanding. This type of task is where our personal judgement remains a vital part of the process, guiding the end result towards an outcome that we prefer in some way.
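When the objective is measurable, the human picker can be swapped for a fitness function and the loop runs unattended. Here’s a toy sketch using the giraffe example; every name and number is illustrative, and elitism (survivors carry over) guarantees the best candidate never gets worse.

```python
import random

def auto_evolve(population, fitness, breed, mutate, generations=100):
    """Fully automated evolution: keep the fittest half each generation
    and refill the population by breeding and mutating survivors."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        children = [
            mutate(breed(*random.sample(survivors, 2)))
            for _ in range(len(population) - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)

# Toy giraffe with a single measurable trait: height in metres.
tallest = auto_evolve(
    population=[{"height": random.uniform(4, 5)} for _ in range(10)],
    fitness=lambda g: g["height"],
    breed=lambda a, b: {"height": (a["height"] + b["height"]) / 2},
    mutate=lambda g: {"height": g["height"] + random.gauss(0, 0.1)},
)
```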
Exploring the surreal side of design
All of the experiments here use the same algorithm; the differences lie mostly in the initial configuration: assets, variables, constraints and the objective the trainer makes judgments against. Altering objectives is a great way to demonstrate how evolutionary algorithms can naturally adapt to different tasks with little or no change.
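Conceptually, each experiment then boils down to a small configuration handed to the same shared loop. The dictionaries below are purely illustrative, not the tool’s real format; the constraint ranges are guesses.

```python
# Hypothetical per-experiment configurations; the evolutionary loop
# itself is shared and untouched.
SCALE_EXPERIMENT = {
    "assets": ["left_circle", "right_circle"],
    "variables": ["scale"],
    "constraints": {"scale": (0.1, 3.0)},
    "objective": "make one circle much larger than the other",
}

LANDSCAPE_EXPERIMENT = {
    "assets": ["house", "tree", "sun", "hills"],
    "variables": ["x", "y", "scale", "layer"],
    "constraints": {"scale": (0.2, 3.0)},
    "objective": "arrange a realistic, pleasing composition",
}
```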
So let’s change the objective we think about when training the landscape experiment. This is where we can start getting more creative and experimental, upping the ante of fun. What if we’d actually like to see something surreal? You could intentionally train the house to always be flying in the sky, like something from The Wizard of Oz. How about scenes with ‘tree houses’ in them, where the house always sits atop the tree?
We can do that with no code changes necessary; we just need to let the system learn our unconventional preference. Getting weird with generative design is that simple.
Closing thoughts and further exploration
Over the course of this article, I’ve given you an overview of how you can use machine learning to help solve design problems, and how it could become a powerful tool in the designer’s arsenal.
If you’d like to dive deeper, my next article will take this idea further. I’ll be discussing the book cover experiment in detail and I’ll talk about some of the challenges involved in rendering, genetic representations, training approaches and more. We’ll draw some provocative conclusions, discuss limitations and imagine how it could positively influence future work.