In the middle of a product design sketching class back in the early ’90s, the instructor inspected what I thought was a portfolio-worthy new toaster I’d just rendered in ball-point pen and markers.
‘Nice job’, he said, ‘but I challenge you to scrunch it up and toss it into the bin’.
Noting my mortified expression, he explained that the creative process should be a constant flow, like a waterfall, not a stagnant pond. If you get tied to just one idea, you could miss out on a much better idea. He encouraged me to go again, to design another toaster, one with more innovation, more function, and more beauty. He finished with this: ‘When you think it’s ready, toss it in the bin and go again!’
Ideation is key to a properly conceived, fully mature design. It involves exploring dozens if not hundreds of options before arriving at a solution that is invariably more functional, more innovative and more desirable. No matter your talent, a great new design hardly ever flows from the first stroke of the pen. Instead, it generally has to be coaxed out of the paper with the dexterous brain and practised hand of one who has put in the sketch hours. But now all of that is changing, and it’s happening inconceivably fast.
Those hours, more often days, of creativity-stretching ideation have, with the mainstream introduction of AI-powered text-to-image tools, become the task of mere minutes. There are several of these ‘tools’ to choose from, with Midjourney, DALL-E, Vizcom and Stable Diffusion the current torchbearers for this ‘deep-learning model’ technology. Basically, all it takes to generate design ideas is a carefully composed text prompt – a highly efficient version of what we used to call a design brief.
Here’s how it works: you simply open the application, type in your prompt (e.g. ‘highly detailed photorealistic 3D rendering of a McLaren SUV with off-road tyres’), and in less time than it takes to squeeze an espresso from a compostable pod you’re gifted several (usually four) design variations that are uncannily accurate interpretations of your input text. It’s this ease, allied to the anticipated exponential improvement of the software, that is shaking industry perceptions.
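To make the ‘prompt as compressed design brief’ idea concrete, here is a minimal sketch in Python. The helper name and field structure are my own illustration, not any tool’s real API – it simply shows how a structured brief collapses into the kind of single prompt string these tools consume:

```python
# Hypothetical helper (illustrative only): folds a structured design
# brief into one comma-separated text-to-image prompt string.

def compose_prompt(subject: str, style: str, details: list[str]) -> str:
    """Join the brief's fields into a single prompt, style first."""
    return ", ".join([style, subject] + details)

prompt = compose_prompt(
    subject="McLaren SUV with off-road tyres",
    style="highly detailed photorealistic 3D rendering",
    details=["studio lighting", "3/4 front view"],
)
print(prompt)
# highly detailed photorealistic 3D rendering, McLaren SUV with off-road tyres, studio lighting, 3/4 front view
```

Changing one field – the style, say, or a single detail – and regenerating is the text-era equivalent of tossing the sketch in the bin and going again.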
Predictably, the fallout has been divisive, with equally vocal critics and supporters. The meme-driven Instagram account @thelonelycardesigner often highlights the plagiaristic aspect of AI art in general. Essentially, the application’s model, trained on the billions of images already in cyberspace, sifts and seamlessly blends appropriate visual elements and spits out convincing new combinations, each generation adding to the source reference library. With over a million officially paid-up Midjourney users, and millions more free trialists, each generating multiple artworks every day, that library is growing at an alarming rate. It’s easy to see how this could be deemed a major contributing factor in the slow death of human creativity, all while rendering certain traditional design skills pretty much redundant.
However, the argument in favour of AI is just as compelling: these applications are merely another tool in the designer’s arsenal. Just as programs such as Photoshop and Procreate have all but replaced traditional air-brushing and marker rendering, so text-to-image AI will likely replace the ideation sketch pad. Thankfully though, human creativity has not been completely usurped. Once an initial ‘control image’ has been generated, designers must still envisage where and how to refine the design. The skill lies first in knowing how to tweak the overall prompt, then in refining specific areas of the concept.
The exciting (for some) and scary (for others) part is that these tools are still in their infancy. It seems inevitable that dedicated car-design sub-apps, with the legislative requirements, engineering realities and basic tenets of good design (proportion, etc.) already built in, will soon become a mainstay of every OEM studio.
And the next stage could be even more liberating. Imagine if a tooling-ready Class A surface model of your AI-generated concept sketch could be constructed – and just as easily tweaked – for you in a matter of minutes? Alias Autostudio and Catia modellers, consider yourselves warned.
While physical clay modellers essentially continue to practise a skill that has not changed since the inception of car design as a discipline, 3D software has undergone a revolution in recent decades. From the early days of painstakingly adjusting adjacent face tangency by manually manipulating the X, Y and Z coordinates of individual control vertices, through the onset of NURBS and subdivision modelling, to the VR headset-enabled ‘virtual clay’ of, for example, Gravity Sketch, today’s tools for creating the three-dimensional data required for realistic real-time visualisation of increasingly complex concept cars have become vastly more powerful.
The current and growing trend of incorporating variably scaled geometry into body panels – the intricately patterned grille of a Chery Omoda 5, for example – would never have been possible without the advent of procedural parametric design functions. It’s precisely these types of intensely computational, data-driven surface features that computers excel at. All of which points to a looming explosion in workflow productivity. Happily, the role of the car designer will continue for a long while yet, but designers will likely have to become less like artists and more like poets.
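To give a flavour of what ‘procedural parametric design’ means in practice, here is a toy Python sketch – my own illustration, nothing like production CAD code – that lays out a grille-like grid whose element size is driven by a simple rule, fading from full size at the panel’s centre to small at its edges:

```python
def parametric_grille(cols: int, rows: int,
                      min_scale: float = 0.3, max_scale: float = 1.0):
    """Return (col, row, scale) triples for a grid of grille elements.

    Element scale fades from max_scale at the centre column to
    min_scale at the edges -- a toy version of 'variably scaled geometry'.
    """
    half = max((cols - 1) / 2, 1e-9)  # guard against a single-column grid
    cells = []
    for row in range(rows):
        for col in range(cols):
            t = abs(col - (cols - 1) / 2) / half  # 0 at centre, 1 at edge
            scale = max_scale - (max_scale - min_scale) * t
            cells.append((col, row, round(scale, 3)))
    return cells

# A 5x2 grid: full-size elements in the centre column, smaller towards the edges
for col, row, scale in parametric_grille(cols=5, rows=2):
    print(col, row, scale)
```

The point is that the designer specifies the rule, not each element: change one parameter and hundreds of features regenerate consistently, which is exactly the kind of tedium computers absorb.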
It’s an interesting time for car design, especially as exterior style has become a progressively more important brand differentiator given that powertrain parity is being legislatively mandated. Will all cars look the same in the future? Current evidence suggests that will depend entirely on a designer’s ability to describe in words what they can see in their imagination. But then, car designers have always had to explain – using words – the rationale behind their concepts, so perhaps not much has changed after all.