Disclaimer: This article was written by a human, not ChatGPT. We reserve the right to write with typos and grammatical errors.
The latter half of 2022 has seen a proliferation of publicly available generative AI models. From image generation with DALL-E 2, Midjourney, or Stable Diffusion, to text generation with ChatGPT, we seem ever closer to yet another important milestone in recent history where computers show skills we previously ascribed only to (competent) human beings.
There is no shortage of reporting on the topic either, with titles like “Is DALL-E going to take my job?” or “Is ChatGPT the end of Google?” dominating the scene. Even The Economist has reported on the topic, going so far as to use Midjourney to produce a front cover.
And while it would indeed be fun to beat the dead horse some more by philosophising about how fast I myself will be out of a job, surely replaced by the next generation of AI running a “guess the best next startup decision artificial neural network” (GTBNSDANN?) with trillions of parameters, I thought I would do something slightly more productive: use the technology to actually create something.
At Ur Solutions we boast excellence in the field of digital product development. We mostly create web applications, websites, and native applications, and we do everything from ideation to design to implementation to release, hosting, and service. I think it is fair to say it has been proven beyond a shadow of a doubt that generative AI can assist businesses like ours with tasks such as generating SEO content (though you have to beware of Google busting you), writing headings and copy, generating illustration images, and so on. But can generative AI simplify entire creative processes within our domain of website, web app, and app development?
Or to put it differently: powered by the new suite of AIs, could I create something better or quicker than I previously could have? This is the topic I will explore here, and in a series of follow-up articles.
The task I have set out to solve in this article is simple: create a simple portfolio website for a fake architect, let’s call him Tormod Haugland. The tools I will use for the task are:
- Ideation of high-level design: Midjourney.
- Design: Figma and Photoshop.
- Implementation: Next.js with TailwindCSS.
- Text generation: ChatGPT.
- Hosting: Vercel.
- CMS: Sanity.
Let’s get started.
Step 1: Ideation
Midjourney is a Discord-only service: you enter text commands telling Midjourney what types of images you want it to create, and it converts each text prompt into the images that best match it.
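Prompts are entered with the /imagine command directly in the Discord chat; optional flags such as --ar control the aspect ratio of the output. A minimal example (the placeholder is yours to fill in):

```text
/imagine prompt: <your description here> --ar 3:2
```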
E.g. “cyberpunk cityscape”:
Or “open world rpg adventurer game”:
Anything you can describe with text, Midjourney can convert into images. DALL-E and Stable Diffusion, mentioned earlier, are analogous services, with slight variations in their underlying algorithms and training data.
Midjourney always gives you four images as a baseline, from which you can select individual ones for further variations or upscaling.
So, what if we ask it to create a website? For instance, an online web shop for ecological home goods?
Or a website for, say, a children’s game?
At first glance, this looks great. Midjourney clearly has a good eye for website composition. All the websites have elements, images, and colours not unlike what one might see if the “real thing” were built by professionals. The rendered text, including language, letters, and font, does not really map onto any real equivalents, however, so this is by no means “ready to go”. But we get a starting point.
Returning to the task at hand, I asked Midjourney to render me a portfolio website for an architect. These were the results:
Compared to what my intrinsic creative design skills could produce, these are great. Websites 2, 3, and 4 are all close to what I envisioned before getting started. I found the second design the most appealing to move forward with, due to its simplicity and the fact that it already contains a call-to-action button.
Step 2: Design (10 minutes in)
The next step was to convert the idea into an actual design that could be implemented. In theory, I could have skipped this step and just freestyled the design into existence while implementing. But knowing my ad hoc artistic abilities all too well, I found it more practical to recreate the design in Figma first.
After copying the image into Figma, I created an adjacent frame with the dimensions of a MacBook Pro 16 (which in Figma is 1728x1117 pixels) and started recreating the design to the best of my ability.
Selecting, scaling, and colouring the fonts correctly was the most challenging part of this endeavour for me. After some experimentation I ended up with Raleway for the headings and Nunito for the body text. I also created a nine-step colour palette for the “brand” colour, with the pivotal colour being the brown-grey #706868.
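As an aside, design tokens like these map almost directly onto a Tailwind theme extension once implementation begins. A sketch of what that could look like, abbreviated to five of the nine steps; every shade except the pivotal #706868 is an illustrative placeholder:

```js
// tailwind.config.js -- fonts and brand palette from the Figma design.
// All shades except brand-500 (#706868) are illustrative placeholders,
// and the nine-step palette is abbreviated to five steps here.
module.exports = {
  content: ['./pages/**/*.{js,ts,jsx,tsx}', './components/**/*.{js,ts,jsx,tsx}'],
  theme: {
    extend: {
      fontFamily: {
        heading: ['Raleway', 'sans-serif'], // headings
        body: ['Nunito', 'sans-serif'], // regular text
      },
      colors: {
        brand: {
          100: '#e8e6e6',
          300: '#a9a2a2',
          500: '#706868', // the pivotal brown-grey
          700: '#4a4444',
          900: '#262222',
        },
      },
    },
  },
  plugins: [],
}
```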
This was the result:
The images are mostly taken from Pexels, and the names are from my imagination. Compared to the original, the design implementation has lost some of its distinctness: the text looks blander, and the portfolio images lack the same great colouring. But fake-me can just hire a better photographer and/or do some work in Lightroom at some point in the future.
The summary text below the title name is written by ChatGPT, prompted with: “Create an ingress text for an architecture portfolio website, 350 words, for an architect ‘Tormod Haugland’ who was born in 1991”. I took the best of the 350 words and composed them into the result. (Full disclosure: I was planning on asking for a text of 350 characters. I guess I should delegate prompting ChatGPT to ChatGPT.)
Next, I found the top-left image to lack purpose. I tried putting a menu there, but gave up on that prospect, having no intention of implementing additional linked pages anyway. However, I decided to scale the image down a bit vertically to distinguish it from the portfolio entries:
Good enough for me! On to implementation.
Step 3: Implementation (1 hour in)
Nothing particularly interesting happened at the implementation stage. Just regular programming using Next.js.
Not being particularly familiar with Tailwind CSS, and having forgotten what it’s like to set up Sanity in Next.js, documentation lookups took about 50% of the time spent here. All in all, I probably spent around 90 minutes on the actual implementation.
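Most of the Sanity work boils down to defining a content schema for the portfolio entries. A minimal sketch of what such a schema could look like; the document type and field names are my hypothetical reconstruction, not necessarily those of the actual project:

```js
// schemas/project.js -- a hypothetical Sanity schema for portfolio entries
export default {
  name: 'project',
  title: 'Project',
  type: 'document',
  fields: [
    { name: 'title', title: 'Title', type: 'string' },
    { name: 'location', title: 'Location', type: 'string' },
    { name: 'image', title: 'Image', type: 'image' },
    { name: 'description', title: 'Description', type: 'text' },
  ],
}
```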
The implementation ended up close enough to the design. Some more creative adjustments were introduced to handle responsiveness and other issues related to scaling. Very large screens are not accommodated appropriately just yet, however.
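To give an idea of the kind of adjustments involved: with Tailwind, responsiveness largely boils down to breakpoint prefixes on utility classes. A rough, hypothetical sketch of the portfolio grid, reusing the fonts and colours sketched in Step 2:

```tsx
// components/PortfolioGrid.tsx -- a hypothetical sketch of the responsive grid
type Project = {
  _id: string
  title: string
  imageUrl: string
}

export default function PortfolioGrid({ projects }: { projects: Project[] }) {
  return (
    // One column on mobile, two from the md breakpoint, three from lg
    <div className="grid grid-cols-1 gap-8 md:grid-cols-2 lg:grid-cols-3">
      {projects.map((project) => (
        <figure key={project._id}>
          <img
            src={project.imageUrl}
            alt={project.title}
            className="w-full object-cover"
          />
          <figcaption className="mt-2 font-heading text-brand-700">
            {project.title}
          </figcaption>
        </figure>
      ))}
    </div>
  )
}
```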
The source code can be found here.
Step 4: Hosting with Vercel, and Sanity setup (3 hours in)
Hosting a Next.js website with Vercel is criminally easy. After hooking up the GitHub repository to Vercel through the Vercel dashboard, it is just a matter of pressing a button.
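If you prefer the terminal over the dashboard, the Vercel CLI gets you there too. A quick sketch:

```bash
npm i -g vercel   # install the Vercel CLI
vercel link       # connect the local repository to a Vercel project
vercel --prod     # build and deploy to production
```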
Using Sanity’s Vercel integration to set up a new Sanity project, all required environment variables are added to the build pipeline automatically.
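In the application code, those variables can then be read when configuring the Sanity client. A sketch, assuming the common next-sanity naming convention; the integration’s exact variable names may differ:

```ts
// lib/sanity.ts -- Sanity client configured from injected environment
// variables. Variable names follow the common next-sanity convention
// and may differ from what the integration actually adds.
import { createClient } from '@sanity/client'

export const sanityClient = createClient({
  projectId: process.env.NEXT_PUBLIC_SANITY_PROJECT_ID!,
  dataset: process.env.NEXT_PUBLIC_SANITY_DATASET!,
  apiVersion: '2022-12-01', // date-based API versioning
  useCdn: true, // serve cached content from Sanity's CDN
})

// Fetch all portfolio entries with a GROQ query; the "project" document
// type is the hypothetical schema sketched in Step 3.
export const getProjects = () =>
  sanityClient.fetch(`*[_type == "project"]{ _id, title, "imageUrl": image.asset->url }`)
```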
Final touches
My OCD really needs me to remove default favicons. Also, any good fake solopreneur architect needs a fantastic logo. Why not ask Midjourney to create it?
This looks pretty cool. The text doesn’t make sense, but is easily removed using Photoshop or GIMP.
Results and conclusion (4 hours in)
The final resulting website can be viewed here.
The task we set out to solve was to create something “better or quicker than previously possible” with our new set of tools. In this respect, we have undoubtedly succeeded.
Is this a world-class portfolio website for a solopreneur architect? No.
Does it need more content and detail? Yes.
Would it convert actual clients for the owner? Perhaps. Likely not.
Is this better than many WordPress templates for similar types of websites? You bet.
Is this better than what I would’ve been able to produce without Midjourney and Chat GPT in the same span of time? Definitely.
In my opinion, this nicely summarises the current utility of generative AI. It is truly impressive how fast these pieces of software can produce content, be it images or text, of at least mediocre quality. They fit neatly into the tool belt of the modern developer, designer, advertiser, author, and artist. They are particularly powerful as assistants in ideation phases, and I already see us changing some of our internal processes to incorporate these tools.
However, they are no silver bullet taking over large parts of the creative process just yet. At least not for any creative process more complex than creating a few pieces of generic and mediocre+ content. At the end of the day, you need real human beings to alter, fix, improve, and implement the outputs of the algorithms.
Additionally, while ChatGPT has some level of “memory” of what you have asked it earlier, Midjourney does not. There are apparently techniques for approximating such behaviour, but I found it almost impossible to get Midjourney to coherently design other parts of the portfolio website for me.
Oh well, I guess I have to go back to the archaic ways of pestering an actual designer about that.
Bottom image by Midjourney: "Annoying CEO pestering his designer colleague about fixing his design, in the style of a manga, colourful --ar 3:2"