For as long as computers have existed, using them for any practical purpose has required some technical skill. Training is often needed even for simple applications like Word and Excel. But what if you could simply tell the computer what you need, and your electronic companion would do the rest?
In fact, the recent wave of AI is doing exactly that. It can interpret natural human expression and translate it into code and back. Will this development change the importance of creative skills vis-à-vis programming? And can we trust a computer with so much creative power and agency?
Synergy Happens When There’s Trust
While solutions such as no-code visual programming have made building new software simpler, some amount of technical savvy was still required. New algorithms, however, which can translate natural expression into code, have changed the playing field in a more profound way. Here are some examples:
- A designer outlines a video game in simple English in a text editor – or verbally, since the AI engine can recognise speech. The AI engine translates that into code. Soon, as the designer continues to describe the playing rules, a game similar to Space Invaders emerges.
- A songwriter uploads a chord progression to a musical algorithm which arranges the tune in seconds. The songwriter tweaks the melody further. After some improvisation with the computer, a song which could easily blend into many Spotify playlists is recorded.
- A painter doodles a sketch and lets the algorithm extrapolate it into a complete painting. He adds more details to refine the artwork. If he wants, he can also ask the machine to animate the drawing and turn it into a cartoon.
Trust Brings Out Latent Potential
Look at these scenarios as the start of a sharply accelerating trend, and it becomes hard not to be excited about the power AI puts in the hands of creators, entrepreneurs and visionaries. As this technology matures, writers and designers, painters and musicians, philosophers and lawyers will foreseeably become as important as the engineers who build these tools.
Lack of Trust Seeds Doubts
Over time, we can expect AI to interpret ideas at ever higher levels of abstraction, and to offer innovative solutions and tools – some perhaps beyond human comprehension. Against this backdrop, trust becomes a paramount concern. Is the app built by your AI free from backdoors? Is the strategy designed by the algorithmic analyst really good for your organisation in the long run? How about the software that built that app, which coincidentally was also developed by an AI?
Only Humans Can Give Trust
It is fair to say that we do not have all the answers. Perhaps we will keep humans in the loop for as long as we can. Possibly, we will use AI auditors to check their developer counterparts for irregularities. Whatever the future solutions, it is clear that for this new generation of tools to reach its potential, we will need to trust them the way we trust any other complex technology – trusting that the bridge will not fall when we cross it, or that the car will stop when we step on the brakes. We are entering a new age of human-computer interaction – and trust is the fundamental human quality that will determine its course.
First published in the IT Society Magazine from the Singapore Computer Society