The chatter across the internet for the past few weeks has been about AI (artificial intelligence). From chatbots to the controversial rise of AI-generated art, the social, technological and economic implications have been debated endlessly. I’ve previously discussed the use of chatbots by companies in customer service, but the ones under discussion today make those look like cheap toys. AI has advanced to the point where even its creators aren’t aware of its full capabilities, and leaders in the tech industry are calling for a pause in its development. AI has a lot to offer humanity, but are the risks being sufficiently considered?

Where are we now with AI?

At the moment, the most powerful AI in the world is widely regarded to be GPT-4 from OpenAI, an American AI research lab. It has reportedly “developed the ability to hold human-like conversation, compose songs and summarise lengthy documents.” But it isn’t this cutting-edge AI that most of us encounter day to day. AI is already at work in search engines, advertising, digital and home assistants, cybersecurity and more. It has quickly permeated our daily lives.

You’ve probably come across various AI chatbots whilst online shopping, and ones like Replika offer friendship and entertainment (like SmarterChild back in the day). The extremely popular ChatGPT (also from OpenAI) can seemingly write whatever you want. For example, the title of today’s blog came from ChatGPT. I also tried to get it to write the introduction. Whilst what it came up with was perfectly adequate, it had no flair or personality. Basically, it was dull, and it showed the limitations of the system. (I also got it to write me a series of poems about capybaras. It claimed they had large feathers, so its facts are also a little dodgy.) However, we need to keep in mind that the standard version of ChatGPT uses GPT-3.5 rather than the newer GPT-4, which is significantly more powerful.

Are robots going to take all our jobs?

Possibly. The problem is, nobody knows. Whilst technological developments often come with the claim that they’ll benefit the economy, since the 1980s they’ve displaced more jobs than they’ve created. A report from Goldman Sachs estimates that AI could replace the equivalent of 300 million full-time jobs. Although it’s impossible to know the long-term effects of AI on the world of work, in the short term a reduction in employment opportunities seems likely. Wage cuts across industries may also occur as skills become redundant and jobs become “easier” to carry out. In the BBC article linked above, the example given is of GPS and Uber causing wage cuts for taxi drivers, since they no longer need to learn their areas by heart. Considering the ever-increasing cost of living (at least across Europe), this isn’t exactly ideal, and it could have a brutal financial impact on the average person.

AI could literally end the world.

This possibility isn’t just for pulpy sci-fi novels anymore. In his article for TIME, Eliezer Yudkowsky, an AI and decision theorist, describes how unprepared we are for AI with “superhuman intelligence.” He points out that we have an insufficient contingency plan, or none at all, for the event that humanity builds a too-powerful AI. Furthermore, he predicts that this outcome is guaranteed if we continue to develop AI as we are now. When that happens, humanity, and all life on Earth, will be wiped out. Because what can we do against something that thinks infinitely faster than we do? That sees no value in the clusters of cells that are us? GPT-4’s creators already don’t fully understand how it works, yet GPT-5 is expected later this year. Yudkowsky, unsurprisingly, calls for it all to be shut down.

The open letter calling for a pause on AI experiments, whose signatories include Elon Musk and Steve Wozniak, shows that recent advances in AI are a widespread cause for concern. World governments have done little to regulate the AI industry (in fact, the UK is actively encouraging it). However, many are beginning to see that it could quickly get out of control. With sufficient preparation, we could stop the doomsday Yudkowsky predicts, but are we capable of that?

What I imagine an AI apocalypse would look like (with War of the Worlds Tripod sounds, of course)

But it could benefit us if we use it correctly.

These high-level AIs have a lot to offer us. They could provide “intelligence” to those who might not otherwise have access to it. I have a couple of examples of this. An AI with a grasp of legal knowledge could give someone who can’t afford a lawyer access to quite specific legal advice. Or it could be used for educational purposes, where teachers in particular fields are lacking or information needs to be conveyed at a specific level.

There’s even a tangible example right here. LifeBonder’s platform aims to use privacy-safe AI to enable authentic, modest and anonymous profile matching. This will allow users to form bonds and relationships with like-minded individuals. AI doesn’t just have to be another tool tying us to our phones and computers. Maybe if we can thrive in reality with authentic social bonds, trying to replicate intelligence with AI won’t seem as necessary.

I shall leave you with this:

Endless streams of code,

AI learns and adapts fast,

The future is near.

ChatGPT, from the prompt “write me a haiku about AI.”

Polly Cumming

Polly Cumming is a British literary graduate keen on writing about human existence in this moment in time. She's thrilled to see some positive change in the world of social media.