Here’s why Google’s Gemini AI getting a proper memory could save lives
There’s far too much negativity and fearmongering around AI today. It doesn’t matter what news story breaks – if it’s about Google Gemini getting a ‘memory’ or ChatGPT telling a user something that’s plainly wrong, it’ll cause uproar from some part of the online community.
The attention AI currently commands, particularly around the prospect of true artificial general intelligence (AGI), has created an almost hysterical media landscape built on Terminator fantasies and other doomsday scenarios.
That’s not surprising, though. Humans love a good Armageddon – heck, we’ve been fantasizing about it enough over the last 300,000 years. From Ragnarok to the Apocalypse to the End Times, and every major fantasy blockbuster littered with mass destruction in between, we’re obsessed. We just love bad news, and that’s the sad truth of it, for whatever genetic reason that may be.
The way AGI is painted these days, by pretty much every major vocal outlet, stems from this idea of it embodying the very worst of humanity. It, of course, sees itself as a superior force hampered by insignificant humans. It evolves to a point where it no longer needs its creators and inevitably ushers in some form of end-of-world event that wipes us all off the face of the earth, whether through nuclear annihilation or a pandemic. Or, worse still, eternal damnation (courtesy of Roko’s Basilisk).
There’s a dogmatic belief in this kind of perspective held by some scientists, media experts, philosophers, and big tech CEOs, all of them shouting about it from the rooftops, signing open letters, and begging those building these systems to hold off on AI development.
All of them, though, overlook the bigger picture. Setting aside the massive technological hurdles involved in even approaching something that remotely resembles the human mind (let alone a superintelligence), they all fail to appreciate the power of knowledge and education.
If an AI does have the internet at its fingertips, the greatest library of human knowledge that’s ever existed, and is able to understand and appreciate philosophy, the arts, and all of human thought up to this point, then why must it be some evil force intent on our downfall rather than a well-balanced and considerate being? Why must it seek death rather than cherish life? It’s a bizarre phenomenon, akin to being afraid of the dark just because we can’t see in it. We’re judging and condemning something that doesn’t even exist. It’s a perplexing piece of conclusion-jumping.
Google’s Gemini finally gets a memory
Earlier this year, Google introduced far greater memory capacity for its AI assistant, Gemini. It can now hold and refer back to details you’ve given it in previous conversations, and more. Our news writer Eric Schwartz wrote a fantastic piece about that, which you can read here, but the long and short of it is that this is one of the key components to moving Gemini further away from a narrow definition of intelligence and closer towards the AGI mimicry we really need. It’s not going to have consciousness, but through patterns and memory alone, it can very easily mimic an AGI interaction with a human.
Deeper memory is critical to the improvement of LLMs (large language models) – ChatGPT had its own equivalent breakthrough earlier in its development cycle, though by comparison even that is limited in scope. Talk to ChatGPT long enough and it’ll forget comments you made earlier in the conversation; it’ll lose context. That breaks the fourth wall somewhat when interacting with it, torpedoing the famous Turing test in the process.
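To make that a little more concrete, here’s a minimal, purely illustrative sketch of how a persistent memory layer can sit on top of an otherwise stateless model: facts get saved between sessions and injected back into each new prompt. Every name here (call_llm, user_memory.json, and so on) is hypothetical, and this is not how Gemini or ChatGPT actually implement their memory features.

```python
# Hypothetical sketch: layering persistent 'memory' on top of a stateless LLM.
# Not representative of Gemini's or ChatGPT's real implementations.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # persists facts between sessions


def load_memory() -> list[str]:
    """Return previously saved facts about the user, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_fact(fact: str) -> None:
    """Append a new fact (e.g. 'prefers to be called Sam') to the store."""
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))


def build_prompt(user_message: str) -> str:
    """Prepend remembered facts so the model can 'recall' earlier sessions."""
    facts = load_memory()
    memory_block = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Things you remember about this user:\n"
        f"{memory_block or '- (nothing yet)'}\n\n"
        f"User: {user_message}\nAssistant:"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (Gemini, ChatGPT, a local model, etc.)."""
    raise NotImplementedError


# Example usage:
#   save_fact("lives alone and enjoys gardening")
#   reply = call_llm(build_prompt("What should I plant this spring?"))
```

The point of the sketch is simply that without something like this, every conversation starts from a blank slate; with it, the model appears to remember you from one chat to the next.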
According to Gemini itself, its memory capabilities are still under development (and not really disclosed to the public), yet it believes they are vastly superior to ChatGPT’s, which should alleviate some of those fourth-wall-breaking moments. We might be in for a bit of an LLM memory race right now, and that’s not a bad thing at all.
Why is this so positive? Well, I know it’s a cliché for some – we use the term so often, and so nonchalantly, that it’s been devalued as a phrase – but we’re in the midst of a loneliness epidemic. That might sound ridiculous, but studies suggest that social isolation and loneliness can raise all-cause mortality by a factor of anywhere between 1.08 and 1.48 (Andrew Steptoe et al., 2013). That’s astonishingly high. In fact, a number of studies have now linked loneliness and social isolation to an increased likelihood of cardiovascular disease, stroke, depression, dementia, alcoholism, and anxiety, and even to a greater chance of a variety of cancers taking hold.
Modern society has contributed to this as well. The family unit, where generations lived at least somewhat close to one another, is slowly dissipating – particularly in rural areas. As local jobs dry up and the financial means to afford a comfortable life become unattainable, many are moving away from the safety of their childhood neighborhoods in search of a better life elsewhere. Combine that with divorce, breakups, and being widowed, and you’re inevitably left with a rise in loneliness and social isolation, particularly among the elderly.
Now, of course, there are confounding factors here, and I am making some inferences off the back of this, but there’s no doubt in my mind that loneliness is a hell of a thing to deal with. AI has the capacity to alleviate some of that stress. It can provide help and comfort to those who feel socially isolated or vulnerable. That’s the thing: loneliness and being cut off from society have a snowball effect. The longer it goes on, the more social anxiety you develop, the less likely you are to go out in public or meet people, and the worse the cycle becomes.
AI chatbots and LLMs are designed to engage and converse with you. They can alleviate these problems and give those who suffer from loneliness an opportunity to practice interacting with people without fear of rejection. Having a memory capable of holding on to conversational details is key to making that a reality – and, taking it a step further, to AI becoming a bona fide companion.
With both Google and OpenAI actively bolstering memory capacity for Gemini and ChatGPT alike, even in their current forms these AIs have a better chance of circumventing Turing test issues and stopping those fourth-wall-breaking moments from occurring. Swinging back around to Google for a moment: if Gemini’s memory really is better than ChatGPT’s currently limited capacity, and it behaves more like human memory, then I’d argue we’re close to the point of calling it a true mimic of an AGI, at least on the surface.
If Gemini is ever fully integrated into a home smart speaker, and Google has the cloud processing power to back it all up (which I’d suggest it’s pushing for, given its recent moves to secure nuclear energy), it could become a revolutionary force for good when it comes to reducing social isolation and loneliness, particularly among the disadvantaged.
That’s the thing, though: it’s going to take some serious computational grunt. Running an LLM while holding on to all that information and data is no small task. Ironically, it takes far more computational horsepower and storage to run an LLM than, say, to create an AI image or video. Doing this for millions, or potentially billions, of people requires processing power and hardware that we simply don’t have yet.
Terrifying ANIs
The reality is that it’s not AGIs that terrify me. It’s artificial narrow intelligences, or ANIs – the ones that are already here – that are far more bone-chilling. These are programs that aren’t as sophisticated as a potential AGI; they have no concept of anything beyond what they are programmed to do. Think of an Elden Ring boss. Its sole purpose is to defeat the player. It has parameters and limitations, but as long as those are met, its one job is to crush the player – nothing else, and it won’t stop until that’s done.
Remove those limitations and the code remains, with the objective unchanged. In Ukraine, as Russian forces began using jamming devices to stop pilots from flying drones into their targets, Ukraine switched to using ANI to guide drones onto military targets instead, drastically increasing the hit rate. In the US, there’s of course the fabled story of the USAF’s AI simulation (whether real or merely hypothesized) in which a drone killed its own operator to achieve its goal. You get the picture.
It’s these AI applications that are the most terrifying, and they’re here now. They have no moral conscience or decision-making process. Strap a gun to one and tell it to wipe out a target, and it’ll do just that. To be fair, humans are equally capable, but we have checks and balances in place to stop that, and (hopefully) a moral compass – yet we still lack concrete legislation, local or global, to counteract these AI issues, certainly on the battlefield.
Ultimately, this all comes down to preventing bad actors from taking advantage of emerging tech. A while back, I wrote a piece on the death of the internet and how we need a non-profit organization that can react rapidly and draft legislation for countries to counter emerging technological threats. AI needs this just as much. There are organizations pushing for it – the OECD, for example – but modern democracies, and in fact governments of any form, are simply too slow to react to these rapidly advancing threats. The potential of AGI is unparalleled, but we’re not there just yet – and unfortunately, ANI is.