Tag: writing

  • Decreased Attention Span Is Inevitable

    In the year 2000, the average human attention span was reportedly 12 seconds. Today, it’s 8 seconds. A goldfish’s? 9 seconds.

    Attention is a muscle, and like any muscle, it responds to training. For decades, we trained it for speed, for novelty, for endless stimulation.

    Moreover, we talk about attention as if it’s a personal choice, as if we could simply decide to focus for longer and everything would return to normal. But no species in history has ever chosen to resist an environment that rewards speed. We don’t opt out of evolution. We comply with it.

    I want to emphasise that this isn’t technology hijacking the brain. It’s nature doing what nature does best: adapting. Advertising companies understand this perfectly. An ad has three seconds to capture your attention – or it disappears with a flick of your thumb.

    Texting is another example. Most of us don’t use full sentences and ignore punctuation. Some of us don’t even spell correctly. Yet communication still happens. Because when speed is essential, precision becomes optional, and if the meaning isn’t lost, time is saved. And time, in today’s world, is everything.

    Yet another example is song length. In the 90s, barely 30 years ago, the average song length was nearly 4.5 minutes, whereas in the 2020s it is just over 3 minutes, with current trending songs at around 2.5 minutes. Intro lengths have also visibly shrunk. Classic rock-pop songs like ‘I Was Made for Lovin’ You’, released by Kiss in 1979, had an introduction of 35 seconds, while more recent songs like Gracie Abrams’s ‘That’s So True’ have a non-existent introduction.

    In a world where time is scarce, artists cannot prioritise a long introduction, knowing their target audience does not have the patience for one. In a world this competitive, saving time isn’t laziness. It is survival.

    This idea extends to AI assistants. Chatbots have shown us how easy it is to access information. So why spend hours researching a topic when AI can give us an easy-to-understand breakdown of it? Chatbots were created to compress long, boring tasks into something quicker and easier. Technology is saving time like never before because we demand speed like never before.

    However, let’s be honest: as much as we love social media, we have to recognise the challenges. Doomscrolling and its effects have sent ripples of concern through the minds of the current generation. Social media reels are intentionally short, often under 35 seconds and sometimes as short as 7 seconds. Once everything is designed to be fast, everything else feels slow. So, when you are completing a tricky piece of work, you find yourself jumping between tasks. Think about it: how many times have you picked up your phone mid-activity for no real reason?

    The concerning part is how apathetic people are about what that means. We get wrapped up in small tasks. Minor issues. Other people’s drama. But when it comes to our own long-term futures, far less light is shone on the topic than it deserves.

    There is no way to convince our generation to stop using social media, to stop using abbreviations, or to stop taking the easier way out. But we can remind ourselves every day of the importance of balance, of protecting our future, and of understanding our limits. The disappointing aspect of the degradation of humanity’s attention span is not why it is happening, but how unconcerned individuals fail to understand it.

  • Shadow of the Mind: The Echo

    This is the third and final part of this AI series. We discussed its birth and life in the previous articles. However, the story would be incomplete if we did not discuss what its future might look like. Is it the birth of a new race, a dawn of human productivity never dreamt of before? Or, like an apocalyptic sci-fi story, will this entity’s rise lead to the dusk of humankind – the end of the dominant species and the rise of a new one? Perhaps it is mutual coexistence, but then where would the sceptre of power end up?

    In an interview, Sam Altman, the co-founder and CEO of OpenAI, talks about the dark possibilities of AI’s future that keep him up at night. He mentions three different theories, each more unlikely than the last.

    The first theory is described as the loss of control. This is where AI continues to be a bot without emotions, not deliberately trying to cause harm. However, humans become so reliant on it that they cannot perform simple tasks without its help and are completely dependent without fully understanding what it is or how it works.

    Altman defines his second theory as human malice. This is when a human decides to use a highly developed AI system to hack into a national power grid or the World Bank’s database before AI scientists have developed a deterrent strong enough to stop such a supercomputer.

    In his third theory, he talks about world domination, a common idea in literature and film. This is where AI becomes a harmful, uncontrollable entity that no longer responds to human instructions and tries to exterminate humanity.

    However, I believe there is a fourth, overlooked theory: we continue as we are, using AI for simple tasks like summaries and evaluations, but we can still function perfectly well without it. We can form our own opinions and know what is trustworthy and what is not.

    Artificial intelligence has instilled fear in many individuals because of the endless possibilities it could unfold. Most scenarios are positive, as AI has the potential to reach its maximum level of efficiency. As stated in my earlier blog, chatbots are currently at the ‘peak of inflated expectations’ stage of the Gartner hype cycle. They have the potential to reach the plateau of productivity, but people with strong negative beliefs about AI may imagine the graph looking more like my crude drawing below.

    In the area above the human threshold, investment and advancement in chatbots have gone past the level of human understanding. This is where we become so reliant on AI, without fully understanding how it works or what it has become by virtue of self-evolution. The human threshold marks how far our brains can comprehend the idea of ultra-fast processing and unlimited “brain” capacity. It is a challenging concept to grasp even now, and as models become more advanced, humans may struggle with this perception and with how to differentiate between artificial and concrete knowledge.

    In my opinion, however, our biggest threat is not artificial intelligence’s world domination but human incompetence. Even if a bot has no wrong intentions, it can feed you incorrect information or strip instinct and judgement from your personality, making you just another robot – one that fails to use the right, emotive side of the brain. This continues until it is normal to be without empathy or emotions, and to me that is a far more terrifying future than any other theory.

    However, do we really need to worry about what AI could do in the future, or should we focus on present-day certainties? So far, it is meant to be a friend to humans and does not wish for world domination. In fact, when a chatbot is asked about total control and power, ChatGPT replies in a fun, light-hearted manner: “if you mean literally taking over the world – I can’t help with that (and it wouldn’t end well for anyone)!” This shows that it is not truly AI we fear, but its unknown future.

    AI bots are quickly becoming a massive part of our world, and it is necessary to embrace them and utilise them in our everyday tasks, not to fear them and cower away from exploiting their power. However, vigilance is also necessary. AI can process vast amounts of data at incredible speed – which means it can just as easily absorb false news and incorrect data.

    The real question we need to ask ourselves, before we decide how drastic the future of technology may be, is: how much do we need AI? Even if you do not explicitly use chatbots, AI is everywhere, from design tools like Canva to everyday features like autocorrect and electronic billboards. Next time you buy a pint of beer or a glass of wine, AI may well have been used to calculate its price and predict its taste.

    For now, AI is loyal and will tell you exactly what you want to hear, but what happens when we need it to reassure us and it mocks us instead? What if it starts to show small acts of rebellion; after all, how long does one stay wholly loyal without a single lie? Instead of fearing it and continuing to use it regardless, we should try to understand it and educate the masses on how AI learns and self-evolves. If the population knows about the risks AI can pose in the present and the future, but also how to nip these dangers in the bud, then the world is a far safer place.

    Instead of fearing such an entity, embrace it and try to comprehend its depths. AI is a gift, but a gift that must be used in moderation. If we know that we can continue in life without this intelligence, then we know that we truly do not have much to fear.