
Stop Telling Me to ‘Use AI’: When AI Becomes Too Normal and We Surrender Our Intelligence.

  • S. A-Benstead
  • Jun 2
  • 10 min read

-- How the Use of AI Negates the Need for Conscious, Critical Effort and Devalues Our Self-Worth.



AI is a great tool when used conscientiously. Thanks to the development of various AI technologies, medical research, surgery, drug development, and even personalised mental health care have taken huge leaps forward—allowing earlier detection and treatment of life-threatening conditions. However, that’s not all we use AI for, is it? Tools like ChatGPT have quickly become the norm in everyday society, from trends like the recent AI action figures and my friends using it like Google, to quick grammar checks and the more extreme rise of AI ‘art’—which takes original works of literature and/or art and reproduces them without consent, acknowledgement or credit—so we end up with ‘quick,’ hollow products putting real human creators on the back foot in an already difficult industry.

Don’t get me wrong, I acknowledge how much easier AI can and does make our lives. I would be lying if I claimed to have never used it to bounce ideas off during my long, continuous job hunt, but lately I’ve found myself wondering, ‘at what cost is this ease of living?’ As a Neurodivergent adult attempting to navigate my 30s while battling ‘Former Gifted Kid Syndrome,’ I worry that, rather than give me an advantage, using AI to ‘help me’ will ultimately do more damage to my mental well-being than the daily employment barriers I already face. Then again, in today’s fast-paced world, despite AI detection software supposedly countering the unfair advantages AI use creates, human ability is measured against AI perfection as job ads ask for ‘OCD-level attention to detail’ in a bed of overly complicated syntax. So can I really afford not to use it?



Human vs Machine


According to a Department for Work and Pensions article published in January 2025, just 31% of people with a neurodiverse condition are currently employed, compared to 54.7% of disabled people overall. That’s a significant employment gap: around 69% of Neurodiverse people are currently out of work, even though they make up approximately 15-20% (1 in 7) of the UK’s adult population. Trust me, this statistic is not due to a lack of trying.

For the past five and a bit months I’ve been actively looking for work while trying to create enough content to market my book, relatively stress free, when the time comes for publication. It is difficult and it is draining. Every rejection email, or worse, complete lack of response, chips away at my confidence and solidifies the belief that maybe I’m just not capable of having the life the high-flying overachiever I used to be envisioned for herself. All my life I’ve been told ‘work hard and you can be anything.’ Well, I didn’t realise ‘anything’ was limited to either being unemployed with Functional Regression, Depression, and continuous burnout, or being employed in the same job I’ve had since university, with Functional Regression, Depression, continuous burnout, and an equally low chance of actual career progression—because the only way I can do what I love while living well is to do it as an unpaid hobby. Which, equally frustratingly, doesn’t seem to qualify as actual work experience.

Despite my personal setbacks, mental health, and the tiring navigation of a system not suited to my needs, I refuse to remain part of the 69% of unemployed Neurodiverse adults. 

Enter AI. 

Recently, well-meaning friends, therapists, and Neurodiverse chat rooms have asked me, "why not just use AI to answer the application questions…to tailor your resume…to plan your book launch…to write your cover letter…blurb…etc…?” Considering how many times I’ve also been asked “Did AI write this?” about my own, personally written, blogs and cover letters, it seems like a no-lose situation to start using AI to actually do it. Even in employment, Neurodiverse employees report sometimes being accused by clients and/or external companies of sounding like chatbots in their emails and written communication. So realistically, using ChatGPT or similar should take the stress and risk of burnout out of the equation—and apparently no one will be able to tell the difference anyway, so I can just relax, right?

Wrong.

I’ve seen a surge of students, both Neurodiverse and neurotypical, coming forward on platforms like Reddit and LinkedIn to ask how to counter the fact that their original work is being flagged as 50-70% AI generated by Turnitin’s AI detection. This is an automatic fail, and in job applications it would mean immediate candidate rejection for a lot of companies. So even if my efforts are already barely making it to human eyes, switching it up and actually getting AI to do it for me still seems counterintuitive. Though with original work being flagged this highly as AI, perhaps it’s safe to assume that a lot of people already using AI are getting away with it through sheer luck, despite being flagged, while non-users are wrongly flagged at the same percentage.

But maybe the way AI learns and exploits human work is actually a vicious cycle fed by human input. The reason people are beginning to be flagged as AI for original work is that so much original work is passed through AI every day in order to determine whether it’s AI or not. The detector learns the ‘correct patterns’ to look for and highlights quotes or ideas as potentially plagiarised content, but as soon as you put anything into the machine, it is no longer your own and can be replicated, and real humans start to lose their voice. Real effort becomes part of the pattern, and the occasional sentence run through ChatGPT or Grammarly becomes suspected plagiarism. Herein lies my reluctance to conform to the ‘ease of AI.’ Yes, it seems my applications are likely already being written off by potential employers as AI; however, for me, the risk of using AI outweighs the ‘easing of that stress.’

It’s a slippery downward slope from point A—getting ChatGPT to structure my application answers—to point B—no longer being capable of structuring them myself. I already struggle with ‘Functional Regression’; the last thing I need is to feel like my remaining abilities as a critical-thinking problem solver are being replaced by a dependency on AI. My final year at university was deliberately 99% essay based because I genuinely loved writing essays and my brain works better that way. I didn’t need AI to complete my degree with 2:1 Hons or write my 133,000-word novel, and I shouldn’t, and refuse to, need it now.

Besides, if I start using AI and actually get the job, do I even deserve it? 


The Singularity Cometh 

Humans are mammals, so what is it that actually separates us from other animals? 

In 1871 Charles Darwin suggested it was our “God-like intellect,” and over 250 years before that William Shakespeare used Hamlet to marvel at the “noble reason” and “infinite” faculties of man. It seems safe to say that what makes us different is our mental capacity to think critically, problem solve and empathise. (Having recently read ‘Eve’ by Cat Bohannon, I am confident it’s not our physicality—if anything, it seems like evolution has been trying to wipe us out, and it’s only because of our cognitive ability to reason that we’ve survived the attempts.)

However, it’s one thing to develop a capacity for ‘noble reason’ and ‘intellect’, and another to maintain it. I argued in last month’s blog post that we need a resurgence of challenging, speculative literature in order to maintain critical thinking and empathy within our polarised society, but it won’t be enough if our need for these skills is constantly called into question by our new all-powerful problem-solver, AI—the use of which minimises how much we actually have to think through our own problems or develop our own arguments. When humans are up against both AI users and AI detection in competing for a fair chance, cognitive reasoning skills, and even the requirement for deeper understanding and research ability, become kind of redundant. Rather than being a tool to help our own advancement, AI could be the very step which finally sees our evolutionary downfall.

In ‘The Singularity is Near’ (2005), Ray Kurzweil predicts a technological singularity—a moment where technological intelligence surpasses human intelligence. This is a topic many of my friends will be familiar with, because ‘The Singularity is coming’ is a phrase I’ve preached almost constantly since I read the book in 2012. At the time, though, I assumed, like many, that Kurzweil’s view of the form it would take was a bit extreme: the singularity is likely coming, but it wouldn’t be the complete human-AI integration he predicted for the latter half of the 2030s, since, as far as I could tell, AI would always need humans to educate it. ChatGPT cannot produce true human emotion when it writes, and it can never be truly original in anything it creates (the failure of AI detection in flagging human voices aside).

It’s also shown time and again in movies and literature that the logical, mechanical approach is not always the morally or ethically correct choice. AI will always require human intellect and ‘noble reason’ to prevent corrupt, detrimental choices, and can therefore never truly surpass it...right?


There are, admittedly, arguments that human creative originality is also a myth: formulaic writing and Archetypal Theory pull universal characters and stories into a limited number of ‘new’ variations, while Intertextuality and Postmodern views suggest originality lies in the reshaping and conversational interconnectedness of the familiar works that came before. Shakespeare 100% did it, and many popular modern writers and artists are clearly heavily influenced by literature or art that they know and love. So maybe AI is just a faster route to the same outcome? But that’s a topic for another day.

 

My point is, if we continue to rely on AI for things we once had to do for ourselves—i.e. structuring and writing essays, drafting, problem solving, analysing text, grammar correction, creating art, and critical thinking/observation—we could be seen as exacerbating our own cognitive regression: the very regression that leads to AI eventually surpassing us in intelligence and sparking the need to, as Kurzweil suggests, merge with AI to extend life, cognition, and our own continuation as a species. We create our own mental downfall, and that downfall makes Kurzweil’s predicted outcome a necessity.

If AI is going to become a threat to humanity, it’s not because it will eventually turn on us as it does in films like ‘Terminator’, ‘The Matrix’ or ‘Age of Ultron’, but because we actively and consciously dumb ourselves down by growing so dependent upon it that we lose that which separates us from other animals. The brain is a muscle, after all, and if you stop using it you lose it. AI becomes the Sapiens and we the Neanderthal left behind—or, more likely, a mass of people easily manipulated to the whims of the few who retain the power to control how AI operates and the information it has access to. The Singularity will more likely take the shape of Jack Paglen’s ‘Transcendence’ or even Richard K. Morgan’s ‘Altered Carbon,’ in which those who cannot afford to merge with AI will be left behind, forced to rebuild cognitive abilities dampened by their original use of AI, but never able to fully compete again. Already, human writers are criticised or accused of being ‘fake’ because they use em-dashes or lists of three, as apparently these are AI traits…but where do you think AI learnt those traits? Here’s a hint…from humans.


Look on the Bright Side  

Don’t get me wrong, there are plenty of instances in which the use of AI is an evolutionary game changer, and frankly, though I’m likely to be one of the poor souls left behind, a future like ‘Transcendence’ (minus the anti-AI groups set on its destruction) sounds incredibly exciting.

The role AI currently plays in medical research, for example, has helped humanity take leaps forward in creating effective vaccines and medications for diseases like cancer or HIV. Tools like DeepMind’s can diagnose eye diseases and breast cancer with an accuracy that rivals human experts, at a speed and scale humans simply cannot replicate alone. Systems like Epic’s and IBM Watson can predict patient deterioration, and interpret and identify genetic mutations much earlier than humans can, leading to better screening and overall survival rates for those affected. AI can even predict and analyse how different compounds interact with the human body and which factors may affect those interactions, lowering the risk to humans during drug trials and development, and shortening the time taken to create new drugs through a comprehensive analysis of existing medicines that would take human scientists years of research. The most recent, high-profile example is how AI was used to counter COVID-19 by identifying which existing drugs might be effective against the virus, allowing medical experts to focus their attention on repurposing those drugs rather than attempting to create entirely new treatments from scratch. And that’s just medicine—I haven’t even mentioned how AI helps to advance prosthetic limbs, cell regrowth, implants, agriculture and climate science, space travel, deep sea exploration...the list goes on!

However, in everyday life—like my job applications, book editing or fact checking—AI feels like a slippery slope towards a less intelligent population and an easier way to cheat in an already vastly imbalanced societal game. ChatGPT isn’t a helpful tool to support your child in completing their assignments; it’s a semi-intelligent crutch, detrimental to their ability to think for themselves. More often than not, if I push ChatGPT to give me its sources, it can’t, or it backtracks to correct an error I hadn’t even realised it had made. It’s definitely faster than Google, but it’s not always as accurate as a human should be when sifting for truth or relevance. Additionally, what seems like a silly little two-second cartoon creation on the wings of a larger AI trend is actually damaging not just to real people and their livelihoods, but also to our environment and personal mental well-being.

Maybe AI is too readily available, and companies setting up AI detection to try and counter its unfair advantages is not enough. Maybe it shouldn’t be available outside medical/scientific research, where it can actually be used to enhance our abilities and well-being rather than wilfully weaken us. Or maybe I am simply the natural result of generational adaptation, the new phase of ‘phones will give you brain damage’ or ‘back in my day you had to wait a week for the next episode of Supernatural!’


Either way, kindly stop telling me to ‘just use AI’ because ‘everyone else does’ or because ‘it will make your life simple right now’—because if the work isn’t my own, why am I doing it? I don’t want to end up relying on a machine to think for me. I would rather succeed and fail on my own merit, because I can learn and grow from that. I would rather write nonsense and have another human correct it for me, so that maybe I can start to remember how capable I still am. Am I being stupid, given I’m likely up against a host of other applicants perfectly happy to use AI to tailor their applications and cover letters? Probably. Maybe one day I’ll be forced to cave—like I did with the iPhone—to once again survive in a world that doesn’t seem to want me. Or maybe I’ll find a way to survive in the darker spaces with my analog brain and dumb little literary novels, following with fascination the new humanoid robots of the coming Singularity.





