Me v AI: An Experiment

I woke up today to an anxiety-inducing headline that preyed on one of my biggest ongoing fears: AI taking our jobs. Recently that fear had been eased by what I would call an enlightening period. After an enormous layoff season, we had begun to come to terms with AI's limits (hallucinations and whatnot) and to realize AI was nowhere near the point of replacing actual human work, especially in software development. But this new prediction from Anthropic CEO Dario Amodei has reignited that fear:

“The possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.”

When I told my mom about the current NIST CSF project I'm working on, she even mentioned to me, "Make sure you produce something AI couldn't." I was actually stumped. There is no question that AI can far outwork me when it comes to summarizing large amounts of data in a fraction of the time. While I started this project not to finish it quickly but to actually learn the material, I started to worry: How would anyone online be able to tell that I actually wrote this material? Aside from my own personal satisfaction and the expertise I would gain from doing it myself, what is stopping me from just using AI to write the whole thing?

I will gladly admit I use an LLM on a daily basis already. I am no Luddite, and I welcome the ease of operation that AI brings. Whether it be for something I could easily Google like diagnosing a dying plant, assistance troubleshooting a broken-down network, or guidance on coding a project from scratch, ChatGPT has sneakily become my top mentor and confidant.

I even used it during the NIST CSF project. At any given point in time, my ChatGPT tab is open to answer any questions I have about compliance frameworks and to point me to relevant resources. And I don't plan on stopping. Hey, I'm not paying $20 a month to not use it. However, for right now, as Amodei stated in the Axios article, I am simply using AI for "augmentation — helping people do a job" (VandeHei & Allen, 2025). Apparently this won't be the case for much longer. We are moving towards automation.

Funnily enough, yesterday, before I was even aware of the Axios article, I went down a YouTube rabbit hole of AI predictions in the cybersecurity realm. My mom's comment a few weeks ago, that I shouldn't be investing time in anything AI could do in seconds, had planted a seed in my brain. I needed to find out what AI's weaknesses are. Naturally, I first had to go straight to the source:

The response I received did not curb any of my fears. In full, it answered:

  • Define Your Unique Angle
  • Use Practical Demonstrations
  • Add Industry Commentary
  • Pull in Interviews or Quotes
  • Create Contextual Series
  • Show Your Own Journey
  • Infuse Your Personality

… Not quite encouraging. So the only things I could do were have a personality, interview someone, or already be a subject matter expert? And even then, I've seen AI take on a personality before. It's not hard to replicate.

In the YouTube videos I watched yesterday, I listened to an hour-long webinar between 6clicks CEO Anthony Stevens and GRC pundit Michael Rasmussen as they discussed the future of GRC and AI. It was originally posted last year, and much has changed since then. At the time, they concluded AI would not be overtaking any SME work, only automating processes such as compliance mapping, gap analysis, and control testing. They agreed AI would not replace humans, only act as our “co-pilot” (6clicks, 2024).

I don't think that's necessarily true anymore. Or at least, it won't be for much longer. Yes, we are still not confident enough in AI to automate security actioning (since we have seen the very real hallucinatory output from AI), but let's be real: what does a GRC analyst do that AI can't?

Trust me, as someone who thought they had finally found their niche in GRC, it does break my heart to say it. With the criminal justice background that I have, I loved the idea of being able to regulate technology with legislation. I love doing research. I love writing. I love presenting. And so does AI.

As I continue my NIST CSF project (and yes, I will continue it, because as ChatGPT itself said, nothing can replace real-world, lived-in experience), I also want to run a side experiment. I'm not going to stop using ChatGPT during this project. However, I want to see a side-by-side comparison: a human using AI augmentation versus pure AI automation. Yes, this is subject to change depending on how rapidly new iterations of AI are produced, but for now, let's see where we are.

At the end of my project, I will have graded the VA's policy and practice maturity against the NIST CSF 2.0 and NIST Privacy frameworks. I will post my results. Then I will have ChatGPT generate its own findings. We will see what we concluded differently and ultimately decide who is irrelevant: me or AI?
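For what it's worth, once both sets of gradings exist, the side-by-side comparison itself is the easy part to automate. Here's a rough sketch in Python with entirely made-up placeholder numbers (these are not my actual findings). The six function names come from NIST CSF 2.0, and the 1–4 scale loosely follows the framework's implementation tiers:

```python
# Hypothetical side-by-side comparison of maturity ratings.
# Tier scale (per NIST CSF implementation tiers):
# 1 = Partial, 2 = Risk Informed, 3 = Repeatable, 4 = Adaptive.

# Placeholder ratings only -- NOT real results from my project.
human_ratings = {"Govern": 2, "Identify": 3, "Protect": 2,
                 "Detect": 1, "Respond": 2, "Recover": 1}
ai_ratings    = {"Govern": 2, "Identify": 2, "Protect": 3,
                 "Detect": 1, "Respond": 2, "Recover": 2}

def compare(human, ai):
    """Return per-function deltas (AI minus human) and the
    fraction of functions where the two ratings agree."""
    deltas = {fn: ai[fn] - human[fn] for fn in human}
    agreement = sum(d == 0 for d in deltas.values()) / len(deltas)
    return deltas, agreement

deltas, agreement = compare(human_ratings, ai_ratings)
for fn, d in deltas.items():
    print(f"{fn:10s} human={human_ratings[fn]} ai={ai_ratings[fn]} delta={d:+d}")
print(f"Agreement: {agreement:.0%}")
```

With these placeholder numbers, the two "assessors" agree on half the functions. The interesting part of the experiment won't be the arithmetic, of course, but where the deltas come from.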


SOURCES

VandeHei, J., & Allen, M. (2025, May 28). Behind the Curtain: A white-collar bloodbath. Axios. https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic

6clicks. (2024, August 15). AI and the Future of GRC [Video]. YouTube. https://youtu.be/srFZoE_3rR8
