
Playbook: AI Rollout in the Workforce



This is rather different from my typical posts. A Playbook is a new kind of post meant to serve as a guide, a strategy, a consultation of sorts. Less speculation, more direction. These entries won't be about observing trends like a philosopher or unloading thoughts like a diary. I'll be more direct and decisive, choosing to make, well, decisions instead of leaving things up in the air or leaving it to the reader to decide.


Once I've looked at all the moving parts and all the hidden mechanics, I'll talk directly to you, the reader, who may well be one of the people navigating it all: the executives, managers, policymakers, startups, and corporations.


This entry, to no one's surprise, is about AI: how it's already changing the workforce, what companies are doing (or not doing), and what they should be doing.


What kicked this off was me thinking about getting a job at a defense contractor and playing out an imaginary conversation in my head, as if speaking with friends, about how safe my job would be if AI started taking over jobs. I argued that my job would be safe because it would require a security clearance, and the most capable AI right now comes mainly from OpenAI or Google. I don't know whether a company like Lockheed Martin, Raytheon, or Booz Allen has anything that capable, and I very much doubt it. So defense contractors would absolutely not trust ChatGPT with clearance-level jobs. Until they can develop their own in-house version of ChatGPT, those roles are safe.


That led me to think about how regular businesses are currently using AI and how they might proceed from here if ChatGPT continues to improve. Businesses should absolutely not try to roll out AI across all possible jobs at once. Even if AI could technically handle it, the PR would be a nightmare. People would hear about it, news would report on it, and headlines would read: "Business replaces entire workforce with AI, dozens fired."


Some companies might be big enough, with enough money and clout, to survive a more cautious version of that kind of transition where just a few departments are replaced. But right now, it seems like most businesses are testing the waters, slow-rolling their AI integration to avoid that backlash.


Here's how I think companies should be thinking:


  • If you’re a startup: If you’ve got the money, then by all means, go crazy and use AI as much as possible. You don’t have employees or full departments to fire, so you’re lucky in that regard.

    • Start with AI baked in from the beginning. Even if it doesn't work out, it's a lot easier to "fire" a bunch of AI, with little or no backlash, than to do the equivalent with humans.

  • If you’re an established small business:

    • Depending on your industry, start utilizing or planning to utilize AI for monotonous roles. AI will only get better from here.

    • Don’t heavily invest in any position that involves sitting at a computer all day doing repetitive tasks. You likely don’t need custom AI yet because ChatGPT’s business-level subscription tier exists, and it works. Take advantage of it.

    • And if your business really starts to take off? Then yes, invest heavily, but instead of building out full departments, invest in people who know how to use ChatGPT Business and understand your business needs. You’ll probably end up with a leaner, faster, more organized team than what you’d get without AI.

  • If you’re a large corporation: You probably already have the capability to roll out AI using ChatGPT. But here’s the thing: that corporate mindset of "If it ain’t made by us, it ain’t working for us" will cause paranoia and hesitance.

    • That hesitation is valid. You don’t want an outside product running internal systems. So here’s my advice: keep waiting, but don’t sit still.

    • Start building a department dedicated to creating and maintaining an in-house ChatGPT-style product.

    • Or begin hiring contractors/third-party companies to create and train an AI specific to your needs. You don’t need to rush it, but you do need to plan.


It feels like ChatGPT is already being used at every level of seniority, across corporations and businesses of every kind. There's even speculation that it's been used by the president! So there's this underbelly where AI is already slowly taking over the workforce, if not quietly steering it, by proxy.


So that raises the question: should a company officially adopt ChatGPT and stop keeping it a secret? Hell, should they encourage it?


I get the feeling a lot of industries are already drafting policy for AI usage. I think about doctors or police officers buried in so much paperwork that it keeps them from doing their actual jobs. AI is absolutely good enough to do that paperwork if it's trained properly and, at the very least, fact-checked to calm the human paranoia.


I would absolutely advise beginning a policy rework to encourage the use of AI on non-confidential paperwork. Allow doctors to be doctors instead of being slowed down by forms.
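For the technically inclined, here's a minimal sketch of that draft-with-AI, fact-check-with-a-human loop. It assumes OpenAI's official Python SDK and an API key in your environment; the model name, the draft_report helper, and the sample notes are all illustrative, not a prescription.

```python
# Minimal sketch: drafting routine, NON-confidential paperwork with an LLM,
# then keeping a human in the loop for the fact-check step described above.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable. The model choice is illustrative.
from openai import OpenAI

client = OpenAI()

def draft_report(notes: str) -> str:
    """Turn rough, non-confidential notes into a formatted draft report."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; use whatever tier your policy allows
        messages=[
            {
                "role": "system",
                "content": "You draft routine workplace reports. "
                           "Mark anything you are unsure about with [VERIFY].",
            },
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_report(
        "Site visit 10am, equipment check passed, two forms outstanding."
    )
    print(draft)  # a human reviews this draft; it is never auto-filed
```

The [VERIFY] tag is the point: a sane policy can require that every flagged line is checked by a person before the document goes anywhere official.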


If you decide to go down the pro-AI policy route, here's some advice:


  • Keep it boring: Don’t make it a big deal. Avoid a big announcement. You’re not launching a new product or rebranding a logo, just updating workplace tools.

  • Use internal communication only: A simple internal memo will do. State that the use of ChatGPT is now allowed, but only in a strict, well-defined sense. It should apply to non-confidential or routine tasks only.

  • Set clear limitations: Frame the policy as a guide. People should know when and how AI can be used, but also when it absolutely should not.

  • Anticipate the public reaction: Even a minor AI policy could go viral if someone screenshots it, so word it like a compliance update. Even then, if the news or social media gets hold of it, there's still a chance it gets spun negatively. "Company encourages workers to use ChatGPT" sounds more dystopian than beneficial to the public.

  • Be prepared to be the sacrificial lamb: I hate to say it, I really do, but if your policy goes viral and starts making headlines, you may be the sacrifice that launches the movement forward. The next section of this post explains what I mean.


AI right now carries a stigma for, in my opinion, ignorant reasons. So it stands to reason that companies are hesitant to utilize AI and are probably fine with their workers thinking they have to use it in secret.


Perhaps all it takes is one brave business, or a group of them, to white-knuckle it and roll out that pro-AI policy. They will get crucified, and they may not survive, but they will open the floodgates for the rest. They take the bullet, and even with all the backlash, the rest of the businesses, companies, and corporations can rush through.

  • It can't be some small business that takes the bullet, because that's a roll of the dice with little chance of the headlines going nationwide.

    • It could still happen, but it's not very likely. The story would have to go viral and make nationwide headlines, or start a snowball effect that ends in them.

  • A group of small companies increases those odds for sure. A massive company would get the same increased odds, but it has a lot to lose, so it's unlikely to take the bullet.

  • There's also just waiting and letting nature take its course: AI keeps improving, the underbelly rollout keeps growing, and eventually it seeps to the surface. By the time people notice, it will be too late for them to stop it.


And I think that’s the most likely scenario. The third option. The quiet path.


So, yes, waiting it out might be the best option, but why stop there? Why not give yourself a head start? Begin preparing by training managers, or those most loyal to the company, on AI usage. Those most loyal are less likely to spill secrets.


They’ll build internal AI fluency under the radar. So when, not if, the tides turn, you and your team will be ahead of the rest.


Now, about AGI...


OpenAI launched the LLM explosion, and now, somehow, smaller companies have LLMs too. That’s interesting. So the question becomes: if OpenAI achieves AGI, do those smaller companies suddenly have AGI too?


Capitalism says yes. And honestly? I agree.


The atomic bomb was gatekept and created in secret, yet the Soviets had one just a few years later. No technology can stay truly hidden forever. Government is slow; it will not be fast enough to recognize AGI's emergence and stop it from spreading.


Zero chance.


So yes, AGI will be accessible to other companies just like LLMs are today.


OpenAI and Google are basically running amok with LLM tech and AGI-focused missions, and the government has barely stepped in. So when AGI drops, even if it starts locked up in a lab, it will not stay there. Someone will replicate it. Someone will leak it. Or someone will build a version that's good enough. It will trickle down.


Companies likely won’t have the infrastructure or budget to build it themselves. It’ll be too expensive. So again, they’ll lean on the underbelly. They’ll use it indirectly, through tools, through employees, through unofficial channels.


Which means: if AGI exists, everyone gets it eventually.


Not immediately. Not safely.


But inevitably? Absolutely.


So whether you’re leading a startup, a corporation, or just trying to figure out where the wave is headed, the play is simple: be smart, be quiet, and prepare behind the scenes. The AI shift is already happening. You can fear it, deny it, or quietly get ahead of it.


And of course, this is all my opinion. This is not legal advice.


Good luck out there.
