May 2023 Archives

For many reasons, discussions at work have turned to handling, and recovering from, layoffs. A coworker asked a good question:

Have you ever seen a company culture recover after things went bad?

I have to say no, I haven't. To unpack this a bit, what my coworker was referring to is something loooong time readers of my blog have seen me talk about before: budget problems. What happens to a culture when the money isn't there, and then management starts cutting jobs?

In the for-profit sector of American companies, you get a heck of a lot of trauma from sudden-death style layoffs (or, to be technical, reductions in force, since there is no call-back mechanism in the SaaS industry). Sudden loss of coworkers with zero notice, scrambling to cover their work, scrambling to deal with the flash reorganization that almost always comes with a layoff, scrambling to worry about whether you'll be next: it all adds up to a lot of insecurity in the workplace. Or a lack of psychological safety, and I've talked about what happens then.

This coworker was also asking about some of the other side effects of a chronic budget crunch. For publicly traded companies, the forcing effects of the stock market can push you into this budget crunch well before actual cash flow is a problem. Big Tech is doing a lot of layoffs right now, but nearly every one of the majors is still profitable, just not profitable enough for the stock market. Living inside one of these companies means dealing with layoffs, but also things like:

  • Travel budgets getting harder to approve as spending is cranked down.
  • Back-fills for departures taking much longer to arrive, if they arrive at all.
  • Not getting approval to hire people to fill needs.
  • Innovation pressure: having to hit bolder targets with the same resources and the same overhead.
  • Adding a new SaaS vendor becoming nearly impossible.
  • The performance review framework changing to emphasize "business value" over "growth".
  • Reduced support for employee resource groups, like those for disabled employees and employees of color.
  • Reorgs. So many reorgs.
  • Hard turns on long-term strategy.

If a company hasn't had to live through this before, the culture doesn't yet bear the scars. The employees certainly do, especially those of us who were in the market in 2007-2011. Many of us left companies to get away from this kind of thing. That said, the last major down period in tech was over ten years ago; there are a lot of people in the market right now who've never seen it get actually bad for a long period of time. The "Crypto Winter", the fall of many cryptocurrency-based companies, was as close as we've gotten as an industry.

When the above trail of suck starts happening in a company that hasn't had to live through it before, it leaves scar tissue. The scars are carried by the old-timers, who remember what upper management is like when the financial headwinds blow strong enough. The old salts never regain their trust in upper management, because they know first-hand that upper management are face-eating leopards.

Even if upper management turns the milk and honey taps back on, brings out the bread and circuses, and undoes the whole list-of-suck from above, the old-timers (meaning the people in middle management or senior IC roles) will still remember that upper management is capable of much worse. That changes a culture. As I've talked about before, trust is never full again; it's always conditional.

So no, it'll never go back to the way it was before. New people may think so, but the old-timers will know otherwise.

...and why this is different from blockchain/cryptocurrency/web3.

Unlike the earlier crazes, AI is obviously useful to the layperson. ChatGPT finished what tools like Midjourney started, and made the average person in front of a browser go, "oh, I get it now." That is something blockchain, cryptocurrencies, and Web3 never managed. The older fads were cool to tech nerds and finance people, but not to the average 20-year-old trying to make ends meet through three gig-economy jobs (except as a get-rich-quick scheme).

Disclaimer: This post is all about the emotional journey of AI-tech, and isn't diving into the ethics. We are in late-stage capitalism, where ethics is imposed on a technology well after it has been on the market. For a more technical takedown of generative AI, read my post from April titled "Cognitive biases and LLM/AI". ChatGPT-like technologies are exploiting human cognitive biases baked into our very genome.

For those who have avoided it, the art of marketing is all about emotional manipulation. What emotions do your brand colors evoke? What keywords inspire feelings of trust and confidence? The answers to these questions are why every 'security' page on a SaaS product's site has the phrase "bank-like security" on it: banks evoke feelings of safe stewardship and security. This is relevant to the AI gold rush because before Midjourney and ChatGPT, AI was perceived as "fancy recommendation algorithms" such as those found on Amazon and the old Twitter "for you" timeline; after Midjourney and ChatGPT, AI became "the thing that can turn my broken English into fluent English" and was much more interesting.

The perception change caused by Midjourney and ChatGPT is why you see every tech company everywhere trying to slather AI on their offerings. People see AI as useful now, and all these tech companies want to be seen as selling the most useful thing on the market. If you don't have AI, you're not useful; companies that are not useful won't grow, and tech companies that aren't growing are bad tech companies. QED, late stage capitalism strikes again.

It's just a fad

Probably not. This phase of the hype cycle is a fad, but we've reached the point where, if you have a content database 10% the size of the internet, you can algorithmically generate human-seeming text (or audio, or video) without paying a human to do it. That isn't going to change when the hype fades; the tech is here already, and it will continue to improve so long as it isn't regulated into the grave (a sketch after the list below shows how little code this takes). This tech is an existential threat to the content-creation business, which includes such fun people as:

  • People who write news articles
  • People who write editorials
  • People who write fiction
  • People who answer questions for others on the internet
  • People who write HOW TO articles
  • People who write blog posts (hello there)
  • People who do voice-over work
  • People who create bed-track music for podcasts
  • People who create image libraries (think Getty Images)
  • People who create cover art for books
  • People who create fan art for commission

The list goes on. The impact here will be similar to how streaming services affected musician and actor income streams: profound.
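
To make "the tech is here already" concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model. The prompt is my own invented example, and modern hosted models do far better than GPT-2, but the point stands: generating human-seeming text is now a few lines of code, not a salaried writer.

    # Generate human-seeming text with an off-the-shelf open model.
    # Requires: pip install transformers torch
    from transformers import pipeline

    # GPT-2 is small and dated; it's used here only because it runs anywhere.
    generator = pipeline("text-generation", model="gpt2")

    result = generator(
        "The best way to save money on groceries is",  # hypothetical prompt
        max_new_tokens=60,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])

Swap in a bigger model and the output stops reading like a toy; that swap is the entire moat the content-creation business used to have.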

AI is going to fundamentally change the game for a number of industries. It may be a fad, but for people working in the affected industries this fad is changing the nature of work. I still say AI itself isn't the fad; the fad is all the starry-eyed possibilities people dream of using AI for.

It's a bullshit generator, it's not real

Doesn't matter. AI is right often enough to fit squarely into human cognitive biases toward trustworthiness. Not all engines are the same; Google Bard and Microsoft Bing have had some famous failures here, but that will change over the next two years. AI answers are right often enough, and helpful often enough, that such answers are worth looking into. Again, I refer you to my post from April titled "Cognitive biases and LLM/AI".

Today (May 1, 2023) ChatGPT is the Apple iPhone to Microsoft and Google's feature-phones. Everyone knows what happened when Apple created the smartphone market, and the money doesn't want to be on the not-Apple side of that event. You're going to see extreme innovation in this space to try to knock ChatGPT off its perch (being first mover is no guarantee of being the best mover), and the success metric is going to be "doesn't smell like bullshit."

Note: "Doesn't smell like bullshit," not, "is not bullshit". Key, key difference.

Generative AI is based on theft

This sentiment is based on the training sets used for these learning models, and also on a liberal interpretation of copyright fair use. Content creators are beginning to release content under new licenses that specifically exclude use in training sets. To my knowledge, these licenses have yet to be tested in court.

That said, this complaint about theft is the biggest threat to the AI gold rush. People don't like thieves, and if AI gets a consensus definition of thievery, trust will drop. Companies following an AI-at-all-costs playbook to avoid being left behind will have to pay close attention to user perceptions of thievery. Companies with vast troves of user-generated data that already have a reputation for remixing it, such as Facebook and Google, will have an easier time of this because users already expect such behavior from them (even if they disapprove of it). Companies with a high-trust reputation as safe guardians of user-created data will have a much harder time, unless they're clear from the outset about the role of user-created data in training models.

The perception of thievery is the thing most likely to halt the fad-period of AI, not the fact that it's a bullshit generator.

Any company that ships AI features is losing my business

The fad phase of AI means just about everyone will be doing it, so you're going to have some hard choices to make. The people who can stick to this are the kind of people who are already self-hosting a bunch of things, and are fine with adding a few more. For the rest of us, there are harm-reduction techniques, like using zero-knowledge encryption with whatever service we use for file-sync and email (a sketch of the idea follows). That said, even the hold-out companies may reach for AI if it looks to have real legs in the marketplace.
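
What zero-knowledge means in practice: encrypt on your own machine, so the provider only ever stores ciphertext. Here is a minimal sketch in Python using the cryptography library; the file names are hypothetical, and real tools like Cryptomator or rclone's crypt backend do this far more robustly.

    # Encrypt a file client-side before it ever touches the sync folder,
    # so the provider (and anything trained on its data) sees only ciphertext.
    # Requires: pip install cryptography
    from pathlib import Path
    from cryptography.fernet import Fernet

    # Generate once and keep the key OUT of the synced folder;
    # lose the key, lose the files.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    local = Path("journal.txt")  # hypothetical local-only original
    local.write_bytes(b"notes the provider never gets to read")

    synced = Path("journal.txt.enc")  # the only copy the provider sees
    synced.write_bytes(fernet.encrypt(local.read_bytes()))

    # Round trip: recoverable locally, opaque remotely.
    assert fernet.decrypt(synced.read_bytes()) == local.read_bytes()

The trade-off is the usual one: the provider can no longer index, search, or "enhance" your data, which is exactly the point.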


Yeah. Like it or not, AI development is going to dominate the next few years of big-tech innovation.

I wrote this because I keep having this conversation with people, and this makes a handy place to point folk at.