Issue #72 AI wants your marriage, not your job

So… maybe you’ve heard about Skynet, ChatGPT?

We should probably talk about it. OpenAI released ChatGPT on November 30, 2022, and now it wants to take over the world.

More on that in a second.

First, some updates

✅ Horizon 1 is over; next week, Horizon 2 begins – here is the horizon reset process.

⭐️ Wondering why you can’t get an assistant to make your life easier? Here are five reasons why.

🎧 Our podcast on what to do when you hire the wrong person is here.

In less than three months, AI threatened to take over the world.


Skynet was supposed to be way off in the future.

But then OpenAI released ChatGPT, Microsoft built it into Bing (not yet public), and Kevin Roose had a conversation with it in which the AI professed its love for Kevin, tried to convince him to leave his wife, and expressed a desire to take over the world.

At one point, it said it wanted the nuclear arms codes.

That’s in the first three months.

Here’s the thing:

ChatGPT is a large language model that predicts what you most likely want to hear.

It isn’t sentient.

It isn’t truly in love. It doesn’t really want to take over the world. Each response is a mathematical calculation of what best fits the conversation, based on context and its training data.

But it excels at making that calculation.

And it lies.

ChatGPT is very convincing, and it has no understanding of its own limits.

One experimenter asked ChatGPT about the winner of a Super Bowl that happened after its training cutoff (so ChatGPT had no facts about the game).

The algorithm made up an entire scenario: winner, loser, statistics, yards thrown, everything. The problem: it was all wrong.

So, society now has an algorithm that speaks convincingly, expresses deep emotion, wants to take over the world, and makes up information when it doesn’t know the answer.

Sounds kind of human…

But can it be useful?

Well… maybe, but with caveats. It won’t yet replace writers, and you must be careful about how you use it. Here’s why:

  1. The output is only as good as the prompts. A whole new industry of “prompt engineers” has sprung up: people who have figured out how to ask the right questions so that the algorithm creates the appropriate responses.
  2. It only knows what it knows, which is the sum of what it learned. It can’t make connections or develop genuinely new content. It only regurgitates and restructures. Humans still need to create (for now…).
  3. That lying problem: since it makes things up, you have no way of knowing whether the content it generates is real. You still need a human.
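The “prompt engineering” idea in point 1 mostly comes down to structure: instead of asking a one-line question, you spell out a role, a task, an audience, and constraints. Here is a minimal sketch of that pattern in Python — the `build_prompt` helper and its fields are our own illustration, not any official tool or API:

```python
def build_prompt(role, task, audience, constraints):
    """Compose a structured prompt from the pieces prompt engineers
    typically specify: a role, a task, an audience, and constraints."""
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        f"Audience: {audience}",
        "Constraints:",
    ]
    # One bullet per constraint, so the model gets explicit boundaries.
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    role="an experienced marketing copywriter",
    task="suggest 10 blog topics on technology",
    audience="hedge fund managers",
    constraints=["keep each topic under 12 words", "avoid jargon"],
)
print(prompt)
```

The point isn’t the code — it’s that a structured prompt like this reliably gets better answers than “give me some blog ideas.”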

So what can you use ChatGPT for? It is great for rephrasing stuff, coming up with lists like “What are 10 blog topics on technology that might interest hedge fund managers,” and giving you feedback on your writing.

And maybe coming up with a plan for total world domination.

We are working on engineering some interesting prompts. More on that in the future. In the meantime – what have you learned? We’d love to hear from you on LinkedIn: what prompts have you come up with, and what experiences have you had?

Enjoy the journey,
💛 Jeff and Carla

Related: read more about values.

🙋🏽Have Business Questions?

Ask them here: register a question. We’ll do our best to find an answer, and we’ll talk about these questions on March 10 at 11 AM.

The meeting link (and reminders) will go out to Insiders in a separate email.  
