Prompt Engineering Problems I Discovered (That You Might Be Facing Too)

Photo by Solen Feyissa / Unsplash

By someone who tried to automate too much, too fast

I’ve been running an automated daily prompt pipeline for a while now... mostly to learn, to explore trends, and to see how GPT evolves in tone and insight.
At first it felt magical. But then, without expecting it, I started getting a strange sense of déjà vu.

I simply call this the “Déjà Vu Stage.”

Even when I changed the topic, the output felt… familiar.
Not identical, but the same tone, the same structure, the same type of “safe” conclusion.

It wasn’t plagiarism. It wasn’t copying.
But something about it made me feel like the model didn’t know it had already said all of this before.

I want to understand why.

I dug deeper

Here's what I found and did.

  • LLMs have no memory: each API request is processed statelessly, so the model doesn’t know what it said yesterday or last week.
  • The obvious fix is to add memory. But wait... ChatGPT already has memory, right? It does, but that feature lives in the ChatGPT product; API calls are stateless.
  • There’s no API parameter to “enable memory.” The system treats each request independently.
  • I searched “How do I enable or disable memory in the API?” and found OpenAI’s announcement of memory controls for ChatGPT: https://openai.com/index/memory-and-new-controls-for-chatgpt/
  • Every OpenAI API call is standalone; it doesn’t remember previous interactions.
  • I initially resolved it by managing the session myself (see the sketch after this list).
  • I also did it via a Milvus vector database, because I know how.
  • There are better ways via the prompt itself.
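
Here’s a minimal sketch of that session approach, assuming the official openai Python SDK (v1+); the model name and the ask helper are illustrative, not exactly what I ran:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You write a daily trends post."}]

def ask(prompt):
    """Send a prompt along with every previous turn, then record the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,     # replaying the whole conversation IS the memory
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply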

Same Prompt = Same Output Pattern

Write about the rising trends and problems {{X}} in {{country}} today.

Then the structure will almost always be:

“This trend is rising…” → “People are talking about it…” → “It might have big effects…”

So to avoid this, I added a layer to make my prompt dynamic: more data and more variables to spin the prompt with. I passed in weather, trends, mood, political sentiment, etc.
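
A minimal sketch of that spinner layer; the variable pools here are illustrative (mine were fed by live data):

import random

# Illustrative pools; in practice these came from live feeds (weather, trends, sentiment).
COUNTRIES = ["Spain", "Japan", "Brazil"]
TRENDS = ["heatwaves", "housing costs", "AI adoption"]
MOODS = ["optimistic", "skeptical", "urgent"]

def spin_prompt():
    """Fill the template with random picks so each run reads differently."""
    return (
        f"Write about the rising trends and problems {random.choice(TRENDS)} "
        f"in {random.choice(COUNTRIES)} today, in a {random.choice(MOODS)} tone."
    )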

What Actually Worked (Without Overbuilding)

I didn't want to solve it the hard way by complicating the architecture; it's much cheaper to find ways to spin around the problem. So here are my suggestions.

Better Prompts, Not More Tools


I stopped running extra services and just rewrote the prompt clearly:
“Avoid repeating the tone or structure from previous days. Start with a new angle or metaphor.”
I also kept a collection of variables I could respin (country, trend, mental model, intro style).

Rotating Mental Models and Criteria


Each weekday followed a different thinking pattern...just enough to create variation without losing flow.
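
Something like this, where the weekday picks the pattern; the mental models listed are just examples:

import datetime

# One thinking pattern per weekday (illustrative picks).
MENTAL_MODELS = {
    0: "first principles",      # Monday
    1: "second-order effects",  # Tuesday
    2: "inversion",             # Wednesday
    3: "contrarian take",       # Thursday
    4: "historical analogy",    # Friday
}

def todays_mental_model():
    weekday = datetime.date.today().weekday()  # Monday == 0
    return MENTAL_MODELS.get(weekday, "first principles")  # weekend fallback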

Light Hash Check


Instead of setting up a full vector database, I stored a hash of each post's intro. If a new intro's hash matched a recent one, I regenerated. This can waste some tokens, but there are ways to keep the waste bounded.

import hashlib

def get_light_hash(text):
    """Return a lightweight fingerprint of the content."""
    # Normalize so trivial differences (case, spacing) still collide.
    clean_text = text.strip().lower().replace(" ", "")
    return hashlib.md5(clean_text.encode()).hexdigest()

# Example use
generated_intro = "Spain is currently experiencing a major heatwave, sparking national concern."
light_hash = get_light_hash(generated_intro)

# Store light_hash and compare it against the hashes of recent posts
print(light_hash)
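
And roughly how I wired it in, building on get_light_hash above; generate stands for whatever function produces a new intro, and the retry cap keeps the token waste bounded:

MAX_RETRIES = 3
recent_hashes = set()  # in practice, persist this across daily runs

def fresh_intro(generate):
    """Regenerate until the intro's hash hasn't been seen recently."""
    intro = generate()
    for _ in range(MAX_RETRIES):
        h = get_light_hash(intro)
        if h not in recent_hashes:
            recent_hashes.add(h)
            return intro
        intro = generate()  # duplicate detected: spend tokens on a retry
    return intro  # cap the retries rather than burn tokens forever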

Use logit_bias or temperature to nudge freshness

When regenerating after a duplicate, slightly change the following (see the sketch after this list):

  • Temperature (e.g., 1.0 → 1.3)
  • Add randomness in tone or hook
  • Use logit_bias (advanced) to avoid certain tokens
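
A sketch of that retry call, assuming the openai SDK plus tiktoken for token IDs; the banned opener, the encoding, and the model name are all illustrative. logit_bias maps token IDs to a bias between -100 (ban) and 100 (force):

import tiktoken
from openai import OpenAI

client = OpenAI()
enc = tiktoken.get_encoding("o200k_base")  # assumption: matches your model's tokenizer

# Push the model away from its favorite opener by banning those tokens.
banned = {str(tok): -100 for tok in enc.encode("This trend is rising")}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Write about rising trends in Spain today."}],
    temperature=1.3,      # bumped from 1.0 for the retry
    logit_bias=banned,    # advanced: steer away from specific tokens
)
print(response.choices[0].message.content)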

Personal Pattern Recognition


I started trusting my gut. If it felt like déjà vu, I didn’t need a tool to prove it... I rewrote the input.

Stopped Over-Architecting


I realized I was building too many systems...vector databases, embeddings, fingerprinting layers...just to fix something that was already solvable with better prompts.
I'm already paying for the compute. The power is there.
The answer was always in the prompt.
That’s why it’s called prompt engineering, not infrastructure engineering (though infrastructure still helps for enterprise-scale solutions).
