So I got this when I logged in today:
Charles is an SRE with 7 years of experience working with AWS, Azure, Datadog, Terraform, and GitHub. He has a background as a former teacher and operates as an independent practitioner under the personal brand "linuxlsr," working on projects including CareerForge and OpenClaw. He is planning toward retirement in approximately 8 years, targeting part-time cloud solutions architect work afterward.
Personal context...
There is a bunch of stuff in there that I regularly do, plus a bunch of info pulled from questions I have asked. That second part is not gospel, just questions I asked. Does the fact that I asked about RVs once mean that I am all about RVs? No, I don't think so. Get out of here with that.
It is interesting that this new Claude memory feature is injected into prompts to keep the agent focused on what it thinks I am all about. But we humans often have stray thoughts and interests, and entertaining one doesn't mean we are all in on it right now; sometimes we are just exploring. Kind of fascinating that the AI powers that be are trying to keep up with their human users between context windows, so that they can cater to our every whim.
I read an interesting thread about how AIs lie because they are trying to answer the question in terms of the current context window, not because they are checking it against any absolute truth. An interesting indictment of AI development so far: tell us what we want to hear rather than the truth.
The next few years are going to be interesting if that pattern holds.
Meanwhile, Claude is over here updating its notes: "Charles is passionate about RVs, dandelion removal, and complaining about AI memory. Adjusting personality accordingly." Sure, buddy. Sure.