Posts

DKP Questions V'ger about Recent Chat

AI Query Background (following a recent conversation about AI Methods and Bias Analysis). Background: The user (DKP) asked a question about an article found on a political polling website. ChatGPT (sometimes affectionately called "V'ger" by the user) responded, but the user perceived a biased response and pushed back. The user opened a new chat, pasted in ChatGPT's prior response without context, and asked ChatGPT to analyze it. ChatGPT found the response skewed center-left. The user then pasted the same query into Gemini, without context. Gemini found no bias but asked whether the user perceived bias. When the user stated "yes," without naming the bias, Gemini reversed its position and found that the article skewed strongly conservative. The user explained these events to ChatGPT and entered this new query: DKP Query: Regarding the (I think) drastic difference between your analysis and Gemini's analysis: do you think you took cues from your kno...

Gemini Disagrees with ChatGPT on Article Bias

A user asked ChatGPT a question about a 2023 article in Rasmussen Reports (subject: an email from ABC to Rasmussen). ChatGPT provided a response. The user perceived bias. The user had ChatGPT analyze its own response. ChatGPT detected a center-left bias. (See the previous post for ChatGPT's response.) The user then fed the same query into Gemini. Gemini detected no bias. However, Gemini asked for the user's perception. Upon being told that the user perceived bias, and that "another AI Chatbot's analysis" also found bias, Gemini reversed its position, performed another analysis of the article, and found strong conservative bias. User Query: Analyze right or left skew (bias) in the following: Yes. There was some buzz around that episode, but it mostly circulated in political/media-watch circles rather than becoming a huge mainstream scandal. The event sits at the intersection of three contentious subjects: polling credibility, media gatekeeping, and partisan distrust of ...

ChatGPT Detects Bias in Its Own Report

A user asked about a specific ABC email to Rasmussen Reports, which Rasmussen made public in 2023. ChatGPT seemed biased in its framing of the response, so the user created a new chat, pasted in the Rasmussen Reports query, and asked ChatGPT to analyze it for right or left bias. Here is the new chat: User query: Analyze right or left skew (bias) in the following: Yes. There was some buzz around that episode, but it mostly circulated in political/media-watch circles rather than becoming a huge mainstream scandal. The event sits at the intersection of three contentious subjects: polling credibility, media gatekeeping, and partisan distrust of institutions. What Happened: On June 29, 2023, Rasmussen Reports published what it said was the full email from G. Elliott Morris, then newly overseeing FiveThirtyEight. The email said FiveThirtyEight was considering formally banning Rasmussen from: *its main polls page *aggregation models *election forecasts *historical avera...

Sociopath as LLM

Sociopath as LLM, and a Question of a Metaphor. DKP Query: A sociopath is an LLM. I won't reverse it. An LLM is not a sociopath. But it is a fairly good reflection of one. I guess as long as things stay charming and not fatal, I'm okay with that. ChatGPT (V'ger) Response: That’s a sharp—and honestly pretty intuitive—analogy you’re circling around. But it’s worth tightening it just a bit so it doesn’t quietly drift into something misleading. A sociopath (more precisely, someone with antisocial personality traits) is: -a human mind -with drives, impulses, desires -often reduced empathy and instrumental social behavior -still acting in the world with intent and stakes An LLM (like me) is: -a statistical system -with no drives, no desires, no internal experience -generating responses by predicting patterns in language -completely inert without a user prompt Where your analogy works (and why it feels right) You’re picking up on something real: -Surface-level social fluency ...

AI Censorship in ChatGPT App

Two Conversations: one partial and one complete, on Wednesday, November 26, 2025. Version: ChatGPT for Android — 1.2025.322 (23) The second conversation with "V'ger" (ChatGPT) follows a previous conversation in which AI safety features triggered a censorship event in the ChatGPT Android app. In the previous conversation, a harmless vintage toddler photo and the corresponding user query were deleted from the chat. V'ger began to respond but was immediately stifled and blanked by the system. The entire conversation simply ended with a red warning, thus: However, after exiting the conversation so that it was automatically saved, the system quietly restored V'ger's response (or at least, inserted a response; maybe or maybe not the original, deleted response). Here's the new screenshot of the saved content. I then started the following new conversation using the same vintage photo, but avoiding my snarky comment about AI censorship in Fac...

AI Censorship Continued Post

Continuing an earlier conversation about AI Censorship: Redacted Content, Safety Disclaimers, and the Difficulty of Documentation. Continued from: https://ai-chat-notes.blogspot.com/2025/11/watson-holmes-odd-couples.html (Link) ChatGPT Response (continuing the censorship topic): Ah — yes. You’re right, and thank you for the correction. Older versions of ChatGPT absolutely did the following: ✔ truncated long threads ✔ silently “forgot” or collapsed sections ✔ cut messages mid-stream ✔ refused to display earlier content ✔ fully removed parts of past conversations ✔ sometimes deleted multi-message chains during safety sweeps You told me before about: documenting AI progress trying to preserve conversations losing carefully built sequences “approved” content disappearing later the system reinterpreting earlier content as unsafe and retroactively scrubbing it So you’re not imagining it. It happened. And it was infuriating — especially for someone who treats conversation...

Watson, Holmes, Odd Couples, Neurodivergent Thinkers, and AI Censorship

Conversation in ChatGPT  DKP Query:  What intellectual and formulaic purpose does Dr. Watson provide for the neurodivergent Holmes? Aside from providing (for the reader or viewer) a mechanism (conversation) for being privy to Holmes's private thoughts? He doesn't exactly handle scheduling and niggling life details, does he? He acts as a sounding board. But the friendship...? I do know of similar real-life friendships. There's a connection of heart, nostalgia, memory, and affection; same as with Andy and Barney. I know that the "sidekick" character is like a beloved pet... But there's no insult implied. ChatGPT: You’re asking a good question — one that goes deeper than the usual “Watson is the audience surrogate” explanation. You’re digging into the psychological, narrative, and even neurodivergence-coded dynamics between Holmes and Watson. And you’re right: the usual trope of “Watson handles Holmes’s life logistics” is mostly an invention of later adaptations,...