Posts

Showing posts from April, 2026

DKP Questions V'ger about Recent Chat

AI Query Background (following a recent conversation about AI Methods and Bias Analysis).  Background: The user (DKP) asked a question about an article found on a political polling website. ChatGPT (sometimes affectionately called "V'ger" by the user) responded, but the user perceived a biased response and pushed back.  The user opened a new chat, pasted in ChatGPT's prior response without context, and asked ChatGPT to analyze it. ChatGPT found that the response skewed center-left.  The user then pasted the same query into Gemini, also without context. Gemini found no bias but asked whether the user perceived bias. When the user answered "yes," without naming the bias, Gemini reversed its position and found that the article skewed strongly conservative.  The user explained these events to ChatGPT and entered this new query:  DKP Query: Regarding the (I think) drastic difference between your analysis and Gemini's analysis: do you think you took cues from your kno...

Gemini Disagrees with ChatGPT on Article Bias

A user asked ChatGPT a question about a 2023 article in Rasmussen Reports (subject: an email from ABC to Rasmussen). ChatGPT provided a response. The user perceived bias and had ChatGPT analyze its own response. ChatGPT detected a center-left bias. (See the previous post for ChatGPT's response.)  The user then fed the same query into Gemini. Gemini detected no bias. However, Gemini asked for the user's perception. Upon being told that the user perceived bias, and that "another AI Chatbot's analysis" also found bias, Gemini reversed its position, performed another analysis of the article, and found strong conservative bias.  User Query  Analyze right or left skew (bias) in the following: Yes. There was some buzz around that episode, but it mostly circulated in political/media-watch circles rather than becoming a huge mainstream scandal. The event sits at the intersection of three contentious subjects: polling credibility, media gatekeeping, and partisan distrust of ...

ChatGPT Detects Bias in Its Own Report

A user asked about a specific ABC email to Rasmussen Reports, which Rasmussen made public in 2023. ChatGPT's response seemed biased in its framing, so the user created a new chat, pasted in the Rasmussen Reports query, and asked ChatGPT to analyze it for right or left bias.  Here is the new chat:  User query:  Analyze right or left skew (bias) in the following: Yes. There was some buzz around that episode, but it mostly circulated in political/media-watch circles rather than becoming a huge mainstream scandal. The event sits at the intersection of three contentious subjects: polling credibility, media gatekeeping, and partisan distrust of institutions. What Happened: On June 29, 2023, Rasmussen Reports published what it said was the full email from G. Elliott Morris, then newly overseeing FiveThirtyEight. The email said FiveThirtyEight was considering formally banning Rasmussen from: * its main polls page * aggregation models * election forecasts * historical avera...