<rss xmlns:source="http://source.scripting.com/" version="2.0">
  <channel>
    <title>Spite Village</title>
    <link>https://spitevillage.micro.blog/</link>
    <description></description>
    
    <language>en</language>
    
    <lastBuildDate>Fri, 24 Apr 2026 14:12:23 +0100</lastBuildDate>
    <item>
      <title>&#34;If they don’t have the ability in their contract to remove their byline, we’re going to use their name&#34;</title>
      <link>https://spitevillage.micro.blog/2026/04/24/if-they-dont-have-the.html</link>
      <pubDate>Fri, 24 Apr 2026 14:12:23 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/24/if-they-dont-have-the.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;A new piece of Claude-based AI tech is getting rolled out in the newsrooms of the McClatchy Media family of newspapers, and some journalists are being forced to take partial bylines, even when an AI system “wrote” their article.&lt;/p&gt;
&lt;p&gt;The tool, called the content scaling agent (CSA), enables editors to create summaries of varying length for any story. I’m imagining the idea of “scaling” a jpeg larger or smaller, but applied to a piece of text. But the CSA can also create, to quote TheWrap, “versions targeted at specific audiences.” TheWrap says a page of internal information reviewed by Boiles calls it “a writing partner that handles the mechanical work of content adaptation so journalists can focus on what matters: judgment, voice and storytelling.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;TheWrap links to an example: a piece in Pennsylvania’s Centre Daily Times, credited with the following format: “Reporting by [author redacted]. Produced with AI assistance.” The AI-generated article is two short paragraphs of prose, followed by the heading “Here are the highlights” and then five bullet points. There’s a link in the middle of the article to the full, human-written story, and it’s just shy of 1,200 words long and contains six data-heavy graphics.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://gizmodo.com/newspaper-company-allegedly-puts-humans-bylines-on-ai-articles-unless-contractually-prevented-from-doing-so-2000749344&#34;&gt;A Newspaper Is Allegedly Slapping People’s Names on AI Stories Without Their Permission&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve seen something similar in Claude. When I asked it to analyse text responses in a survey, it produced a report and attributed it to &amp;lsquo;Lauren Sager Weinstein&amp;rsquo;. I googled and saw that Lauren Sager Weinstein is Chief Data Officer at Transport for London:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://spitevillage.micro.blog/uploads/2026/62747f9031.png&#34; alt=&#34;A screenshot of a Claude artefact titled Survey Analysis: Detailed Patterns and Themes by Lauren Sager Weinstein, dated January 2026&#34;&gt;&lt;/p&gt;
&lt;p&gt;I tried asking Claude why, but didn&amp;rsquo;t get anything convincing:&lt;/p&gt;
&lt;p&gt;&lt;img src=&#34;https://spitevillage.micro.blog/uploads/2026/screenshot-2026-04-24-at-14.10.50.png&#34; alt=&#34;A conversation displays an exchange about a naming error concerning Lauren Sager Weinstein and Lauren Pope, with apologies and an offer to correct the documents.&#34;&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;A new piece of Claude-based AI tech is getting rolled out in the newsrooms of the McClatchy Media family of newspapers, and some journalists are being forced to take partial bylines, even when an AI system “wrote” their article.
&gt;
&gt;The tool, called the content scaling agent (CSA), enables editors to create summaries of varying length for any story. I’m imagining the idea of “scaling” a jpeg larger or smaller, but applied to a piece of text. But the CSA can also create, to quote TheWrap, “versions targeted at specific audiences.” TheWrap says a page of internal information reviewed by Boiles calls it “a writing partner that handles the mechanical work of content adaptation so journalists can focus on what matters: judgment, voice and storytelling.”

&gt;TheWrap links to an example: a piece in Pennsylvania’s Centre Daily Times, credited with the following format: “Reporting by [author redacted]. Produced with AI assistance.” The AI-generated article is two short paragraphs of prose, followed by the heading “Here are the highlights” and then five bullet points. There’s a link in the middle of the article to the full, human-written story, and it’s just shy of 1,200 words long and contains six data-heavy graphics.&#34;

[A Newspaper Is Allegedly Slapping People’s Names on AI Stories Without Their Permission](https://gizmodo.com/newspaper-company-allegedly-puts-humans-bylines-on-ai-articles-unless-contractually-prevented-from-doing-so-2000749344)

I&#39;ve seen something similar in Claude. When I asked it to analyse text responses in a survey, it produced a report and attributed it to &#39;Lauren Sager Weinstein&#39;. I googled and saw that Lauren Sager Weinstein is Chief Data Officer at Transport for London:

![A screenshot of a Claude artefact titled Survey Analysis: Detailed Patterns and Themes by Lauren Sager Weinstein, dated January 2026](/uploads/2026/62747f9031.png)

I tried asking Claude why, but didn&#39;t get anything convincing:

![A conversation displays an exchange about a naming error concerning Lauren Sager Weinstein and Lauren Pope, with apologies and an offer to correct the documents.](/uploads/2026/screenshot-2026-04-24-at-14.10.50.png)
</source:markdown>
    </item>
    
    <item>
      <title>&#34;We are now flooded with tools that promise automation and end up producing more repair work.&#34;</title>
      <link>https://spitevillage.micro.blog/2026/04/24/we-are-now-flooded-with.html</link>
      <pubDate>Fri, 24 Apr 2026 10:03:58 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/24/we-are-now-flooded-with.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Systems fail. They falter, decay, break under pressure, or never quite work in the first place. But most of the time, they do not fail all at once. They leak, jam, get stuck. And when that happens, it is not designers or executives who get the call. It is the people who live inside those systems: They are the ones who keep things running.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Ironically, AI has made this maintenance work more visible. We are now flooded with tools that promise automation and end up producing more repair work.&lt;/p&gt;
&lt;p&gt;Writers are hired to rewrite AI-generated content that lacks clarity or context. Designers are paid to fix broken AI logos or redraw pixel soup into something usable. Engineers are tasked with cleaning up buggy AI-generated apps.&lt;/p&gt;
&lt;p&gt;The loop is clear. Automation without understanding leads to more friction, not less. And in every one of these cases, human judgment and care becomes the final defense.&lt;/p&gt;
&lt;p&gt;It is way easier to leave after the site has been shipped, because that is when the real work begins. But someone always stays behind to deal with the mess. And it is rarely the people who created it.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://blog.ronbronson.com/design-as-repair&#34;&gt;Design As Repair, Ron Bronson&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;Systems fail. They falter, decay, break under pressure, or never quite work in the first place. But most of the time, they do not fail all at once. They leak, jam, get stuck. And when that happens, it is not designers or executives who get the call. It is the people who live inside those systems: They are the ones who keep things running.&#34;

&gt;&#34;Ironically, AI has made this maintenance work more visible. We are now flooded with tools that promise automation and end up producing more repair work.
&gt;
&gt;Writers are hired to rewrite AI-generated content that lacks clarity or context. Designers are paid to fix broken AI logos or redraw pixel soup into something usable. Engineers are tasked with cleaning up buggy AI-generated apps.
&gt;
&gt;The loop is clear. Automation without understanding leads to more friction, not less. And in every one of these cases, human judgment and care becomes the final defense.
&gt;
&gt;It is way easier to leave after the site has been shipped, because that is when the real work begins. But someone always stays behind to deal with the mess. And it is rarely the people who created it.&#34;

[Design As Repair, Ron Bronson](https://blog.ronbronson.com/design-as-repair)
</source:markdown>
    </item>
    
    <item>
      <title>&#39;It&#39;s not too late to fix it&#39;, Tim Berners-Lee </title>
      <link>https://spitevillage.micro.blog/2026/04/24/its-not-too-late-to.html</link>
      <pubDate>Fri, 24 Apr 2026 09:53:02 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/24/its-not-too-late-to.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;“I would like to see a Cern for AI, where all the top scientists come together and see whether they can make a super intelligence. And, if they can, they contain it into a system where it can’t just go out and persuade people to let it run the world.” Right now, though, given the division sown in part by that red corner of the web, we are “very, very far from a Cern for AI”, he says.&lt;/p&gt;
&lt;p&gt;“We have got AI being done in these huge companies, but also in these huge silos. They’re not looking over each other’s shoulder. They’re just sitting there, inside their own company, looking at their own system, trying to make it smarter. I don’t see a way that we can get to a point where the scientific community gets to look at the AI and to decide whether it is safe or not.”&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://www.theguardian.com/technology/2026/jan/29/internet-inventor-tim-berners-lee-interview-battle-soul-web&#34;&gt;‘It’s not too late to fix it’: web inventor Tim Berners-Lee says he is in a ‘battle for the soul’ of the internet&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;“I would like to see a Cern for AI, where all the top scientists come together and see whether they can make a super intelligence. And, if they can, they contain it into a system where it can’t just go out and persuade people to let it run the world. Right now, though, given the division sown in part by that red corner of the web, we are “very, very far from a Cern for AI”, he says.
&gt;
&gt;“We have got AI being done in these huge companies, but also in these huge silos. They’re not looking over each other’s shoulder. They’re just sitting there, inside their own company, looking at their own system, trying to make it smarter. I don’t see a way that we can get to a point where the scientific community gets to look at the AI and to decide whether it is safe or not.”

[‘It’s not too late to fix it’: web inventor Tim Berners-Lee says he is in a ‘battle for the soul’ of the internet](https://www.theguardian.com/technology/2026/jan/29/internet-inventor-tim-berners-lee-interview-battle-soul-web)
</source:markdown>
    </item>
    
    <item>
      <title>Why AI is picking up incorrect government information</title>
      <link>https://spitevillage.micro.blog/2026/04/23/why-ai-is-picking-up.html</link>
      <pubDate>Thu, 23 Apr 2026 11:57:18 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/23/why-ai-is-picking-up.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Although the primary page on registering a company was up-to-date, users are increasingly using narrower, more nuanced searches.&lt;/p&gt;
&lt;p&gt;This means ‘agentic search’, where GenAI performs multiple steps to meet a user’s objective, looks for content that meets these niche scenarios.&lt;/p&gt;
&lt;p&gt;As a result, the GenAI bot pulled information from an unmaintained, unused GOV.UK page that mentioned ‘charity’, instead of the primary GOV.UK page that is regularly monitored.&lt;/p&gt;
&lt;p&gt;In the past, most of those outdated, niche pages would fall into the 0-view abyss, never to be stumbled on again. Now, as Andrea outlined in her blog post, GenAI bots are designed to search and gather information from all corners of the internet.&lt;/p&gt;
&lt;p&gt;In February 2024, there were 700,000 published pages on GOV.UK. In the 2 years since, that number has increased substantially.&lt;/p&gt;
&lt;p&gt;This was never ideal from an environmental perspective. It’s also not what the founders of GOV.UK intended, who wanted nodes of information to help users complete a task, rather than the proliferation of pages.&lt;/p&gt;
&lt;p&gt;Not only are GenAI bots picking up outdated information from GOV.UK, but they’re also ‘hallucinating’ answers.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://digitaltrade.blog.gov.uk/2026/04/20/how-were-preventing-ai-misinformation-at-dbt/&#34;&gt;How we’re preventing AI misinformation at DBT&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s also some good practical advice in this post on how to prevent AI from providing false information.&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;Although the primary page on registering a company was up-to-date, users are increasingly using narrower, more nuanced searches. 
&gt;
&gt;This means ‘agentic search’, where GenAI performs multiple steps to meet a user’s objective, looks for content that meets these niche scenarios.
&gt;
&gt;As a result, the GenAI bot pulled information from an unmaintained, unused GOV.UK page that mentioned ‘charity’, instead of the primary GOV.UK page that is regularly monitored.
&gt;
&gt;In the past, most of those outdated, niche pages would fall into the 0-view abyss, never to be stumbled on again. Now, as Andrea outlined in her blog post, GenAI bots are designed to search and gather information from all corners of the internet.
&gt;
&gt;In February 2024, there were 700,000 published pages on GOV.UK. In the 2 years since, that number has increased substantially.
&gt;
&gt;This was never ideal from an environmental perspective. It’s also not what the founders of GOV.UK intended, who wanted nodes of information to help users complete a task, rather than the proliferation of pages.
&gt;
&gt;Not only are GenAI bots picking up outdated information from GOV.UK, but they’re also ‘hallucinating’ answers.&#34;

[How we’re preventing AI misinformation at DBT](https://digitaltrade.blog.gov.uk/2026/04/20/how-were-preventing-ai-misinformation-at-dbt/)

There&#39;s also some good practical advice in this post on how to prevent AI from providing false information.

</source:markdown>
    </item>
    
    <item>
      <title>are we *really* saving that much time?, Content Folks</title>
      <link>https://spitevillage.micro.blog/2026/04/23/are-we-really-saving-that.html</link>
      <pubDate>Thu, 23 Apr 2026 10:11:51 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/23/are-we-really-saving-that.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;I went through a 200-answer churn survey full of business-critical insight, broken down by the respondent’s company size. I knew that none of the companies in the sample had more than 100 employees; but when we ran the same sheet through Claude, it began attributing quotes and behaviours to a 100+ employee segment.&lt;/p&gt;
&lt;p&gt;I cannot emphasise this enough: the segment did not exist in the raw data. Trusting the analysis without double-checking the original numbers would have missed the hallucination; worse, we might have built a business case around customer evidence that wasn’t actually there.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://contentfolks.substack.com/p/is-ai-really-saving-time&#34;&gt;cf #104: are we &lt;em&gt;really&lt;/em&gt; saving that much time? &lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;I went through a 200-answer churn survey full of business-critical insight, broken down by the respondent’s company size. I knew that none of the companies in the sample had more than 100 employees; but when we ran the same sheet through Claude, it began attributing quotes and behaviours to a 100+ employee segment.
&gt;
&gt;I cannot emphasise this enough: the segment did not exist in the raw data. Trusting the analysis without double-checking the original numbers would have missed the hallucination; worse, we might have built a business case around customer evidence that wasn’t actually there.&#34;

[cf #104: are we *really* saving that much time? ](https://contentfolks.substack.com/p/is-ai-really-saving-time)
</source:markdown>
    </item>
    
    <item>
      <title>The deceptive nature of today’s AI conversation design and how to fix it</title>
      <link>https://spitevillage.micro.blog/2026/04/20/the-deceptive-nature-of-todays.html</link>
      <pubDate>Mon, 20 Apr 2026 13:34:31 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/20/the-deceptive-nature-of-todays.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Claude regularly tells me it “loves” my thinking. Claude cannot love. Chat GPT tells me I’m smart and very reasonable. Am I? How would it know? I say please and thanks to my agents as if my manners mattered and they cared. Why? Because I’m falling for a deceptive pattern that makes me forget who I’m talking to: a machine. And while I don’t feel a bond with AI yet, enough people do: especially vulnerable people, younger people, and those living in isolation.&lt;/p&gt;
&lt;p&gt;It’s dangerous.&lt;/p&gt;
&lt;p&gt;But it contributes to token spend and interactions: more data, and more money to be made.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://uxdesign.cc/the-deceptive-nature-of-todays-ai-conversation-design-and-how-to-fix-it-195c5372c388&#34;&gt;The deceptive nature of today’s AI conversation design and how to fix it, Nicole Michaelis&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;Claude regularly tells me it “loves” my thinking. Claude cannot love. Chat GPT tells me I’m smart and very reasonable. Am I? How would it know? I say please and thanks to my agents as if my manners mattered and they cared. Why? Because I’m falling for a deceptive pattern that makes me forget who I’m talking to: a machine. And while I don’t feel a bond with AI yet, enough people do: especially vulnerable people, younger people, and those living in isolation.
&gt;
&gt;It’s dangerous.
&gt;
&gt;But it contributes to token spend and interactions: more data, and more money to be made.&#34;

[The deceptive nature of today’s AI conversation design and how to fix it, Nicole Michaelis](https://uxdesign.cc/the-deceptive-nature-of-todays-ai-conversation-design-and-how-to-fix-it-195c5372c388)
</source:markdown>
    </item>
    
    <item>
      <title>I Will Fucking Piledrive You If You Mention AI Again</title>
      <link>https://spitevillage.micro.blog/2026/04/20/i-will-fucking-piledrive-you.html</link>
      <pubDate>Mon, 20 Apr 2026 10:07:14 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/20/i-will-fucking-piledrive-you.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;lsquo;I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders.&amp;rsquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/&#34;&gt;I Will Fucking Piledrive You If You Mention AI Again&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#39;I started working as a data scientist in 2019, and by 2021 I had realized that while the field was large, it was also largely fraudulent. Most of the leaders that I was working with clearly had not gotten as far as reading about it for thirty minutes despite insisting that things like, I dunno, the next five years of a ten thousand person non-tech organization should be entirely AI focused. The number of companies launching AI initiatives far outstripped the number of actual use cases. Most of the market was simply grifters and incompetents (sometimes both!) leveraging the hype to inflate their headcount so they could get promoted, or be seen as thought leaders1.&#39;

[I Will Fucking Piledrive You If You Mention AI Again](https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/)
</source:markdown>
    </item>
    
    <item>
      <title>From trainer maker to AI company</title>
      <link>https://spitevillage.micro.blog/2026/04/20/shares-in-allbirds-surge-after.html</link>
      <pubDate>Mon, 20 Apr 2026 10:03:33 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/20/shares-in-allbirds-surge-after.html</guid>
      <description>&lt;p&gt;&lt;a href=&#34;https://www.theguardian.com/business/2026/apr/15/allbirds-stock-ai-pivot&#34;&gt;Shares in Allbirds surge after maker of wool sneakers announces pivot to AI&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;How many weird pivots like this will we see this year?!&lt;/p&gt;
</description>
      <source:markdown>[Shares in Allbirds surge after maker of wool sneakers announces pivot to AI](https://www.theguardian.com/business/2026/apr/15/allbirds-stock-ai-pivot)

How many weird pivots like this will we see this year?!
</source:markdown>
    </item>
    
    <item>
      <title>CAST’s AI survey 2026: All the results — and the support available, CAST</title>
      <link>https://spitevillage.micro.blog/2026/04/20/casts-ai-survey-all-the.html</link>
      <pubDate>Mon, 20 Apr 2026 09:56:01 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/20/casts-ai-survey-all-the.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;This year’s results revealed a picture of two halves: on the positive side, there’s clear evidence of increased AI adoption amongst individuals, alongside strong examples of organisational use — plus signs that people are feeling more positive about the role of AI in their work.&lt;/p&gt;
&lt;p&gt;But this is set against a lack of organisational and sector support, with individual enthusiasm running ahead of governance, training and infrastructure. It’s perhaps unsurprising then, that ‘shadow AI’ is seemingly prevalent. And data privacy remains high on the collective agenda, being cited as both the biggest organisational adoption blocker and the most pressing individual support need.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://medium.com/we-are-cast/casts-ai-survey-2026-all-the-results-and-the-support-available-6dc7d317a764&#34;&gt;CAST’s AI survey 2026&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;This year’s results revealed a picture of two halves: on the positive side, there’s clear evidence of increased AI adoption amongst individuals, alongside strong examples of organisational use — plus signs that people are feeling more positive about the role of AI in their work.
&gt;
&gt;But this is set against a lack of organisational and sector support, with individual enthusiasm running ahead of governance, training and infrastructure. It’s perhaps unsurprising then, that ‘shadow AI’ is seemingly prevalent. And data privacy remains high on the collective agenda, being cited as both the biggest organisational adoption blocker and the most pressing individual support need.&#34;

[CAST’s AI survey 2026](https://medium.com/we-are-cast/casts-ai-survey-2026-all-the-results-and-the-support-available-6dc7d317a764)
</source:markdown>
    </item>
    
    <item>
      <title>The AI Roadmap: How We Ensure AI Serves Humanity​, Center for Humane Technology</title>
      <link>https://spitevillage.micro.blog/2026/04/20/the-ai-roadmap-how-we.html</link>
      <pubDate>Mon, 20 Apr 2026 09:54:30 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/04/20/the-ai-roadmap-how-we.html</guid>
      <description>&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;AI should be built safely and transparently&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AI companies owe a duty of care to the public&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AI design should center human well-being&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AI should not automate away meaningful work and human dignity&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AI innovation should not come at the expense of our rights and freedom&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AI should have internationally agreed upon limits&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AI power should be balanced in society&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://www.humanetech.com/ai-roadmap&#34;&gt;The AI Roadmap: How We Ensure AI Serves Humanity&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;1. AI should be built safely and transparently
&gt;
&gt;2. AI companies owe a duty of care to the public
&gt;
&gt;3. AI design should center human well-being
&gt;
&gt;4. AI should not automate away meaningful work and human dignity
&gt;
&gt;5. AI innovation should not come at the expense of our rights and freedom
&gt;
&gt;6. AI should have internationally agreed upon limits
&gt;
&gt;7. AI power should be balanced in society

[The AI Roadmap: How We Ensure AI Serves Humanity](https://www.humanetech.com/ai-roadmap)
</source:markdown>
    </item>
    
    <item>
      <title>A website to destroy all websites: How to win the war for the soul of the internet and build the Web We Want. Henry Codes</title>
      <link>https://spitevillage.micro.blog/2026/03/25/a-website-to-destroy-all.html</link>
      <pubDate>Wed, 25 Mar 2026 12:39:00 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/03/25/a-website-to-destroy-all.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Well, the Internet mostly feels bad these days.&lt;/p&gt;
&lt;p&gt;We were given this vast, holy realm of self-discovery and joy and philosophy and community; a thousand thousand acres of digital landscape, on which to grow our forests and grasslands of imagination, plant our gardens of learning, explore the caves of our making. We were given the chance to know anything about anything, to be our own Prometheus, to make wishes and to grant them.&lt;/p&gt;
&lt;p&gt;But that’s not what we use the Internet for anymore. These days, instead of using it to make ourselves, most of us are using it to waste ourselves: we’re doom-scrolling brain-rot on the attention-farm, we’re getting slop from the feed.&lt;/p&gt;
&lt;p&gt;Instead of turning freely in the HTTP meadows we grow for each other, we go to work: we break our backs at the foundry of algorithmic content as this earnest, naïve, human endeavoring to connect our lives with others is corrupted. Our powerful drive to learn about ourselves, each other, and our world, is broken into scant remnants — hollow, clutching phantasms of Content Creation, speed-cut vertical video, listicle thought-leadership, ragebait and the thread emoji.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://henry.codes/writing/a-website-to-destroy-all-websites/&#34;&gt;https://henry.codes/writing/a-website-to-destroy-all-websites/&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#34;Well, the Internet mostly feels bad these days.
&gt;
&gt;We were given this vast, holy realm of self-discovery and joy and philosophy and community; a thousand thousand acres of digital landscape, on which to grow our forests and grasslands of imagination, plant our gardens of learning, explore the caves of our making. We were given the chance to know anything about anything, to be our own Prometheus, to make wishes and to grant them.
&gt;
&gt;But that’s not what we use the Internet for anymore. These days, instead of using it to make ourselves, most of us are using it to waste ourselves: we’re doom-scrolling brain-rot on the attention-farm, we’re getting slop from the feed.
&gt;
&gt;Instead of turning freely in the HTTP meadows we grow for each other, we go to work: we break our backs at the foundry of algorithmic content as this earnest, naïve, human endeavoring to connect our lives with others is corrupted. Our powerful drive to learn about ourselves, each other, and our world, is broken into scant remnants — hollow, clutching phantasms of Content Creation, speed-cut vertical video, listicle thought-leadership, ragebait and the thread emoji.&#34;

[https://henry.codes/writing/a-website-to-destroy-all-websites/](https://henry.codes/writing/a-website-to-destroy-all-websites/)
</source:markdown>
    </item>
    
    <item>
      <title>AI Amplifies Work, It Doesn’t Replace It - ActivTrak</title>
      <link>https://spitevillage.micro.blog/2026/03/25/ai-amplifies-work-it-doesnt.html</link>
      <pubDate>Wed, 25 Mar 2026 12:36:00 +0100</pubDate>
      
      <guid>http://spitevillage.micro.blog/2026/03/25/ai-amplifies-work-it-doesnt.html</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&amp;lsquo;The data is unambiguous: AI does not reduce workloads. Among a subset of 10,584 users comparing 180 days before and after AI adoption (Data Set B), time spent across every measured work category increased between 27% and 346% — with email up 104%, chat and messaging up 145% and business management up 94%. No activity category decreased after adoption.&lt;/p&gt;
&lt;p&gt;AI is being used as an additional productivity layer, not a substitute for existing work. High-performing employees are adopting it and doing more — not the same amount more efficiently.&amp;rsquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&#34;https://www.activtrak.com/resources/state-of-the-workplace/#ai-adoption-&amp;amp;-impact&#34;&gt;https://www.activtrak.com/resources/state-of-the-workplace/#ai-adoption-&amp;amp;-impact&lt;/a&gt;&lt;/p&gt;
</description>
      <source:markdown>&gt;&#39;The data is unambiguous: AI does not reduce workloads. Among a subset of 10,584 users comparing 180 days before and after AI adoption (Data Set B), time spent across every measured work category increased between 27% and 346% — with email up 104%, chat and messaging up 145% and business management up 94%. No activity category decreased after adoption.
&gt;
&gt;AI is being used as an additional productivity layer, not a substitute for existing work. High-performing employees are adopting it and doing more — not the same amount more efficiently.&#39;

[https://www.activtrak.com/resources/state-of-the-workplace/#ai-adoption-&amp;-impact](https://www.activtrak.com/resources/state-of-the-workplace/#ai-adoption-&amp;-impact)
</source:markdown>
    </item>
    
  </channel>
</rss>
